Accepted Workshops

Learnersourcing: Student-Generated Content @ Scale

Organizers: Steven Moore, Carnegie Mellon University; Anjali Singh, University of Michigan; Xinyi Lu, University of Michigan; Hyoungwook Jin, Korea Advanced Institute of Science & Technology; Paul Denny, The University of Auckland; Hassan Khosravi, The University of Queensland; Chris Brooks, University of Michigan; Xu Wang, University of Michigan; Juho Kim, Korea Advanced Institute of Science & Technology; John Stamper, Carnegie Mellon University

Description: The second annual workshop on Learnersourcing: Student-Generated Content @ Scale is taking place at Learning @ Scale 2024. This full-day hybrid workshop will feature invited speakers, interactive activities, paper presentations, and discussions as we delve into the field’s opportunities and challenges. Attendees will engage in hands-on development of learnersourcing activities suited to their own courses or systems and gain access to various learnersourcing systems and datasets for exploration. This workshop aims to foster discussion on new types of learnersourcing activities, strategies for evaluating the quality of student-generated content, the integration of LLMs with the field, and approaches to scaling learnersourcing to produce valuable instructional and assessment materials. We believe participants from a wide range of backgrounds and levels of prior knowledge can both benefit from and contribute to this workshop, as learnersourcing draws on work from education, crowdsourcing, learning analytics, data mining, ML/NLP, and many other fields. Additionally, because the learnersourcing process involves many stakeholders (students, instructors, researchers, instructional designers, etc.), multiple viewpoints can help inform what future and existing student-generated content might be useful, suggest new and better ways to assess the quality of that content, and spark potential collaborations between attendees. We ultimately want to show how everyone can make use of learnersourcing: participants will gain hands-on experience with learnersourcing tools such as RiPPLE and PeerWise, create their own learnersourcing activities using these tools or their own platforms, and discuss the next challenges and opportunities in the learnersourcing space.
Our hope is to attract attendees interested in scaling the generation of quality instructional and assessment content, as well as those interested in the use of online learning platforms.

Link to Submission:

AI-Driven Content Creation: Revolutionizing Educational Materials

Organizers: Miguel Morales-Chan, Galileo University; Héctor Amado-Salvatierra, Galileo University; Rocael Hernández Rizzardini, Galileo University

Description: The workshop starts with an Introduction to the foundational concepts of GEN AI (i), followed by an in-depth look at Effective Prompt Creation (ii), including strategies and the main features of impactful prompts. The session then proceeds to an exploration of ChatGPT and Bard (iii), guiding participants through the registration process and demonstrating basic to advanced prompt utilization. A classification of Effective Prompts specifically tailored for teaching is then examined (iv), showcasing twenty-five innovative ways to integrate ChatGPT into educational practices (v). The agenda further delves into the application of GEN AI tools for educational enhancement (vi), as well as for academic research (vii). The half-day workshop concludes with a critical discussion on the Challenges of Implementing GEN AI in Teaching Processes (viii), ensuring a well-rounded understanding of the subject matter.

Link to Submission:

Scaling Classrooms: A Forum for Practitioners Seeking, Developing and Adapting their Own Tools

Organizers: Gururaj Deshpande, Georgia Institute of Technology; Christopher Cui, Georgia Institute of Technology; Celeste Mason, Georgia Institute of Technology; Thad Starner, Georgia Institute of Technology

Description: Fueled by lucrative career benefits and the more recent widespread adoption of large language models such as GPT-4 and AI-powered applications, interest in computer science as a field has surged. This surge has transformed the classroom. Where instructors once engaged with only twenty to thirty students in a semester, enrollment numbers now task instructors with the education, assessment, and oversight of hundreds to thousands of learners in both physical and digital settings. Hiring additional teaching staff can help alleviate some of the burden, but it is still all too easy for classroom staff to be overwhelmed by administrative duties that could be efficiently and effectively managed through automated solutions. Furthermore, while the scaling challenges are rarely unique to any particular classroom, the patchwork solutions implemented by the teaching staff typically are, built in an attempt to return the teaching environment to that of the smaller classrooms familiar to and favored by most instructors. In this half-day workshop, we hope to begin a dialogue across classrooms about the challenges posed by a rapidly expanding student body. We plan to catalog a comprehensive list of these challenges, the tools internally developed to overcome them, and the barriers to external adoption of these tools for the learning community at large. We invite all classroom stakeholders, both instructors and students, to participate in this dialogue for the sake of improving the classroom experience for all (instructors from all grade levels and all disciplines are welcome to attend). To continue these conversations, we plan to create either a Slack or Discord channel and a mailing list with interested participants at the workshop. This channel will also serve to build a community of experts who can help other users interested in adopting new technologies that emerge from research.

Link to Submission:

Fifth Annual Workshop on A/B Testing and Platform-Enabled Learning Research

Organizers: Steve Ritter, Carnegie Learning; Stephen Fancsali, Carnegie Learning; April Murphy, Carnegie Learning; Neil Heffernan, Worcester Polytechnic Institute; Ben Motz, Indiana University; Debshila Basu Mallick, Rice University; Jeremy Roschelle, Digital Promise; Danielle McNamara, Arizona State University; Joseph Jay Williams, University of Toronto

Description: Learning engineering adds tools and processes to learning platforms to support improvement research. One such tool is A/B testing, which is common in large software companies and is also represented academically at venues like the Annual Conference on Digital Experimentation (CODE) and the International Consortium for Innovation and Collaboration in Learning Engineering (IEEE ICICLE). Recently, several A/B testing systems have arisen that focus on conducting research in educational environments, including UpGrade, Terracotta, and E-TRIALS. A/B testing can help improve educational platforms, yet there are challenging issues unique to conducting such work in these contexts. In response, a number of digital learning platforms have opened their systems to learning-improvement research by instructors and/or third-party researchers, with the specific supports necessary for education-specific research designs. This workshop will explore the challenges of A/B testing in educational contexts, how learning platforms are accelerating education research, and how empirical approaches can be used to drive powerful gains in student learning. It will also discuss opportunities for funding to conduct platform-enabled learning research.

Link to Submission: