KEYNOTES
First Keynote confirmed: Chris Piech

The Next Educational Revolution: Grand Challenges for Learning @ Scale in the Generative AI Era
The community working on learning at scale has made tremendous progress over the last decade, successfully achieving many of our previously stated grand challenges. As we enter the Generative AI era, what ambitious new milestones should we pursue to make progress toward joyful, high-quality education at scale for all learners? This talk will explore several potential objectives, including scaling human teaching, developing effective generative AI tools, reaching new heights in student understanding, and addressing a major persistent constraint: student motivation.
Second Keynote confirmed:
Olga Viberg will represent L@S in a shared panel session.
Olga Viberg is an associate professor in Media Technology at the Department of Human-Centered Technology, School of Electrical Engineering and Computer Science at KTH, and a faculty member of Digital Futures, a national research center. Viberg’s research focuses on AI and learning analytics in education, self-regulated learning, digital assessment, cross-cultural research, and the responsible use of student data and technologies in education. Viberg served as the PC chair for Learning@Scale 2023 and EC-TEL 2023, and as the vice-president of SoLAR (Society for Learning Analytics Research). She has also contributed to UNESCO policy work on the quality of online education and teachers’ AI competency development. Currently she serves as Editor-in-Chief of the International Journal of Learning Analytics.
WORKSHOPS
Sixth Annual Workshop on A/B Testing and Platform-Enabled Learning Engineering (PELE)
April Murphy (Carnegie Learning), Stephen E. Fancsali (Carnegie Learning), Steve Ritter (Carnegie Learning), Neil Heffernan (Worcester Polytechnic Institute), Debshila Basu Mallick (Rice University), Jeremy Roschelle (Digital Promise), Danielle McNamara (Arizona State University), Joseph Jay Williams (University of Toronto), John Stamper (Carnegie Mellon University), Norman Bier (Carnegie Mellon University), Jeff Carver (University of Alabama)
Learning engineering applies data and learning science principles to better understand outcomes and support improvement research. One important approach is A/B testing. Several systems supporting A/B testing in educational applications have arisen recently, including UpGrade, E-TRIALS, and Terracotta. A/B testing can be part of the puzzle of how to improve educational outcomes in digital learning platforms. To that end, a set of learning platforms has opened their systems to improvement research, with supports necessary for education-specific research designs. This workshop will explore how A/B testing is conducted in educational contexts, how digital learning platforms open new possibilities for research, and how empirical approaches can be used to drive powerful gains in student learning. Papers and demos may address any topics associated with conducting A/B testing with digital learning platforms and learning engineering research.
Link to Workshop Website: https://sites.google.com/carnegielearning.com/pele-2025/home
Room: F170
Advancing the Science of Teaching with Tutoring Data: A Collaborative Workshop with the National Tutoring Observatory
Danielle R. Thomas (Carnegie Mellon University), Dorottya Demszky (Stanford University), Kenneth R. Koedinger (Carnegie Mellon University), Joshua Marland (Cornell University), Doug Pietrzak (FreshCognate), Justin Reich (Massachusetts Institute of Technology), Rachel Slama (Cornell University), Amalia Toutziaridi (Massachusetts Institute of Technology), René F. Kizilcec (Cornell University)
Join us at L@S 2025 for a full-day workshop exploring the science behind effective tutoring and teaching. While tutoring’s impact on student learning is well-documented, critical gaps remain in understanding the in-the-moment instructional moves that drive learning. This workshop, organized by the National Tutoring Observatory (NTO), will bring together researchers, educators, developers, and policymakers to address these challenges. A key focus will be the Million Tutor Moves dataset, an open-access resource leveraging AI to analyze tutoring interactions at scale. The agenda includes paper presentations, an interactive demo, and a panel discussion featuring leading voices in tutoring, AI, and learning analytics. We invite empirical and theoretical paper submissions related to (but not limited to): tutoring and teaching strategies, de-identification, modeling of student learning outcomes, and data sharing. Papers will undergo a single-blind review process and be evaluated based on alignment with the workshop theme, relevance, and overall quality. Authors of accepted papers will be invited to present their work.
Link to Workshop Website: https://sites.google.com/andrew.cmu.edu/nto/home
Room: F180
Learnersourcing: Student-generated Content @ Scale: 3rd Annual Workshop
Steven Moore (Carnegie Mellon University), Anjali Singh (University of Texas at Austin), Xinyi Lu (University of Michigan), Hyoungwook Jin (School of Computing, KAIST), Hassan Khosravi (The University of Queensland), Paul Denny (The University of Auckland), Christopher Brooks (University of Michigan), Xu Wang (University of Michigan), Juho Kim (School of Computing, KAIST), John Stamper (Carnegie Mellon University)
This hybrid full-day workshop at L@S 2025 is a unique venue to showcase work and initiatives related to learnersourcing and crowdsourcing in education. Learnersourcing is the practice of involving learners in creating or refining educational content, such as annotating explanations or generating questions, harnessing collective student insights to enhance educational resources. The rise of LLMs complements this practice by enabling new types of activities and facilitating a partnership in which the LLM provides feedback to the student and vice versa. While no submission is required to participate, we are accepting papers of 4–6 pages in length. We invite you to participate and submit learnersourcing papers on:
- Strategies for engaging and motivating student participation in learnersourcing activities
- Exploration of innovative learnersourcing content formats
- Methods for evaluating the quality of student-generated content
- Incentivizing high-quality student contributions
- Techniques for providing actionable feedback during the learnersourcing process
- Training students to develop high-quality resources
- Exploring models of co-creating content
- Leveraging LLMs to assist in the different stages of the learnersourcing process
Link to Workshop Website: https://sites.google.com/andrew.cmu.edu/learnersourcing
Room: F190