{"id":26,"date":"2025-11-26T16:43:49","date_gmt":"2025-11-26T16:43:49","guid":{"rendered":"https:\/\/learningatscale.hosting.acm.org\/las2026\/?page_id=26"},"modified":"2025-12-12T00:16:07","modified_gmt":"2025-12-12T00:16:07","slug":"call-for-papers","status":"publish","type":"page","link":"https:\/\/learningatscale.hosting.acm.org\/las2026\/call-for-papers\/","title":{"rendered":"Call for Papers"},"content":{"rendered":"<div class=\"et_pb_section_0 et_pb_section et_section_regular et_flex_section\">\n<div class=\"et_pb_row_0 et_pb_row et_flex_row\">\n<div class=\"et_pb_column_0 et_pb_column et-last-child et_flex_column_24_24 et_flex_column et_pb_css_mix_blend_mode_passthrough\">\n<div class=\"et_pb_text_0 et_pb_text et_pb_bg_layout_light et_pb_module\"><div class=\"et_pb_text_inner\"><p>&nbsp;<\/p>\n<h1><strong>Call for Papers<\/strong><\/h1>\n<p><span style=\"font-weight: 400;\">L@S investigates large-scale, technology-mediated learning environments that typically have many active learners and few experts on hand to guide their progress or respond to individual needs. Modern learning at scale typically draws on data at scale, collected from current learners and previous cohorts of learners over time. Large-scale learning environments are very diverse. Formal institutional education in K-16 and campus-based courses in popular fields involve many learners, relative to the number of teaching staff, and leverage varying forms of data collection and automated support. Evolving forms of online courses, mobile learning applications, intelligent tutoring systems, open courseware, learning games, citizen science communities, collaborative programming communities, community tutorial systems, shared critique communities, informal communities of learners, and curricular and administrative systems that support learner pathfinding are all examples of learning at scale. 
All share a common purpose to increase human potential, leveraging data collection, data analysis, human interaction, and varying forms of computational assessment, adaptation and guidance.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Research on learning at scale naturally brings together several research communities. Learning scientists are drawn to study established and emerging forms of knowledge development, transfer, modeling, and co-creation. Computer and data scientists are drawn to the specific and challenging needs for data collection, data sharing, analysis, computation, modeling, and interaction. The cornerstone of L@S is interdisciplinary research and progressive confluence toward more effective and varied future learning.<\/span><\/p>\n<p>&nbsp;<\/p>\n<h1><span style=\"font-weight: 400;\">Submissions<\/span><\/h1>\n<p><span style=\"font-weight: 400;\">The ACM Learning@Scale conference solicits original research paper submissions on methodologies, case studies, qualitative and quantitative analyses, tools, or technologies that can contribute to learning at scale, broadly construed. Three kinds of contributions will be accepted: Research Papers, Work-in-Progress, and Demonstrations. Accepted works must be presented at the conference and will be included in the proceedings (more details about these contributions can be found below).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Paper submission, reviewing, and notification to authors will be handled using the conference page at <\/span><b>EasyChair<\/b><span style=\"font-weight: 400;\"> (<\/span><a href=\"https:\/\/easychair.org\/conferences\/?conf=las2026\"><b>https:\/\/easychair.org\/conferences\/?conf=las2026<\/b><\/a><span style=\"font-weight: 400;\">). 
Submissions must be in PDF format, anonymized for double-blind review (see below), follow the <\/span><b>ACM 2-column proceedings template (<\/b><a href=\"https:\/\/www.acm.org\/binaries\/content\/assets\/publications\/word_style\/interim-template-style\/interim-layout.docx\"><b>Word<\/b><\/a><b> or <\/b><a href=\"https:\/\/www.overleaf.com\/latex\/templates\/association-for-computing-machinery-acm-sig-proceedings-template\/bmvfhcdnxfty\"><b>LaTeX<\/b><\/a><b>)<\/b><span style=\"font-weight: 400;\">, be written in English, contain original work, and not be under review for any other venue while under review for this conference. <\/span><b>The page limits for the different submissions exclude the pages used for references. <\/b><span style=\"font-weight: 400;\">For research papers, the abstract must be submitted before the final contribution. <\/span><b>The length of the abstract should not exceed 350 words.<\/b><\/p>\n<p><b><i>Anonymization policy for double-blind review<\/i><\/b><span style=\"font-weight: 400;\">: Submissions will be reviewed on the basis of <\/span><span style=\"font-weight: 400;\">originality, research quality, potential impact, and value to the development of future learning at scale. To ensure that papers are judged on their merits alone, the evaluation process will be double-blind. All submissions, with the exception of workshop proposals (see below), should be anonymous. Thus, papers submitted for review <\/span><b>MUST NOT<\/b><span style=\"font-weight: 400;\"> contain the authors\u2019 names, affiliations, or any information that may disclose the authors\u2019 identity (this includes names or logos of labs\/centers, tools, grant numbers, and acknowledgments). Identifying information is to be restored in the camera-ready version upon acceptance. Please replace author names and affiliations with Xs on submitted papers. 
In particular, in the version submitted for review please avoid explicit self-references, such as \u201cin [1] we show\u201d \u2014 consider instead \u201cin [1] it is shown\u201d or \u201cin [1] Smith et al. show &#8230;\u201d (citing yourself in the third person just as you would cite other researchers). You should definitely cite your own relevant previous work, so that a reviewer can access it and see the new contributions. However, the text should not explicitly state that the cited work belongs to the authors (i.e., do not use \u201cAuthor (year)\u201d to anonymize). Authors are permitted to post their work to public archives (e.g., arXiv, EdArXiv). Reviewers will be asked not to check archives during the review process.<br \/><\/span><span style=\"font-weight: 400;\"><br \/><\/span><b><i>Statement on Open Science<\/i><\/b><span style=\"font-weight: 400;\">: Authors are encouraged to conduct their scientific inquiry using emerging best practices in open science. In particular, authors are encouraged to pre-register their study design, hypotheses, and analysis plans, and publish these using platforms such as OSF.io (<\/span><a href=\"https:\/\/osf.io\/\"><span style=\"font-weight: 400;\">https:\/\/osf.io\/<\/span><\/a><span style=\"font-weight: 400;\">) or AsPredicted.org (<\/span><a href=\"https:\/\/aspredicted.org\/\"><span style=\"font-weight: 400;\">https:\/\/aspredicted.org\/<\/span><\/a><span style=\"font-weight: 400;\">). Whenever possible, feasible, and ethical, authors are encouraged to make their data, materials, and scripts openly available for inspection, replication, and follow-up analysis. 
The best way to share these materials is to use an established platform like <\/span><a href=\"https:\/\/osf.io\/\"><span style=\"font-weight: 400;\">OSF.io<\/span><\/a><span style=\"font-weight: 400;\">.\u00a0 When you submit your paper, there will be three questions in the submission form where you can indicate whether your study is pre-registered, whether you share your data, and whether you share your materials.\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><\/span><\/p>\n<h2><b>Research Papers<\/b><\/h2>\n<p><i><span style=\"font-weight: 400;\">Up to 10 pages (not including references). We recommend keeping the length of the papers proportional to the size and the scope of the research contribution. All appendices and supporting materials should be published in an online repository (e.g., OSF.io) and referenced in the main article. Please make sure that the materials adhere to blind-reviewing standards (e.g., use the \u201canonymous link\u201d function on OSF.io).<\/span><\/i><span style=\"font-weight: 400;\"><\/span><\/p>\n<p><span style=\"font-weight: 400;\">Abstract due &#8211; <\/span><b>February 9, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Papers due &#8211; <\/span><b>February 16, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><span style=\"font-weight: 400;\"><\/span><\/p>\n<p><span style=\"font-weight: 400;\">Learning@Scale 2026 solicits empirical and theoretical papers on, but not limited to, the following topics (in no particular order):\u00a0<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Instruction at scale: studies that examine how teachers and educators scale their instruction, what aspects of instruction could be scaled effectively, and which of these instructional strategies are the most effective for learning.\u00a0<\/span><\/li>\n<\/ul>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span 
style=\"font-weight: 400;\">Interventions at scale: studies that examine the effects of interventions on student learning and performance when implemented at scale. We welcome studies that use both qualitative and quantitative methods.\u00a0<\/span><\/li>\n<\/ul>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The use of generative AI to scale learning: studies that investigate stakeholders\u2019 experiences with generative AI, students\u2019 and teachers\u2019 interactions with generative AI, the potentials and limitations of using generative AI in education.\u00a0<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Systems and tools to support learning at scale: research that designs and develops systems supporting the learning process For example, this could involve the design or augmentation of MOOCs, visualization techniques, intelligent tutoring systems, gamification, immersive techniques (AR\/VR\/MR), mobile technologies, tangible interfaces, or administrative tools that affect student access and pathfinding.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The evaluation of existing learning at scale systems and online learning environments using but not limited to the above mentioned technologies.\u00a0<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Methods and algorithms that model learner behavior: research that contributes methods, algorithms, pipelines that process large student data to enhance learning at scale.\u00a0<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Scaling learning in informal contexts: studies that explore how people take advantage of online environments to pursue their interests informally.\u00a0<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Review 
and synthesis of existing literature related to learning at scale.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Empirical studies and interventions that address equity, trust, algorithmic transparency and explainability, fairness and bias when using AI in education.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Research that addresses accessibility in learning at scale contexts.\u00a0<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Design and deployment of learning at scale systems for learners from underrepresented groups.<\/span><\/span>\n<p>&nbsp;<\/p>\n<\/li>\n<\/ul>\n<h2><b>Work-in-Progress<\/b><\/h2>\n<p><i><span style=\"font-weight: 400;\">Up to 4 pages (not including r<\/span><\/i><i><span style=\"font-weight: 400;\">eferences). <\/span><\/i><i><span style=\"font-weight: 400;\">All appendices and supporting materials should be published in an online repository (e.g., OSF.io) and referenced in the main article. Please make sure that the materials adhere to blind-reviewing standards (e.g., use the \u201canonymous link\u201d function on <\/span><\/i><a href=\"http:\/\/osf.io\"><i><span style=\"font-weight: 400;\">OSF.io<\/span><\/i><\/a><i><span style=\"font-weight: 400;\">).<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">Due &#8211; <\/span><b>Feb 16, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><span style=\"font-weight: 400;\"><\/span><\/p>\n<p><span style=\"font-weight: 400;\">This year, WiP will follow the same timeline as the Research Papers track.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A Work-in-Progress <\/span><span style=\"font-weight: 400;\">(WiP) concisely summarizes recent findings or other types of innovative or thought-provoking work that has not yet reached a level of completion for a full paper. 
Areas of interest are the same as for full papers. At the conference, all accepted WiP submissions will be presented in poster form.<\/span><\/p>\n<p>&nbsp;<\/p>\n<h2><b>Demonstrations<\/b><\/h2>\n<p><i><span style=\"font-weight: 400;\">Up to 2 pages (not including references). All appendices and supporting materials should be published in an online repository (e.g., OSF.io) and referenced in the main article. Please make sure that the materials adhere to blind-reviewing standards (e.g., use the \u201canonymous link\u201d function on OSF.io).<\/span><\/i><i><span style=\"font-weight: 400;\"><\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">Due &#8211; <\/span><b>April 14, 2026 <\/b><span style=\"font-weight: 400;\">(11:59 PM AoE)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Demonstrations show aspects of learning at scale in an interactive, hands-on form. A live demonstration is a great opportunity to communicate ideas and concepts in a powerful way that a regular presentation cannot provide. We invite demonstrations of learning and analytical environments and other systems that have direct relevance to learning at scale. We especially encourage authors of accepted papers to showcase their technologies using this format.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A demonstration submission should address two components:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The merit and nature of the demonstrated technology. 
If the proposed demonstration is associated with a Full Paper or a WiP submission, please cite it in the demonstration 2-pager.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Details of how the demo will be executed in practice, and how visitors will interact with it during the conference.<br \/><\/span><span style=\"font-weight: 400;\"><\/span><\/li>\n<\/ol>\n<h2><b><\/b><\/h2>\n<h2><b>Workshops and Tutorials<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">This year, workshops and tutorials will be handled by the Festival of Learning. Please see their <\/span><a href=\"https:\/\/festival-of-learning-2026.info\/\"><span style=\"font-weight: 400;\">website<\/span><\/a><span style=\"font-weight: 400;\"> for details.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<h2><b>Open Access to Proceedings<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to <\/span><i><span style=\"font-weight: 400;\">two weeks<\/span><\/i><span style=\"font-weight: 400;\"> prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library <\/span><i><span style=\"font-weight: 400;\">after<\/span><\/i><span style=\"font-weight: 400;\"> the conference is over, the official publication date remains the first day of the conference).<\/span><\/p>\n<p>&nbsp;<\/p>\n<h1><span style=\"font-weight: 400;\">Important Dates<\/span><\/h1>\n<p><span style=\"font-weight: 400;\"><\/span><\/p>\n<h2><b>Research Papers &amp; Work-in-Progress<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Please note that this year there will NOT be any deadline extensions for any type of submission! 
All deadlines are final.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Abstract Submissions: <\/span><b>February 9, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 <\/span><span style=\"font-weight: 400;\">PM AoE)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Full Paper Submission: <\/span><b>February 16, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Author Notification: <\/span><b>April 7, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Camera-ready Version: <\/span><b>April 21, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><\/li>\n<\/ul>\n<h2><\/h2>\n<h2><b>Demonstrations<\/b><\/h2>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Paper Submission: <\/span><b>April 14, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Author Notification: <\/span><b>April 28, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Camera-ready Version: <\/span><b>May 08, 2026<\/b><span style=\"font-weight: 400;\"> (11:59 PM AoE)<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 
400;\">\u00a0<\/span><\/p>\n<\/div><\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"","protected":false},"author":4,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-26","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/pages\/26","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/comments?post=26"}],"version-history":[{"count":9,"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/pages\/26\/revisions"}],"predecessor-version":[{"id":95,"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/pages\/26\/revisions\/95"}],"wp:attachment":[{"href":"https:\/\/learningatscale.hosting.acm.org\/las2026\/wp-json\/wp\/v2\/media?parent=26"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}