A comprehensive collection of academic research on generative AI in preK12 education organized into three categories:
Descriptive - Research that describes how generative AI is being used in classrooms, schools, or districts, or how products are designed and built.
Impact (includes RCT + Quasi-Experimental) - Studies that test how well something works, including but not limited to randomly dividing people into groups and comparing the results.
Review - Studies that combine and summarize all the research on a specific genAI topic to find patterns and answers.
We aim to include all research in the above categories on generative AI in preK12 education in the US. As research diverges from genAI for preK12 in the US - such as machine learning, education systems beyond preK12, or studies conducted outside the US - inclusion in the repository is based on relevance to our target audiences:
Superintendents, state, and federal K12 leaders
Education support organizations (unions, parent groups, etc.)
Leadership and product teams at technology companies
Academic researchers
Global education leaders
The Research Repository includes pre-publication works but does not include journalism on AI for education.
Our goal is to establish a dynamic community of practice that will challenge and positively shape the future of AI in education. Thank you for joining us; your contributions are valued and appreciated.
The AI in Education at Oxford University (AIEOU) interdisciplinary research hub is led by Dr Sara Ratner (Principal Investigator), Professor Rebecca Williams (Co-Investigator) and Professor Elizabeth Wonnacott (Co-Investigator), thanks to an award from the Social Sciences Division with the support of the Department of Education at the University of Oxford.
AIEOU aims to promote a research-informed, ethical, human-centered approach to AI in Education through collaboration and knowledge exchange. Working across the four pillars of design, regulation, implementation and impact, researchers at the University of Oxford will collaborate and convene with expert colleagues and key stakeholders from around the world to establish a shared research agenda. We seek to co-create a use case for AI in Education that represents best practice in quality teaching and learning.
Informing product leads and their teams of innovators, designers, and developers as they work toward safety, security, and trust while creating AI products and services for use in education.
The work of EDSAFE centers on the SAFE Benchmarks Framework as we engage stakeholders to align on equitable outcomes for all learners and improved working experiences for dedicated and innovative educators. We intend to clarify the urgency and the specific areas of need, preventing failures in data management that would compromise the potential of responsible AI as a lever for equity and innovation while protecting student privacy. Frameworks and benchmarks are important to innovation as a means of targeted guidance: they focus disparate efforts on shared objectives and outcomes and ensure the development of appropriate guidelines and guardrails.
What role does ChatGPT play in the higher education context? In this collection of links, the Hochschulforum Digitalisierung compiles relevant voices and perspectives.
Are you terrified that your job will be taken away by AI? If so, check out the first jobs that will be eliminated by AI. This will help you prepare.
Yuval Noah Harari speaks with Impact Theory host Tom Bilyeu. Yuval and Tom explore the potential implications of hacking humans, both the benefits and the risks.
L. He, M. Mavrikis, and M. Cukurova. Artificial Intelligence in Education: Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky, pages 327-333. Cham: Springer Nature Switzerland, 2024.