Who Designs, Decides: Rethinking AI in Education Through the Lens of Co-creation

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization. 

By Wilson Wong
June 20, 2025

When OpenAI released ChatGPT in late 2022, it set off a wave of excitement and speculation across the global education sector. Yet, two years on, the much-anticipated transformation has failed to materialize.

Despite the rapid advancement of AI tools, their real-world integration into classrooms remains limited and uneven. According to an October 2024 Education Week national survey, while more teachers have undergone AI-related professional development, 58 percent still had received no training at all, and only 2 percent reported using generative AI tools frequently in their teaching. This stark contrast between the promises of AI and its actual usage highlights a growing disconnect.

A separate investigation published in Education Next sheds further light on this gap. Over the past decade, U.S. schools have widely adopted online math platforms like Khan Academy, DreamBox, i-Ready and IXL. While research suggests these tools can improve student performance when used as intended, only a tiny fraction of students, about 5 percent, actually meet the recommended usage guidelines. Most of the gains seen in studies are concentrated among motivated, high-performing students from more affluent backgrounds. This phenomenon, dubbed the “5 Percent Problem,” raises a troubling question: Are AI learning tools really helping close achievement gaps, or are they quietly widening them?

This challenge, where advanced tools benefit a small, advantaged subset of students while bypassing the majority, has profound implications for developed education systems. It suggests that the very students AI is supposed to help the most are those least likely to engage with or benefit from it.

Stanford University’s CS293/EDUC473 course, titled Empowering Educators via Language Technology, offers a comprehensive framework for understanding this disconnect. The course brings together students, educators, technologists and researchers to examine the limitations of current AI tools in real classroom settings. One of its core insights is that AI excels at automating standardized, quantifiable tasks, but education, especially in K–12 settings, is profoundly human, contextual and relational.

When designers build AI tools as efficiency engines, automating grading, generating feedback or simulating tutoring, they risk reducing education to a series of mechanical tasks. But learning is not just data processing. It involves failure, reflection, dialogue and growth. Even the most advanced AI models cannot truly understand why a student made a mistake, what kind of support they need or how to guide them empathetically through a learning journey.

The CS293 team also warns that many tools are designed for the few: those who already excel at navigating digital environments. The team notes that many AI educational tools are created by engineers and scientists who are themselves high-functioning learners. As a result, these tools often reflect the assumptions, habits and learning styles of their creators, leaving behind students who struggle with language, motivation or access.

This bias is not malicious; it is structural. Developers often lack deep experience in classrooms and unintentionally build tools that assume a level of digital fluency, linguistic sophistication and self-regulation that many students simply do not have. At the heart of these challenges lies a fundamental question: Who gets to design educational technologies, and who decides what counts as “good learning”? This question is more than philosophical; it is political, cultural and deeply practical.

Who designs, decides. AI systems do not arrive with neutral values; they encode the assumptions, priorities and blind spots of their creators. When decisions about how students should learn, what constitutes success and how performance is evaluated are made by engineers far removed from classrooms, the risk of misalignment is enormous.

Instead of top-down solutions, we need a new paradigm of co-design and co-creation, where teachers, students and communities are embedded in every stage of development. This approach acknowledges that educators are not mere users of technology; they are co-creators of educational meaning. They should be empowered to define the problems worth solving, the metrics that matter and the ways technology should serve rather than steer pedagogy.

This concern is especially urgent where AI adoption is accelerating. Developed countries across North America, Europe and Asia are most likely to adopt AI tools rapidly, often under political or market pressure to modernize. If we continue to outsource educational decision-making to private vendors or distant developers, we risk building systems that serve compliance more than creativity, efficiency more than equity. Without addressing the equity, usability and pedagogical alignment of these tools, we risk reinforcing systemic disparities.

The “5 Percent Problem” is not a technological failure. It is a design and implementation failure. It reminds us that access to tools is not the same as access to learning, and that digital solutions must be grounded in human realities.

The future of educational AI must be designed with, not merely delivered to, educators and students. The question of who holds the power to shape learning environments is foundational. AI can support learning, but only if we ensure that the right people (teachers, students and communities) are the ones deciding what learning should look like.

As AI continues to evolve, we would do well to remember: the future of education is not determined by code, but by values and by who gets to define them.


Author: Wilson Wong is the Founding Director and an Associate Professor of Data Science and Policy Studies (DSPS), School of Governance and Policy Science, at The Chinese University of Hong Kong (CUHK). He is also a Senior Research Fellow at the School of Management, UCL, and a Fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University. His major research areas include AI and Big Data, digital governance, ICT and comparative public administration.
