Only three years after the public release of ChatGPT, generative AI, which includes any AI that creates original content, has become the most disruptive technology in the history of K-12 education. Students are using AI to write essays and evading detection by running those essays through “humanizers.” When kids do write on their own, much of their research is assisted by AI that often “hallucinates,” simply inventing quotes and lacing its answers with misinformation. It’s no surprise that students in these cases struggle to recognize misinformation: they haven’t actually done the research. All of this has led to deep concern about both academic integrity and diminishing critical thinking skills.
These are real concerns. And they must be addressed.

But, even as we work to mitigate this harm, generative AI is here to stay, and it will keep changing education. The question isn’t whether we can stop it, but how we can use it in schools in a way that helps students learn responsibly and prepares them for the future.
To help, more than half of state governments — which control most school decisions — have created standards on how public schools should use AI. Most emphasize AI literacy as an interdisciplinary approach to AI education that incorporates civics, jobs training, privacy, civil rights, and ethics alongside the development of technical know-how and the exploration of new creative possibilities.
There’s just one catch: Only two states, Ohio and Tennessee, mandate that districts implement these AI standards. In the rest, even well-researched, comprehensive guidance requires no action from districts; the frameworks stand as polite suggestions, offering no money for training or compliance. As a result, most districts still operate without board-approved AI policy or curriculum, and most students continue to receive little or no education on AI, even as the technology transforms the world around them.
Speaking to The Preamble, Anil Hurkadli, who previously led the Department of Education’s Office of Education Technology, explained, “AI is a general-purpose technology that will impact every aspect of our lives, and schools need to respond with an interdisciplinary approach that addresses AI in all its facets.” He continued, “With proper training for young people — helping them understand what responsible, ethical use looks like and helping them know the risks that AI poses — it can still lead to better outcomes for every learner.”
One district in metro Atlanta is leading the charge. Gwinnett County Public Schools has empowered a team of teachers to weave AI literacy into its pre-existing curriculum for students, where learners will grapple with many of the questions related to the use of AI, like data collection, sustainability, and algorithmic bias.

Gwinnett’s support extends to teachers as well. Each school has a designated innovation coach who supports teachers in using AI in the classroom. The district has also invested in platforms like MagicSchool AI to help teachers provide feedback, tailor lessons to student needs, create engaging materials, and protect student privacy.
Yet Gwinnett is an outlier. A more common story is that teachers feel underprepared and overwhelmed when it comes to AI. A nationwide EdWeek survey from last year found that 58% of teachers have yet to receive a single professional development session on the subject. Forbes reports that 69% of American teachers feel their schools have offered insufficient training on AI. Moreover, teachers harbor deep suspicion of the technology — largely stemming from concerns over academic integrity and critical thinking skills — with a recent Pew poll showing only 6% of educators believe AI offers greater upside than downside.
Overcoming these suspicions requires a multifaceted approach that responds to teacher concerns and trains them to tap the benefits of AI. First, every district needs a comprehensive guidance framework to establish clear policy on AI and academic integrity. Second, all teachers must have tools to block generative AI, like ChatGPT, at their discretion. And finally, if we hope to unlock AI’s potential while limiting its harm, greater investment is needed in developing AI literacy curricula and providing teacher training.
The failure to give teachers proper AI training, and the AI-literacy lessons that go with it, is not just the districts’ fault. It’s a complicated problem.
As Hurkadli explained, “A robust AI use policy takes work.” Although there are many free resources available, he said districts need financial support for beefed-up cybersecurity and often costly legal advice. And it’s not cheap to provide the sustained learning opportunities required for educating students, teachers, administrators, and board members in comprehensive AI literacy curricula. Nor is it without costs to weave AI-related issues into relevant courses on computer science, government, and English.

In districts without board-approved AI guidance and curricula, students receive a scattershot education in this critical new technology. At best, students in these districts, which enroll the majority of students in the United States, may have a teacher or two innovating with AI, but those teachers will not be working within a coherent, comprehensive district strategy.
Some teachers want to teach AI skills, but without support from their school district, doing so can be risky. Many of us are hesitant to address a range of AI-related issues, from algorithmic bias to the environmental impact of data centers and the corrosive potential of disinformation, for fear of eliciting complaints from the community and, without board-approved standards, being subject to reprisal.
This district-to-district disparity is creating a divide between haves and have-nots in AI education, one that is developing along familiar lines and exacerbating the pre-existing digital divide between suburban schools and low-income urban or rural schools. Robin Lake, director of Arizona State University’s Center on Reinventing Public Education, recently told NPR that an “AI divide is starting to show up in just about every major study,” including her own research, which finds that “suburban, majority white, and low-poverty districts are about twice as likely to provide AI training to teachers as are urban, rural or high-poverty districts.”

A new gap in skills will grow, hitting low-income and rural students the hardest, leaving them behind as AI disrupts roughly 1.1 billion jobs worldwide. If inaction continues, it will affect them for decades to come, widening the divide between those who are prepared and those who are not. The damage is likely to be political as well as economic. As citizens, these students could also be misled by a flood of AI-generated fake news and disinformation, which could hurt the trust our democracy depends on.
Although we cannot return to pre-AI educational norms, it’s not too late to imbue AI with our collective values. If we require comprehensive AI frameworks of every district and provide robust teacher training, we can prepare students as citizens and workers in the world of AI, giving us a chance to harness the benefits of this disruptive technology while limiting some of its harm.
We must act with urgency, because for as long as we fail to shape AI, it will keep shaping our students, our schools, and our society.