Why 90% of Schools Are Behind on AI Policy
- James Purdy
- Apr 11
- 6 min read

Key Takeaways
Despite AI's rapid integration into classrooms, only about 10-12% of North American school boards have implemented formal AI policies, creating significant governance gaps.
While schools debate policies, students have already embraced AI tools at astonishing rates—95% report grade improvements with AI tutoring and nearly one-third use AI for written assignments.
Without coherent policies, schools risk creating a new "AI divide" where inconsistent guidance leaves students and teachers navigating ethical uncertainties without institutional support.
The Growing Governance Gap in Education's AI Revolution
After two decades in education, I've watched countless technological "revolutions" sweep through schools with great fanfare. I've seen institutions transform overnight as students traded in their textbooks for tablets, with administrators proudly declaring the dawn of a "new era of learning." I've witnessed entire curricula reconstructed around educational apps and software, with schools earning prestigious innovation awards simply for buying into (and buying) the latest digital ecosystem.
In my own classroom, I remember the frenzy when interactive whiteboards first arrived. We received exactly three days of training before being expected to revolutionize our teaching. The principal would parade visitors through my classroom while I performed what I called my "digital dance"—demonstrating how students could now drag-and-drop parts of a sentence instead of, you know, writing them. Meanwhile, half the teachers couldn't reliably connect their laptops without the one tech-savvy colleague who became the building's unofficial IT support. Each new technology followed the same pattern: grand promises, hurried implementation, and teachers left scrambling to actually make it work.
The AI revolution is fundamentally different. Unlike previous educational technologies that required institutional buy-in and top-down support, AI adoption is being driven by learners themselves. While schools and governments debate policies and procedures, students have already incorporated AI into their daily academic workflows at astonishing rates.

The Urgent Need for AI Governance in Education
There is a concerning gap in educational AI governance. A comprehensive review by the Center on Reinventing Public Education found that as of early 2025, only about 10-12% of North American school boards have implemented formal AI policies. Among the 14,200 school boards across the United States and Canada, approximately 1,415 had established governance frameworks for artificial intelligence use in educational settings.
The geographical distribution reveals further disparities. In the United States, while 13 states had developed official guidance for schools by October 2023, two-thirds of states either had no plans to provide guidance or did not respond to inquiries about their approach. More recently, an EdWeek Research Center survey found that 79% of educators say their districts still do not have clear policies on AI use in education, even though 56% expect AI tool usage to increase in their districts over the next year.
This policy vacuum exists despite widespread adoption by students. Recent studies paint a striking picture of AI's rapid integration into student life:
- 95% of students report grade improvements using AI tutoring
- Nearly one-third already use AI for written assignments
- 90% prefer AI tutoring when available
- A study from Harvard University's Department of Physics found that students learn more than twice as much in less time with AI-supported learning compared to traditional methods
The High Stakes of Inaction
The consequences of this policy lag are significant. As Catherine Truitt, North Carolina's superintendent of public instruction, bluntly states: "The consequences of us ignoring it and sticking our heads in the sand is that students will game the system."
This concern is echoed by educational technology experts like Pat Yongpradit, who warns that without coherent policies, "You can have, in the same school, a teacher allowing their 10th grade English class to use ChatGPT freely... And then literally, right down the hall, you can have another teacher banning it totally." He describes this inconsistency as creating a new "digital divide [that] will be an AI divide."
Teachers find themselves caught in an impossible position – expected to police technology they barely understand while simultaneously being encouraged to leverage it for instruction, preparation, feedback, and marking. According to the EdWeek Research Center, 78% of educators report lacking the time or bandwidth to address AI appropriately alongside their existing responsibilities.
Meanwhile, educational institutions are scrambling to develop governance frameworks that could be outdated before implementation. The rapid pace of AI development means that policies written today based on current capabilities may become obsolete within months as new models and applications emerge.
The Need for Comprehensive Policies
Effective AI policies must address both student usage (academic integrity, digital literacy) and teacher implementation (lesson planning, assessment design, grading). Schools with comprehensive AI guidelines demonstrate improved learning outcomes while maintaining academic integrity, suggesting policy development is not just regulatory but pedagogically beneficial.
Some pioneering institutions are leading the way:
The Ottawa Catholic School Board provides an exemplary approach for K-12 students, with clear guiding principles that emphasize transparency: "When the use of AI is approved in student work, students will be expected to be clear and honest about AI's role in the work and properly cite its use."
Oxford University's guidelines mandate transparency, stating: "We will be open with our audiences about the use of AI in our work, including publishing these guidelines and using boilerplate labels where appropriate."
Harvard Business School's policy emphasizes that "students must review all AI-generated content very carefully, recognizing that they are ultimately responsible for the accuracy of any work they submit."
These examples illustrate that effective AI governance is possible and provide models that other institutions can adapt to their specific contexts.
The Dual Challenge
The challenge facing educational institutions is twofold: they must develop frameworks that address immediate concerns around academic integrity and appropriate AI use, while also preparing students for a future where AI proficiency will be an expected skill in higher education and the workplace.
As we'll explore in the next article in this series, this requires thoughtful approaches to student AI usage, including clear guidelines for when AI assistance is appropriate, how such use should be documented, and which skills remain fundamentally human even in an AI-enhanced learning environment.
Educational institutions cannot afford to wait for perfect solutions. The AI revolution in education is already underway, driven by students seeking more personalized and efficient learning experiences. Those institutions that develop thoughtful, balanced policies now will be better positioned to harness AI's benefits while preserving the essential human elements of education.
In our next article, we'll dive deeper into the student, teacher, and administrator perspectives, exploring how effective AI policies can provide essential guardrails rather than guesswork. We'll examine exemplary frameworks for academic integrity in an AI-enabled world and identify the critical questions every institution must address to support ethical AI use by students.
This is the first article in our six-part "AI in Education" series. In upcoming installments, we'll examine student academic integrity frameworks, teacher implementation strategies, AI-resistant pedagogies, future-proof policy development, and practical implementation approaches.
References
Artificial Intelligence and the Future of Teaching and Learning. (2023). U.S. Department of Education, Office of Educational Technology. https://www2.ed.gov/documents/ai-report/ai-report.pdf
Dusseault, B., & Lee, J. (2023, October). AI is Already Disrupting Education, but Only 13 States are Offering Guidance for Schools. Center on Reinventing Public Education. https://crpe.org/publications/ai-is-already-disrupting-education-but-only-13-states-are-offering-guidance-for-schools/
Intelligent.com. (2023, October). New Survey Finds Students Are Replacing Human Tutors With ChatGPT. https://www.intelligent.com/new-survey-finds-students-are-replacing-human-tutors-with-chatgpt/
Kestin, G., Miller, K., & Klales, A. (2024). AI Tutoring Outperforms Active Learning. Harvard University Department of Physics. https://doi.org/10.21203/rs.3.rs-4243877/v1
Klein, A. (2024, February 19). Schools Are Taking Too Long to Craft AI Policy. Why That's a Problem. Education Week. https://www.edweek.org/technology/schools-are-taking-too-long-to-craft-ai-policy-why-thats-a-problem/2024/02
Ottawa Catholic School Board. (2024). Artificial Intelligence at the OCSB. https://www.ocsb.ca/ai/
Oxford University. (2024, February 20). Guidelines on the use of generative AI. https://communications.admin.ox.ac.uk/guidelines-on-the-use-of-generative-ai
Partovi, H., & Yongpradit, P. (2024, January 18). AI and education: Kids need AI guidance in school. But who guides the schools? World Economic Forum. https://www.weforum.org/agenda/2024/01/artificial-intelligence-education-children-schools/