The Operational Gap in Educational AI Governance:
A Landscape Analysis of International Frameworks
AI Governance in Education Series
Memorandum No. 1
Ryan James Purdy
Purdy House Institute
December 2025
Working Paper
Abstract
The rapid deployment of artificial intelligence systems in educational settings has prompted significant policy development from international bodies, regional regulators, and national governments. This memorandum examines the current landscape of AI governance frameworks applicable to primary, secondary, and post-secondary education, identifying a consequential gap between policy articulation and institutional implementation. Through systematic analysis of frameworks from UNESCO, the OECD, UNICEF, the European Union, and various national jurisdictions, this paper establishes a four-category taxonomy: aspirational frameworks (which articulate principles), regulatory frameworks (which establish legal requirements), sector guidance (which provides education-specific recommendations), and operational frameworks (which supply implementation infrastructure). The analysis reveals that while the first three categories have proliferated, operational frameworks capable of translating policy into documented, auditable institutional practice remain largely absent from authoritative sources. This gap has significant implications for institutional compliance, insurability and liability exposure, and the protection of student welfare. The paper concludes that the education sector requires purpose-built operational infrastructure to bridge the distance between governance principles and demonstrable practice.
Keywords: AI governance, education policy, regulatory compliance, operational frameworks, EU AI Act, UNESCO, institutional implementation
Key Findings
This memorandum's analysis of AI governance frameworks applicable to education yields five principal findings:
A four-category taxonomy clarifies the governance landscape. AI governance frameworks serve distinct functions: aspirational frameworks articulate principles, regulatory frameworks establish legal requirements, sector guidance provides education-specific recommendations, and operational frameworks supply implementation infrastructure.
The first three categories have developed substantially; the fourth has not. International bodies, regulators, and sector associations have produced abundant guidance on AI governance principles and requirements. Operational frameworks providing implementation methodology remain largely absent from authoritative sources.
The gap is structural, not incidental. Authoritative bodies are not positioned to supply operational procedures. International organizations build consensus; regulators create enforceable requirements; sector associations provide advisory guidance. None occupies the role necessary to provide templates, calendars, role specifications, and audit documentation.
Institutions face a compliance demonstration problem. Educational institutions may govern AI responsibly in practice but lack the documentation infrastructure to demonstrate compliance to regulators, insurers, or oversight bodies.
Insurance and liability developments are transforming the gap from aspirational concern to operational risk. Emerging insurance endorsements addressing AI liability, combined with increasing underwriter attention to documented governance, create forcing functions that make operational infrastructure a business necessity.
Introduction
Between 2019 and 2025, international organizations, regional bodies, and national governments produced an unprecedented volume of guidance on artificial intelligence governance. UNESCO adopted the first global normative instrument on AI ethics, endorsed by 194 member states.1 The European Union enacted the AI Act, classifying educational AI systems as high-risk and establishing binding compliance obligations. The OECD refined its AI Principles, now endorsed by over fifty countries.2 UNICEF launched its Generation AI initiative with specific guidance on children's rights in AI systems. National governments from Singapore to Canada published frameworks, strategies, and directives addressing AI deployment across public services.
Educational institutions now operate within a dense regulatory environment. A secondary school in the European Union deploying AI-powered assessment tools must demonstrate compliance with the AI Act's human oversight requirements, the GDPR's data protection obligations, and applicable national education law. A North American district implementing generative AI for student support must navigate FERPA, COPPA, state-level AI guidance, and evolving judicial interpretations of algorithmic decision-making. The compliance surface has expanded dramatically, yet institutional capacity to address it has not kept pace.
This memorandum argues that the proliferation of AI governance frameworks, while necessary, has created a specific and consequential gap: the absence of operational infrastructure capable of translating policy principles into documented institutional practice. The frameworks that currently dominate the landscape articulate what institutions should achieve but rarely specify how they should achieve it. They establish legal requirements but do not provide the procedural architecture necessary to demonstrate compliance: the forms, calendars, role assignments, configuration guides, and audit documentation that constitute verifiable governance.
This gap is not merely inconvenient; it is consequential. Insurance underwriters increasingly require evidence of AI governance as a condition of coverage. Regulators expect documented compliance systems, not aspirational commitments. Parents and oversight bodies demand accountability that can be demonstrated, not merely asserted. The distance between governance principles and verifiable practice represents institutional risk that existing frameworks do not address.
To substantiate this argument, this paper proceeds in three parts. Section One establishes a taxonomy for classifying AI governance frameworks, distinguishing aspirational, regulatory, sector guidance, and operational categories. Section Two analyzes the major frameworks within each category, documenting their scope, requirements, and limitations. Section Three identifies the operational gap with specificity, illustrating through concrete examples what existing frameworks provide and what they omit. The paper concludes with observations on the implications of this gap for institutional practice. Memorandum No. 2 in this series will examine the forcing functions (regulatory timelines, insurance requirements, and liability exposure) that make addressing this gap urgent.
Scope and Methodology
This analysis examined AI governance frameworks issued by international bodies (UNESCO, OECD, UNICEF, World Bank), regional regulators (European Union), and national governments (United States, Canada, United Kingdom, Australia, Singapore, Japan) between 2019 and 2025. Frameworks were included if they addressed AI deployment in educational settings, either explicitly or through general provisions applicable to public institutions serving minors.
For purposes of this analysis, "operational infrastructure" is defined as implementation resources that enable documented, auditable compliance. This includes: procedural templates (intake forms, consent documentation, incident reports), governance calendars (review schedules, audit cycles, training timelines), role specifications (named responsibilities, authority boundaries, escalation paths), technical configuration guidance (platform hardening procedures, access control specifications, data retention settings), and evidence artifacts (audit logs, decision records, compliance attestations). A framework was classified as "operational" only if it provided resources in at least three of these five categories with sufficient specificity for direct institutional adoption.
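Expressed procedurally, the inclusion rule is simple. The following sketch (in Python, purely illustrative; the category names mirror the five defined above, and the three-of-five threshold is the one stated in the text) shows how a framework would be scored:

```python
# Sketch of the inclusion rule used in this analysis. Category names
# mirror the five operational-infrastructure categories defined above;
# the three-of-five threshold is the one stated in the text.
OPERATIONAL_CATEGORIES = (
    "procedural_templates",
    "governance_calendars",
    "role_specifications",
    "configuration_guidance",
    "evidence_artifacts",
)

def is_operational(framework: dict[str, bool], threshold: int = 3) -> bool:
    """True if the framework supplies institution-ready resources in at
    least `threshold` of the five operational categories."""
    return sum(framework.get(c, False) for c in OPERATIONAL_CATEGORIES) >= threshold

# Example: a framework supplying only templates and role specifications
# falls short of the operational classification.
partial = {"procedural_templates": True, "role_specifications": True}
print(is_operational(partial))  # False
```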
1. A Taxonomy of AI Governance Frameworks
Before analyzing specific frameworks, it is necessary to establish the categories that distinguish them. Not all governance documents serve the same function, and conflating their purposes obscures the gaps between them. This section proposes a four-category taxonomy based on the primary function each framework type performs: articulating principles, establishing legal requirements, providing sector-specific recommendations, or enabling implementation.
1.1 Aspirational Frameworks
Aspirational frameworks establish normative foundations for AI governance. They articulate principles, values, and ethical commitments that should guide AI development and deployment. Their authority derives from consensus-building among diverse stakeholders and their adoption by recognized international bodies. They shape discourse, inform regulatory development, and provide legitimacy for institutional action.
The paradigmatic example is the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), the first global normative instrument on AI adopted by the international community. Adopted by 194 member states, the Recommendation establishes principles including human oversight, transparency, fairness, and accountability. It explicitly addresses education, requiring that AI systems in educational settings respect human dignity, preserve agency, and avoid discrimination. The Recommendation's authority is substantial: it represents genuine international consensus on AI ethics and provides a normative baseline against which national policies can be evaluated.
However, aspirational frameworks are not designed to provide implementation methodology. The UNESCO Recommendation tells institutions that they must ensure AI systems avoid discriminatory bias, a critical requirement, but does not specify the technical configurations, procurement procedures, or documentation systems necessary to demonstrate compliance. It establishes that human oversight must be meaningful but does not define the role structures, review protocols, or audit trails that make oversight verifiable. This is not a criticism of the Recommendation; normative instruments operate at a level of generality appropriate to their function. The gap exists not because aspirational frameworks fail but because they were never intended to fill it.
Similar observations apply to the OECD AI Principles (2019, updated 2024), which establish human-centered values, transparency, robustness, and accountability as foundational commitments. The Principles have been adopted by over fifty countries and inform regulatory development globally. Yet they operate at the same level of abstraction: they articulate what responsible AI governance requires without specifying how institutions should achieve it. The OECD's subsequent implementation guidance provides greater detail but remains focused on policy development rather than operational procedure.
UNICEF's Policy Guidance on AI for Children (2021) and the Generation AI initiative apply these principles specifically to children's rights.3 The guidance addresses age-appropriate design, data minimization, and protection from commercial exploitation, essential considerations for educational AI. It provides more sector-specific direction than general frameworks but still operates at the policy level rather than the procedural level. An institution seeking to implement UNICEF's recommendations would find principles to guide decisions but not forms to document them, calendars to schedule reviews, or configuration guides to secure platforms.
Aspirational frameworks perform an essential function in the governance ecosystem. They establish legitimacy, build consensus, and provide normative foundations for regulatory and operational development. The gap this memorandum identifies is not a failure of aspirational frameworks but a recognition that additional infrastructure is required to translate their principles into institutional practice.
1.2 Regulatory Frameworks
Regulatory frameworks establish legally binding requirements for AI deployment. Unlike aspirational frameworks, they carry enforcement mechanisms: penalties for non-compliance, regulatory oversight, and judicial review. They translate normative principles into legal obligations and create accountability structures with real consequences.
The European Union's Artificial Intelligence Act (2024) represents the most comprehensive regulatory framework for AI governance currently in force.4 The Act classifies AI systems by risk level, with educational AI systems explicitly designated as high-risk under Annex III.5 This classification triggers substantial compliance obligations: institutions deploying AI in education must implement risk management systems (Article 9), ensure data quality (Article 10), maintain technical documentation (Article 11), enable human oversight (Article 14), and achieve appropriate levels of accuracy, robustness, and cybersecurity (Article 15).
The AI Act's requirements are specific and enforceable. Article 9 mandates that risk management be continuous throughout the AI system lifecycle, with documented identification, analysis, and mitigation of risks. Article 14 requires that human oversight measures enable individuals to fully understand the AI system's capacities and limitations, properly interpret its outputs, and decide not to use it or override its decisions. These are not aspirational suggestions; they are legal requirements subject to penalties reaching €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.6
Yet regulatory frameworks, including the AI Act, also exhibit the operational gap. The Act requires bias mitigation but provides no template for documenting disparate impact across protected student populations. It mandates human oversight but does not specify whether this requires a dedicated AI ethics officer, a quarterly review board, or real-time override capabilities. It requires risk management systems but does not supply the risk assessment templates, severity matrices, or documentation formats necessary to demonstrate compliance. Institutions know what they must achieve but not how to achieve it in ways that satisfy regulatory scrutiny.
This pattern repeats across regulatory jurisdictions. In North America, FERPA establishes data protection requirements for student records but does not address AI-specific governance procedures. COPPA restricts data collection from children under thirteen but provides limited guidance on AI system configuration. State-level AI guidance, where it exists, varies dramatically in specificity and enforceability. A 2024 analysis identified AI-related guidance in thirty-three U.S. states, yet most such guidance operates at the policy level, advising institutions to consider AI implications without specifying implementation procedures.7
Canada's proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, would have established requirements for high-impact AI systems, including those used in education; the bill lapsed when Parliament was prorogued in early 2025, but its approach remains indicative of Canadian regulatory direction. The legislation mandated risk assessment, bias mitigation, and human oversight: familiar requirements that create familiar gaps. The United Kingdom's approach, emphasizing sector-specific regulation rather than horizontal AI legislation, similarly establishes obligations without providing operational methodology.
Regulatory frameworks advance governance beyond aspiration by creating enforceable obligations. However, they presuppose operational infrastructure that does not yet exist in the education sector. Institutions face legal requirements they cannot systematically demonstrate meeting because the procedural architecture for compliance has not been developed or disseminated.
1.3 Sector Guidance
Between aspirational frameworks and regulatory requirements sits a third category: sector-specific guidance issued by professional associations, government agencies, and educational bodies. This guidance attempts to translate general principles into education-specific recommendations but typically stops short of operational specification.
UNESCO's Guidance for Generative AI in Education and Research (2023) exemplifies this category.8 The document addresses educational AI specifically, recommending age restrictions for generative AI platforms, emphasizing the importance of human agency in learning, and calling for institutional policies that protect student welfare. It provides more concrete direction than general ethical frameworks: institutions should validate AI tools before deployment, ensure age-appropriate access, and maintain human oversight of AI-influenced assessments. Yet the guidance remains at the recommendation level. It advises institutions to validate AI tools but does not provide validation checklists. It recommends age restrictions but does not specify the platform configurations that enforce them. It calls for documentation but does not supply documentation templates.
The U.S. Department of Education's AI Toolkit (2024) represents a national government's attempt to provide educational guidance.9 The Toolkit addresses AI in teaching, learning, and administration, offering recommendations on transparency, equity, and safety. It acknowledges the complexity institutions face and provides frameworks for thinking about AI governance. However, it does not provide implementation procedures. A superintendent reading the Toolkit would understand what considerations should inform AI policy but would not receive forms for vendor assessment, protocols for incident response, or calendars for governance review.
EDUCAUSE, the association serving higher education information technology, has produced substantial guidance on AI governance. Its resources address data governance, ethical considerations, and institutional policy development. The association's work provides valuable conceptual frameworks but, consistent with its advisory role, does not supply operational infrastructure. Similarly, state education departments that have issued AI guidance typically provide policy frameworks rather than implementation tools.
The pattern across sector guidance is consistent: increasing specificity about educational contexts combined with persistent abstraction about operational procedures. Guidance documents identify the right questions (How should institutions validate AI tools? What oversight structures are appropriate? How should incidents be documented?) without providing the forms, templates, and procedures that constitute answers.
1.4 Operational Frameworks: The Missing Category
The fourth category in this taxonomy is operational frameworks: resources that provide the procedural infrastructure necessary to translate principles and requirements into documented, auditable institutional practice. Operational frameworks supply what the preceding categories omit: implementation methodology.
An operational framework for educational AI governance would include procedural templates (vendor intake forms, consent documentation, incident report formats), governance calendars (60-day pilot structures, quarterly review cycles, annual audit schedules), role specifications (named AI governance leads, defined authority boundaries, documented escalation paths), technical configuration guidance (platform hardening procedures for Microsoft 365 and Google Workspace, age-based access controls, data retention settings), and evidence artifacts (audit log specifications, decision documentation formats, compliance attestation templates).
This category is notable primarily for its absence from authoritative sources. The analysis underlying this memorandum identified no widely adopted operational frameworks meeting the specified criteria from UNESCO, the OECD, UNICEF, the European Commission, the U.S. Department of Education, or major national governments. Sector associations such as EDUCAUSE and CoSN provide valuable guidance but do not supply comprehensive operational infrastructure. The gap is structural: authoritative bodies have produced aspirational frameworks, regulatory requirements, and sector guidance in abundance, but the operational layer that would enable systematic implementation remains underdeveloped.
This absence has consequences. Institutions cannot demonstrate compliance with requirements they lack procedures to implement. Insurance underwriters cannot assess governance adequacy without documentation standards to evaluate. Regulators cannot verify institutional claims without audit artifacts to examine. The operational gap is not merely a convenience problem; it is a structural impediment to effective AI governance in education.
The following section examines specific frameworks in greater detail, documenting precisely what they provide and what they omit, to substantiate the operational gap with concrete evidence.
2. Framework Analysis: What Is Provided, What Is Omitted
The taxonomy established in Section One provides a structure for analysis. This section examines representative frameworks from each category, documenting with specificity what guidance they provide and what operational elements they omit. The purpose is not to criticize these frameworks, which serve their intended functions, but to demonstrate that the operational layer necessary for institutional implementation has not been supplied by authoritative sources.
The frameworks analyzed here were selected based on three criteria: prominence in international AI governance discourse, explicit applicability to educational settings, and jurisdictional diversity. The analysis covers the primary international instruments (UNESCO, OECD, UNICEF), the most comprehensive regional regulation (EU AI Act), and representative national guidance from North America and Europe. This is not an exhaustive review of all AI governance documents, nor a systematic analysis of the fifty-plus state and provincial guidance documents now in circulation. The selection is illustrative rather than comprehensive, chosen to demonstrate a pattern rather than catalogue every instance.
Throughout this analysis, "operational infrastructure" refers to the procedural resources that enable documented, auditable compliance: role-accountability structures assigning named responsibilities, standardized procedures specifying required actions, auditable artifacts demonstrating compliance, and platform or procurement controls translating policy into technical enforcement. The central question for each framework is whether it supplies such infrastructure or presupposes that institutions will develop it independently.
2.1 Aspirational Frameworks in Detail
UNESCO Recommendation on the Ethics of AI
The UNESCO Recommendation addresses education explicitly in several provisions. Paragraphs 34 through 36 require that AI systems in education avoid discriminatory bias, respect cultural and linguistic diversity, and preserve human agency in learning.10 Paragraphs 23 through 27 establish that institutions must assign clear responsibilities for AI oversight and ensure that personnel understand their obligations.11
These provisions establish important normative requirements. However, consider what an institution attempting to implement them would need:
The Recommendation requires avoiding discriminatory bias. Implementation requires a bias assessment methodology specifying which student populations to analyze, statistical thresholds for identifying disparate impact, documentation formats for recording assessment results, remediation procedures when bias is detected, and ongoing monitoring protocols. The Recommendation does not specify institution-ready procedures for any of these requirements.
The Recommendation requires assigning clear responsibilities. Implementation requires role descriptions specifying AI governance duties, authority matrices defining decision rights, escalation procedures for contested decisions, and documentation systems for recording accountability. These operational elements are not addressed within the Recommendation itself.
The Recommendation requires preserving human agency. Implementation requires override procedures enabling human intervention in AI-driven processes, review protocols for AI-influenced decisions affecting student outcomes, and documentation standards for recording human judgments. The normative instrument establishes the principle; the procedural architecture remains unspecified.
This pattern is consistent throughout: normatively sound requirements without the operational infrastructure to implement them. An institution could adopt the UNESCO Recommendation as policy and still lack the means to demonstrate compliance in practice.
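To make the first of these gaps concrete, consider what even a minimal bias screen would involve. The sketch below computes favorable-outcome ratios across student groups and flags disparities using the four-fifths (80%) convention borrowed from U.S. employment-selection practice; the threshold, group labels, and data are illustrative assumptions, since the Recommendation itself specifies none of them:

```python
# Illustrative disparate-impact screen. The four-fifths (80%) ratio is
# one convention from U.S. employment-selection practice; the UNESCO
# Recommendation specifies no threshold, so this figure is an assumption.
def impact_ratios(positive_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's favorable-outcome rate to the highest rate."""
    top = max(positive_rates.values())
    return {g: rate / top for g, rate in positive_rates.items()}

def flag_disparities(positive_rates: dict[str, float],
                     threshold: float = 0.80) -> list[str]:
    """Groups whose favorable-outcome ratio falls below the threshold."""
    return [g for g, r in impact_ratios(positive_rates).items() if r < threshold]

# Hypothetical data: share of each student group receiving a favorable
# AI-influenced placement recommendation.
rates = {"group_a": 0.62, "group_b": 0.58, "group_c": 0.41}
print(flag_disparities(rates))  # ['group_c']  (0.41 / 0.62 ≈ 0.66 < 0.80)
```

Even this toy screen requires decisions (which populations, which outcome, which threshold, how often to run it, where to record results) that no aspirational framework makes for the institution.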
OECD AI Principles
The OECD Principles establish that AI systems should be transparent, explainable, and subject to human accountability.12 Principle 1.2 specifically addresses inclusivity and fairness, requiring that AI deployment avoid systemic bias and respect human rights.
The transparency requirement illustrates the operational gap. The Principles state that stakeholders should be able to understand AI system behavior and challenge outcomes. For an educational institution, this would require disclosure templates informing students and parents when AI influences decisions, explanation protocols enabling staff to articulate how AI recommendations were generated, challenge procedures allowing families to contest AI-influenced outcomes, and documentation systems recording disclosures and responses. The Principles establish the normative expectation; they do not supply the procedural mechanisms.
The OECD has published implementation guidance, but this guidance addresses national policy development rather than institutional operations. It assists governments in designing AI strategies; it does not assist schools in documenting AI governance. The gap between policy guidance and operational procedure remains.
UNICEF Policy Guidance on AI for Children
UNICEF's guidance is more specific than general ethical frameworks because it addresses children directly. The guidance calls for age-appropriate AI interactions, data minimization in systems affecting children, and protection from commercial exploitation.13
The age-appropriate interaction requirement is instructive. UNICEF recommends that AI systems be designed for developmental appropriateness and that children's access to AI be calibrated to their maturity. Implementation would require age verification mechanisms in platform configurations, differentiated access policies for elementary, middle, and secondary students, content filtering specifications appropriate to developmental stages, and documentation of age-based controls. The guidance identifies the principle; it does not supply the configuration methodology.
Consider a concrete scenario: a school district seeks to implement UNICEF's recommendation that children under thirteen should have restricted access to generative AI platforms. The guidance tells the district this restriction is important. It does not specify how to configure identity management systems to enforce age-based restrictions, what platform settings control AI feature access, how to document the restriction for compliance purposes, or how to handle exceptions.14 The district has a principle but not a procedure.
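A hedged sketch of what such a procedure might contain illustrates the distance between principle and configuration. The tier names and cutoffs below are hypothetical; in practice, enforcement would live in the identity platform (directory group membership, conditional access policies) rather than in application code:

```python
# Hypothetical age-gating decision logic. Tier names and age cutoffs are
# illustrative assumptions, not drawn from UNICEF guidance; real
# enforcement belongs in identity-platform configuration.
from datetime import date

def ai_access_tier(birth_date: date, today: date | None = None) -> str:
    today = today or date.today()
    age = (today.year - birth_date.year
           - ((today.month, today.day) < (birth_date.month, birth_date.day)))
    if age < 13:
        return "blocked"     # COPPA-covered: no independent generative AI use
    if age < 16:
        return "supervised"  # access only in teacher-mediated contexts
    return "standard"        # independent use under acceptable-use policy

print(ai_access_tier(date(2014, 5, 1), today=date(2025, 12, 1)))  # blocked
```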
2.2 Regulatory Frameworks in Detail
European Union AI Act
The AI Act creates binding legal obligations for institutions deploying high-risk AI systems, including those used in education. Unlike aspirational frameworks, the Act carries enforcement mechanisms and substantial penalties. Its requirements are more specific than ethical principles. Yet the operational gap persists.
This persistence is by design. The AI Act follows the European Union's "New Legislative Framework" approach: it establishes essential requirements while intentionally delegating technical implementation to harmonised standards and market actors.15 The Act assumes that conformity assessment bodies, industry associations, and professional service providers will translate legal obligations into operational procedures. For education, where such infrastructure has not developed, the delegation creates a gap rather than filling one.
Article 9 requires risk management systems that identify and mitigate risks throughout the AI lifecycle. The system must be "a continuous iterative process planned and run throughout the entire lifecycle."16 This is a legal requirement with significant penalties attached. Implementation would require risk identification checklists covering technical, operational, and rights-related risks; severity assessment matrices enabling consistent risk categorization; mitigation planning templates documenting response strategies; monitoring schedules specifying review frequencies; and audit documentation formats demonstrating ongoing compliance. The Act mandates the system but does not specify the procedures that would constitute it.
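For illustration, a severity assessment matrix of the kind Article 9 presupposes might look like the following sketch. The five-point scales, band cutoffs, and prescribed responses are illustrative conventions, not AI Act requirements:

```python
# Sketch of a likelihood-by-impact severity matrix. The 5x5 scale and
# band cutoffs are assumed conventions; the AI Act mandates a risk
# management system but prescribes no particular scoring scheme.
def risk_band(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and impact scores to a review band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores must be 1-5")
    score = likelihood * impact
    if score >= 15:
        return "critical: suspend use pending mitigation"
    if score >= 8:
        return "high: documented mitigation plan required"
    if score >= 4:
        return "medium: monitor on quarterly review cycle"
    return "low: record and revisit at annual audit"

print(risk_band(likelihood=4, impact=4))  # 16 -> critical band
```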
Article 14 requires human oversight measures that enable users to "fully understand the capacities and limitations of the high-risk AI system," "properly interpret" its outputs, and "decide not to use" the system or "override" its decisions.17 An educational institution deploying AI must determine who constitutes the "user" for oversight purposes (teacher, administrator, AI coordinator), what training ensures "full understanding" of system capabilities, what documentation demonstrates "proper interpretation" of outputs, and what technical mechanisms enable "override" of AI decisions. The Act establishes the legal standard; the institution must develop the procedures to meet it.
Article 10 requires that training data be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete."18 For institutions that procure rather than develop AI systems (the typical case in education), this requirement translates into vendor assessment obligations. Implementation requires vendor questionnaires addressing data quality practices, contractual provisions requiring data quality representations, audit rights enabling verification of vendor claims, and documentation systems recording vendor assessments. The Act creates the obligation; it does not supply the procurement instruments.
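A minimal vendor assessment record, sketched below under assumed field names, indicates what such a procurement instrument might capture; nothing in the Act prescribes this structure:

```python
# Sketch of a vendor assessment record supporting the procurement
# obligations Article 10 implies. Field names and the approval gate are
# illustrative assumptions, not regulatory requirements.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VendorAssessment:
    vendor: str
    product: str
    assessed_on: date
    data_quality_representations: bool  # vendor attests to data practices
    audit_rights_in_contract: bool      # institution may verify claims
    bias_testing_evidence: bool         # vendor supplied testing documentation
    notes: list[str] = field(default_factory=list)

    def approved(self) -> bool:
        """Minimal gate: all three representations must be in place."""
        return all((self.data_quality_representations,
                    self.audit_rights_in_contract,
                    self.bias_testing_evidence))

record = VendorAssessment("ExampleEdTech", "Adaptive Tutor", date(2025, 9, 1),
                          True, True, False,
                          ["bias report requested, not received"])
print(record.approved())  # False
```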
North American Regulatory Landscape
In the United States, educational AI governance operates within a fragmented regulatory environment. FERPA establishes data protection requirements for student educational records.19 COPPA restricts data collection from children under thirteen.20 Neither statute was designed for AI governance, and both predate the current generation of AI systems.
FERPA requires that schools protect student records and obtain consent before disclosing personally identifiable information. When AI systems process education records or personally identifiable information within FERPA's scope, FERPA obligations may be implicated. But FERPA does not address how to assess whether an AI vendor qualifies for the "school official" exception, what contractual provisions ensure AI vendors meet FERPA obligations, how to document AI system access to student records, or how to respond when AI systems generate inferences that may constitute new educational records. Schools are often required to interpret a 1974 statute in contexts it did not anticipate, with limited operational guidance tailored to modern AI systems.
COPPA presents similar challenges. The statute restricts collection of personal information from children under thirteen, requiring verifiable parental consent. AI systems that interact with young children may implicate COPPA, but the statute does not address whether AI-generated inferences constitute "collection," how consent mechanisms should function for AI features, what documentation demonstrates COPPA compliance in AI contexts, or how schools should configure platforms to restrict AI features for COPPA-covered students. The regulatory requirement exists; the operational methodology for AI contexts does not.
State-level AI guidance has proliferated but varies dramatically in specificity. Some states have issued detailed recommendations; others have provided only general principles. To date, no state has supplied comprehensive operational infrastructure for educational AI governance. A district operating across state lines, or seeking to establish governance that would satisfy any regulator, cannot rely on state guidance alone.
2.3 Sector Guidance in Detail
UNESCO Guidance for Generative AI in Education
UNESCO's 2023 guidance on generative AI represents the most education-specific framework from an authoritative international body. It addresses age restrictions, academic integrity, teacher preparation, and institutional policy development.21 The guidance is more concrete than general ethical frameworks and acknowledges the specific challenges generative AI poses for education.
Yet the guidance illustrates the sector guidance pattern: recommendations without procedures. Consider three examples:
First, age restrictions. The guidance recommends that countries "consider defining age limits for the independent use of GenAI applications."22 An institution accepting this recommendation would need to determine what age threshold to set, how to verify student ages in platform configurations, what technical controls enforce the restriction, how to handle supervised versus independent use, and how to document the restriction for compliance purposes. The guidance identifies the policy question; it does not provide the implementation methodology.
Second, validation. The guidance recommends that institutions validate AI tools before deployment. Validation would require assessment criteria for evaluating AI educational value, security review protocols for examining data practices, bias evaluation methodologies for testing system outputs, pilot procedures for controlled deployment, and documentation formats for recording validation decisions. The guidance recommends validation without specifying what validation entails in operational terms.
Third, transparency. The guidance emphasizes that students and educators should understand when AI is being used and how it influences learning. Implementation requires disclosure templates for AI-influenced activities, explanation protocols for AI-generated recommendations, and documentation systems for recording AI interactions. The guidance establishes the importance of transparency without supplying the transparency mechanisms.
U.S. Department of Education Guidance
The Department of Education's guidance addresses AI in teaching, learning, and educational administration.23 It acknowledges the transformative potential of AI while emphasizing equity, safety, and human oversight. The guidance is thoughtful and reflects genuine engagement with educational AI challenges.
However, the guidance operates at the policy level. It helps administrators think about AI governance; it does not help them implement AI governance systematically. An administrator reading the guidance would learn that equity considerations are important, that human oversight should be maintained, and that data privacy must be protected. The administrator would not receive vendor assessment questionnaires, incident response protocols, staff training curricula, platform configuration guides, or audit documentation templates. The guidance informs policy development; it does not enable documented implementation.
National Examples: Ireland's Guidance
Ireland's Department of Education issued guidance on generative AI in 2024, representing a national government's attempt to provide practical direction.24 The guidance is more specific than many international frameworks and addresses Irish educational contexts directly.
The guidance advises schools to "check age restrictions for any platforms being used." This is sound advice. Implementation requires identifying which platforms have age restrictions, determining how those restrictions interact with student ages in the school, configuring platforms to enforce appropriate restrictions, documenting the configuration for compliance purposes, and establishing review procedures when restrictions change. The guidance poses a question; it does not provide the methodology to answer it.
The guidance advises schools to "ensure student data is protected." Implementation requires data mapping to identify what AI systems collect, contractual provisions ensuring vendor data protection, technical controls limiting data retention, access controls restricting data visibility, and audit procedures verifying protection measures. The guidance establishes the obligation; the school must create the compliance system to meet it.
This pattern, evident across national guidance documents examined in this analysis, demonstrates the sector guidance limitation: increasing specificity about what schools should consider combined with persistent absence of how schools should act.
2.4 Summary: The Consistent Pattern
Across the aspirational frameworks, regulatory requirements, and sector guidance examined in this analysis, a consistent pattern emerges. Each category advances AI governance in important ways: aspirational frameworks establish legitimacy and normative foundations; regulatory frameworks create enforceable obligations; sector guidance translates principles into educational contexts. Yet none provides the operational infrastructure necessary for systematic institutional implementation.
The gap is not a failure of existing frameworks but a recognition that they were designed for different purposes. International bodies establish principles that can achieve consensus among diverse members. Regulators create requirements that can be enforced across varied contexts. Sector associations provide guidance appropriate to advisory relationships. None occupies the position necessary to supply operational procedures, and the education sector has not yet developed from other sources the kind of infrastructure that exists in more mature compliance domains such as financial services or healthcare.
The following section examines this operational gap with greater specificity, illustrating through detailed examples what operational infrastructure would include and what its absence means for institutional practice, insurance, and liability.
3. The Operational Gap: Concrete Implications
The preceding sections established that aspirational frameworks, regulatory requirements, and sector guidance have proliferated without corresponding development of operational infrastructure. This section examines what that gap means in practice: what institutions cannot demonstrate, what insurers cannot assess, and what regulators cannot verify. The analysis proceeds through three lenses: institutional compliance, insurance and liability, and comparative sector development.
3.1 The Compliance Demonstration Problem
Consider a secondary school that has adopted AI-powered tools for student assessment, learning support, and administrative efficiency. The school's leadership has read the UNESCO Recommendation, reviewed the EU AI Act requirements (if operating in Europe), and examined available sector guidance. They are committed to responsible AI governance. What can they demonstrate?
The school can demonstrate policy adoption. It can point to board resolutions, acceptable use policies, and statements of principle. These documents establish intent. They do not establish implementation.
What the school likely cannot demonstrate, absent operational infrastructure it has developed independently:
Vendor governance. The school cannot produce a standardized record showing that each AI vendor was assessed against consistent criteria before deployment, that data protection provisions were verified, that bias testing was conducted or required, and that ongoing monitoring obligations were established. Without vendor intake procedures and assessment templates, vendor governance exists as intention rather than documented practice.
Human oversight. The school cannot produce documentation showing which staff members reviewed AI-generated recommendations before they influenced student outcomes, what criteria guided those reviews, and what override decisions were made. Without review protocols and decision logs, human oversight is asserted but not evidenced.
Bias monitoring. The school cannot produce records showing that AI system outputs were tested for disparate impact across student populations, that identified disparities triggered remediation, and that monitoring continues on a defined schedule. Without bias assessment methodologies and monitoring calendars, fairness commitments remain aspirational.
Incident response. The school cannot produce documentation showing how AI-related incidents (data exposure, harmful outputs, system failures) were identified, escalated, investigated, and resolved. Without incident classification frameworks and response protocols, the school's capacity to manage AI failures is untested and undocumented.
This is the compliance demonstration problem: institutions may be governing AI responsibly in practice, but they lack the documentation infrastructure to prove it. When a regulator, insurer, parent, or oversight body asks "How do you govern AI?", the institution can describe its values but cannot produce the audit trail that demonstrates those values in action.
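What a single link in that audit trail might look like can be sketched briefly. The schema below is hypothetical; the point is that each human review of an AI recommendation becomes a dated, attributable record rather than an unrecorded judgment:

```python
# Sketch of a minimal human-oversight decision log: the kind of evidence
# artifact the section above describes as missing. The schema and field
# names are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_oversight_decision(path: str, *, system: str, reviewer: str,
                           recommendation: str, action: str,
                           rationale: str) -> None:
    """Append one oversight decision as a JSON line (an auditable record)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,          # which AI tool produced the recommendation
        "reviewer": reviewer,      # named staff member, per role specification
        "recommendation": recommendation,
        "action": action,          # accepted | modified | overridden
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_oversight_decision("oversight_log.jsonl", system="placement-advisor-v2",
                       reviewer="j.doe", recommendation="remedial track",
                       action="overridden",
                       rationale="recent exam results contradict model inputs")
```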
3.2 Insurance and Liability Exposure
The operational gap has immediate implications for institutional insurance and liability. In July 2024, Verisk (ISO) filed new endorsement language for commercial general liability policies, introducing AI-specific provisions that carriers may adopt beginning in 2025.25 These endorsements (CG 40 47, CG 40 48, CG 35 08) allow insurers to exclude or limit coverage for AI-related claims. Educational institutions renewing policies in 2025 and beyond may face coverage gaps, increased premiums, or underwriting requirements they cannot satisfy.
Insurance underwriting increasingly requires evidence of governance. In mature sectors such as financial services and healthcare, institutions seeking coverage must demonstrate documented risk management: policies, procedures, training records, audit reports, and incident histories. Underwriters assess not merely whether institutions have policies but whether they have implemented those policies through verifiable procedures.
Educational institutions face a structural disadvantage in this environment. They have governance principles (adopted from UNESCO, the OECD, or sector guidance) but may lack the documentation infrastructure that underwriters expect. When an underwriter asks for evidence of AI risk management, an institution without operational infrastructure cannot produce vendor assessment records, oversight documentation, bias monitoring reports, or incident response histories. The institution's coverage terms, premiums, and exclusions may reflect this documentation gap regardless of its actual governance quality.
Public reporting and early litigation signals suggest that AI-related liability in education is not hypothetical. Data breaches affecting educational technology platforms have exposed student records at significant scale. Legal claims have been filed alleging that AI system interactions contributed to student harm. These developments, while still emerging, indicate that institutions may face increasing pressure to demonstrate AI governance through documented evidence rather than policy statements alone.
The insurance industry's increasing attention to AI governance creates a forcing function: institutions will need to demonstrate compliance in terms underwriters recognize, using documentation formats underwriters can assess. The operational gap becomes a business problem, not merely a governance aspiration.
3.3 Comparative Sector Development
The operational gap in education becomes more visible when compared to sectors with more mature AI governance infrastructure. Healthcare and financial services, both heavily regulated and subject to significant liability exposure, have developed operational frameworks that translate principles into documented practice.
In healthcare, AI systems that inform clinical decisions are subject to regulatory frameworks that include operational requirements.26 The FDA's guidance on clinical decision support software, combined with health IT certification requirements, creates an environment where AI governance is not merely principled but proceduralized. Healthcare institutions deploying AI can draw on established frameworks for validation, monitoring, and documentation. Third-party assessment services have developed to verify compliance. The gap between policy and practice has been substantially addressed through sector-specific infrastructure.
In financial services, AI governance builds on existing risk management and audit infrastructure.27 Model risk management frameworks, SOC 2 attestation standards, and regulatory examination procedures provide operational foundations that AI governance can extend rather than create from scratch. Financial institutions implementing AI have established templates, assessment methodologies, and documentation standards to adopt. The sector's audit culture means that governance claims are routinely verified against documented evidence.
Education lacks equivalent infrastructure. No widely adopted regulatory framework has issued operational requirements for educational AI governance comparable in specificity to FDA guidance or financial examination standards. No sector association has published comprehensive implementation frameworks with templates, calendars, and audit specifications that have achieved broad adoption. No mature third-party assessment market has developed to verify educational AI governance claims at scale. Individual institutions, districts, and systems must develop operational infrastructure independently, resulting in inconsistent implementation, duplicated effort, and uncertain quality.
This comparative underdevelopment is notable given education's characteristics. Educational institutions serve minors, a population entitled to heightened protection under most legal frameworks. Educational AI systems influence consequential decisions affecting students' academic trajectories, opportunities, and wellbeing. The data processed includes sensitive information about children's learning, behavior, and development. By most measures, education should have robust AI governance infrastructure. It does not.
3.4 What Operational Infrastructure Would Include
To clarify what the operational gap leaves unfilled, it is useful to specify what comprehensive operational infrastructure for educational AI governance would include. This is not an exhaustive specification but an illustration of the category of resources that authoritative frameworks have not supplied.
Governance architecture. Operational infrastructure would include role specifications defining AI governance responsibilities (who approves AI deployments, who monitors ongoing use, who responds to incidents), authority matrices clarifying decision rights at each level, and escalation procedures for decisions exceeding delegated authority. These specifications would enable institutions to assign accountability rather than merely assert it.
Vendor management. Operational infrastructure would include intake procedures for evaluating new AI vendors, assessment templates covering data practices, security posture, bias testing, and contractual commitments, ongoing monitoring requirements, and offboarding procedures for discontinued tools. These resources would enable documented vendor governance throughout the AI lifecycle.
Platform configuration. Operational infrastructure would include technical guidance for implementing governance decisions in actual systems: how to configure identity management to enforce age-based access restrictions, how to set data retention policies in learning management systems, how to enable audit logging for AI interactions. These guides would translate policy into technical controls.
Documentation systems. Operational infrastructure would include templates for recording governance decisions, oversight reviews, incident reports, and compliance attestations. These artifacts would create the audit trail necessary to demonstrate governance in practice.
Implementation calendars. Operational infrastructure would include schedules specifying when governance activities occur: pilot timelines for new AI deployments, review frequencies for ongoing systems, audit cycles for compliance verification, training schedules for staff development. These calendars would transform governance from episodic response to systematic practice.
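As a closing illustration of how thin the missing layer actually is, the sketch below generates a quarterly review and annual audit schedule from a start date. The frequencies and event names are illustrative assumptions; an authoritative framework would specify them:

```python
# Sketch of a governance calendar generator. Quarterly reviews and an
# annual audit are assumed frequencies for illustration only.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months, clamping the day to 28 for safety."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

def governance_calendar(start: date, years: int = 1) -> list[tuple[date, str]]:
    """Quarterly oversight reviews plus an annual compliance audit."""
    events = [(add_months(start, 3 * q), f"quarterly AI oversight review #{q}")
              for q in range(1, 4 * years + 1)]
    events += [(add_months(start, 12 * y), f"annual compliance audit, year {y}")
               for y in range(1, years + 1)]
    return sorted(events)

for when, what in governance_calendar(date(2026, 1, 15)):
    print(when.isoformat(), what)
```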
This infrastructure exists in fragments across individual institutions, consulting practices, and unpublished internal documents. It has not been consolidated, standardized, or disseminated by authoritative bodies. The operational gap is the absence of such infrastructure from the sources institutions would naturally consult for AI governance guidance.
4. Conclusion
This memorandum has examined the landscape of AI governance frameworks applicable to education, establishing a four-category taxonomy (aspirational, regulatory, sector guidance, and operational) and demonstrating that while the first three categories have developed substantially, the fourth remains largely absent from authoritative sources.
The gap is structural rather than incidental. International bodies produce frameworks appropriate to their consensus-building function. Regulators create requirements appropriate to their enforcement function. Sector associations provide guidance appropriate to their advisory function. None is positioned to supply the operational procedures, templates, and documentation systems that translate principles into auditable institutional practice. The gap persists not because existing actors have failed but because no actor has filled the necessary role.
The implications are significant. Educational institutions face regulatory requirements they may be unable to demonstrate meeting. Insurance underwriters cannot assess governance quality without documentation standards to evaluate. Liability exposure increases as AI systems influence more consequential decisions about students. The operational gap creates institutional risk that aspirational commitments cannot address.
Addressing this gap requires purpose-built operational infrastructure designed for educational contexts: governance architectures appropriate to school organizational structures, vendor management procedures scaled to educational procurement, platform configuration guidance for the systems schools actually use, and documentation templates that satisfy both regulatory expectations and practical constraints.
Memorandum No. 2 in this series will examine the forcing functions that make addressing this gap urgent: regulatory compliance timelines, insurance market developments, and liability exposure patterns that are transforming AI governance from an aspirational concern into an operational necessity.
References
International Frameworks
OECD. (2019, amended 2024). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). Paris: OECD Publishing. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137
UNESCO. (2023). Guidance for Generative AI in Education and Research. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000386693
UNICEF. (2021). Policy Guidance on AI for Children. New York: UNICEF. https://www.unicef.org/innocenti/reports/policy-guidance-ai-children
UNICEF. (2024). Generative AI and Children. New York: UNICEF.
Regional Regulation
European Parliament and Council. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
European Parliament and Council. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation). Official Journal of the European Union, L 119.
National Legislation and Guidance
Canada. (2022). Bill C-27, Digital Charter Implementation Act, 2022 (including Artificial Intelligence and Data Act). Parliament of Canada.
Ireland Department of Education. (2024). Guidance on Generative Artificial Intelligence in Education. Dublin: Government of Ireland.
United States. (1974). Family Educational Rights and Privacy Act (FERPA), 20 U.S.C. § 1232g; 34 CFR Part 99.
United States. (1998). Children's Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501-6506; 16 CFR Part 312.
U.S. Department of Education, Office of Educational Technology. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. Washington, DC.
U.S. Department of Education. (2024). AI Toolkit. Washington, DC.
Industry and Insurance
Education Commission of the States. (2024). State Education Policy Tracking: Artificial Intelligence. Denver, CO.
Verisk Insurance Solutions. (2024). ISO Circular LI-CF-2024-175: Artificial Intelligence Liability Insurance Endorsements. Jersey City, NJ.
Comparative Sector Frameworks
Board of Governors of the Federal Reserve System. (2011). SR 11-7: Guidance on Model Risk Management. Washington, DC.
U.S. Food and Drug Administration. (2022). Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff. Silver Spring, MD.
Appendix: Framework Comparison Matrix
The following table summarizes the operational infrastructure elements present or absent in the frameworks analyzed in Section 2. "Operational infrastructure" is defined as resources enabling documented, auditable compliance across five categories: procedural templates, governance calendars, role specifications, configuration guidance, and evidence artifacts.
Framework | Category | Templates | Calendars | Roles | Config | Evidence
UNESCO AI Ethics (2021) | Aspirational | — | — | — | — | —
OECD AI Principles (2024) | Aspirational | — | — | — | — | —
UNICEF AI for Children (2021) | Aspirational | — | — | — | — | —
EU AI Act (2024) | Regulatory | — | — | General | — | General
FERPA (1974) | Regulatory | — | — | — | — | —
COPPA (1998) | Regulatory | — | — | — | — | —
UNESCO GenAI (2023) | Sector | — | — | — | — | —
US DoE AI Toolkit (2024) | Sector | — | — | — | — | —
Ireland DoE (2024) | Sector | — | — | — | — | —
Note: "General" indicates that the framework establishes obligations in this area but does not provide institution-ready procedures or templates. "—" indicates the framework does not address this operational element with implementation-level specificity.
About the Author
Ryan James Purdy is an independent researcher and writer focused on AI governance in education and other regulated sectors. His work examines how AI policy and regulatory requirements translate into institutional implementation.
About Purdy House Institute
Purdy House Institute is an independent research imprint publishing working papers on AI governance in education and related regulated contexts. The AI Governance in Education series examines gaps between AI governance policy and institutional implementation.
Correspondence: jamespurdy624@gmail.com
LinkedIn: www.linkedin.com/in/purdyhouse
Notes
1. UNESCO, Recommendation on the Ethics of Artificial Intelligence (Paris: UNESCO, 2021). The Recommendation was adopted by the UNESCO General Conference at its 41st session on 23 November 2021.
2. OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449 (Paris: OECD, 2019; amended 2024). As of 2024, the Principles have been adopted by all 38 OECD member countries and 13 non-member adherents.
3. UNICEF, Policy Guidance on AI for Children (New York: UNICEF, 2021); UNICEF, Generative AI and Children (New York: UNICEF, 2024).
4. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L 2024/1689, 12.7.2024.
5. AI Act, Annex III, Section 3(a) classifies AI systems intended to be used for educational or vocational training purposes as high-risk.
6. AI Act, Article 99 establishes maximum administrative fines of €35,000,000 or 7% of total worldwide annual turnover for specified violations.
7. Education Commission of the States, State Education Policy Tracking: Artificial Intelligence (2024). Analysis current as of September 2024.
8. UNESCO, Guidance for Generative AI in Education and Research (Paris: UNESCO, 2023).
9. U.S. Department of Education, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations (Washington, DC: Office of Educational Technology, 2023); U.S. Department of Education, AI Toolkit (2024).
10. UNESCO Recommendation, Paragraphs 34-36.
11. UNESCO Recommendation, Paragraphs 23-27.
12. OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449 (2019; amended 2024), Principle 1.2.
13. UNICEF, Policy Guidance on AI for Children (New York: UNICEF, 2021), Section 4: Protect children's data and privacy.
14. For illustrative purposes, Microsoft Entra ID (formerly Azure Active Directory) provides conditional access policies and age-gating controls that can restrict AI feature access by user group. Similar controls exist in Google Workspace for Education. Configuration specifics vary by platform version and licensing tier.
15. The AI Act follows the "New Legislative Framework" approach, establishing essential requirements while leaving conformity assessment and technical implementation to harmonised standards and market actors. See Recitals 40-41.
16. Regulation (EU) 2024/1689 (AI Act), Article 9(2).
17. AI Act, Article 14(4).
18. AI Act, Article 10(2)-(3).
19. 20 U.S.C. § 1232g; 34 CFR Part 99.
20. 15 U.S.C. §§ 6501-6506; 16 CFR Part 312.
21. UNESCO, Guidance for Generative AI in Education and Research (Paris: UNESCO, 2023), pp. 10-14.
22. UNESCO GenAI Guidance, p. 18.
23. U.S. Department of Education, Artificial Intelligence and the Future of Teaching and Learning (Washington, DC: Office of Educational Technology, 2023), pp. 42-48.
24. Ireland Department of Education, Guidance on Generative Artificial Intelligence in Education (Dublin: Government of Ireland, 2024).
25. Verisk Insurance Solutions, ISO Circular LI-CF-2024-175 (July 2024), introducing endorsements CG 40 47, CG 40 48, and CG 35 08 addressing AI-related liability. Carrier adoption and effective dates vary by jurisdiction and policy renewal cycle.
26. For comparison, see healthcare AI governance frameworks including FDA guidance on Clinical Decision Support Software (2022) and ONC Health IT Certification Program requirements.
27. Financial services AI governance draws on existing frameworks including SOC 2 Type II attestation, the FFIEC IT Examination Handbook, and sector-specific model risk management guidance (SR 11-7).