
OECD Confirms "Soft Governance" Is A Major Compliance Risk in 2026

  • Writer: Ryan James Purdy
  • Apr 7
  • 7 min read

Key Takeaways


  • The OECD's January 2026 Due Diligence Guidance for Responsible AI explicitly identifies stakeholder engagement and remediation as where existing AI governance frameworks fall shortest. My own mapping of the guidance extends this to human oversight documentation, vendor supply chain management, risk assessment, and impact assessment, all areas where most educational institutions and EdTech vendors have no documentation at all.

  • The guidance groups actors by their role in the AI value chain rather than by a strict hierarchy. Most school systems will generally fall under Group 3 as users of AI systems. Most EdTech vendors will fall under Group 2 when they design, deploy, or operate AI systems. Many organizations span both, and the OECD explicitly notes these groupings are not rigid or exclusive.

  • Meaningful stakeholder engagement is framed in the guidance as two-way, ongoing, and built into the AI system lifecycle. The OECD states directly that it "should not be seen as a one-off event." Most institutions are treating it as exactly that.

  • The framework is voluntary today. The OECD itself notes that due diligence expectations derived from this guidance are increasingly being integrated into legal requirements. In my professional assessment, formalization into structured audit requirements will follow within 12 to 18 months. Some regulators have already written binding obligations into law. Insurers are beginning to reference this standard in renewal language now.


A few months ago I was in a conversation with a senior administrator who told me, politely, that AI governance in his district was handled. They had a policy. It had been reviewed by legal. It was posted on the website. I asked him one question: if a parent asked you to demonstrate how your district oversees AI-assisted decisions that affect their child, what would you show them? There was a long pause. That pause is what this article is about.

The OECD's Due Diligence Guidance for Responsible AI, approved January 26, 2026, is the most significant international AI governance document released this year. Not because it introduces new principles, but because it operationalizes them, and in doing so, names the compliance gap that written policies cannot close. That gap is what I have been calling soft governance: the layer of obligation that lives between policy commitment and verifiable evidence. The OECD does not use that term. But it describes exactly that problem, and it does so in the language of international trade and investment law. This article defines what soft governance is, maps it to the OECD framework, and explains what it means practically for educational institutions and the EdTech vendors who serve them.

What Soft Governance Is and What the OECD Says About It

Hard governance is findable. A school either has a data processing agreement with a vendor or it does not. A vendor either has a privacy policy or it does not. These gaps are real but they are visible and correctable. Soft governance is different. It refers to the operational evidence layer: documented proof that governance commitments are actually functioning. Not that a policy exists, but that it is embedded, practiced, tracked, and capable of surviving a third-party review.

The OECD framework describes soft governance obligations across six domains:

  • Meaningful stakeholder engagement: a two-way, ongoing, documented process integrated into the AI system lifecycle. The OECD states it "should not be seen as a one-off event, but rather as a continuous process built into the AI system lifecycle."

  • Human oversight documentation: not simply the presence of a human in a workflow, but institutional accountability structures demonstrated through assigned oversight roles, documented intervention triggers, override authority designations, and oversight activity logs.

  • AI system inventory and risk classification: a maintained, current registry of every AI system in use, with associated risk levels, deployment dates, and named owners (a minimal sketch follows this list).

  • Impact assessment: documented evaluation of how AI systems affect the people they touch, conducted before deployment and updated as systems evolve.

  • Vendor supply chain management: structured assessment, approval, and ongoing oversight of every AI vendor, including audit rights, data governance clauses, and annual re-verification.

  • Remediation capacity: demonstrable ability to restore affected parties when AI-related harm occurs.
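
To make the inventory domain concrete, here is a minimal sketch, in Python, of what a single entry in such a registry might capture. The field names and risk labels are my own illustrative assumptions, not a schema the OECD prescribes:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    # Illustrative sketch only: these fields are assumptions, not an
    # OECD-mandated schema. The point is that every entry is dated,
    # risk-classified, and owned by a named person.
    @dataclass
    class AISystemRecord:
        name: str              # what the system is called internally
        vendor: str            # supplying vendor, or "internal"
        risk_level: str        # e.g., "minimal", "limited", "high"
        deployed_on: date      # when the system entered use
        owner: str             # named accountable individual
        last_reviewed: Optional[date] = None  # None = never reviewed

    inventory = [
        AISystemRecord(
            name="Adaptive reading assessment",
            vendor="Example EdTech Inc.",
            risk_level="high",
            deployed_on=date(2025, 9, 1),
            owner="Director of Curriculum",
        ),
    ]

    # A registry with unreviewed entries fails the evidentiary test.
    stale = [r for r in inventory if r.last_reviewed is None]
    print(f"{len(stale)} of {len(inventory)} systems have never been reviewed")

Even a registry this simple, kept current and timestamped, is closer to what the guidance contemplates than any standalone policy statement.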

The OECD explicitly identifies stakeholder engagement and remediation as where existing frameworks fall shortest. That is a significant admission from a body that helped produce several of those frameworks. My own analysis of the guidance extends this finding to the full list above, because the evidentiary gap in each domain is equally real and equally undocumented in practice.

One area where the OECD guidance remains notably silent is professional development as a compliance function. The guidance references stakeholder literacy and worker awareness in passing, but it does not establish professional development as a mandatory, documented, measurable component of AI governance. The Stop-Gap AI Compliance Guide, published five months ago, does exactly that. Professional development in that framework is not optional enrichment. It is a documented compliance domain with attendance records, content standards, and audit trails, because training without documentation is compliance debt, not compliance. That operationalization is ahead of where the OECD currently sits and is likely to appear in the next iteration of formal standards.

What my public assessments of educational institutions have been documenting for months, under the category of transparency gaps, maps directly onto what the OECD now formally describes as soft governance failures. Stakeholder engagement records that do not exist. AI inventories that are incomplete or undated. Human oversight assignments with no corresponding logs. These findings were publicly documented and available before this guidance was released. The OECD has now provided the international authority that explains why those gaps matter.

What This Means for Educational Institutions

Under the OECD framework, educational institutions generally sit in Group 3, with documented due diligence obligations across every AI tool they deploy. This is not limited to AI systems the district procured intentionally. It covers AI embedded in tools already in classrooms, already approved by boards, already in use by students and staff.

The documentation that satisfies these obligations does not look like a policy document. It looks like a timestamped AI system inventory with risk classifications. It looks like human oversight records with named accountabilities and intervention logs. It looks like vendor governance files with signed attestations and audit rights. Policy is a starting point. Evidence of the process in action is the requirement.
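
As an illustration of that difference, a human oversight record might look something like the sketch below. The structure is my own assumption about how named accountabilities and intervention logs could be captured, not a format the guidance prescribes:

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical oversight log entry. Nothing here is prescribed by
    # the OECD guidance; it simply shows each intervention being
    # attributable, triggered by a documented condition, and timestamped.
    @dataclass
    class OversightEvent:
        system: str        # which AI system the event concerns
        reviewer: str      # the named person holding override authority
        trigger: str       # the documented condition that prompted review
        action: str        # e.g., "approved", "overridden", "escalated"
        timestamp: datetime

    log = [
        OversightEvent(
            system="Automated grading assistant",
            reviewer="Assistant Principal",
            trigger="Score deviated more than 15% from teacher estimate",
            action="overridden",
            timestamp=datetime(2026, 2, 3, 14, 20),
        ),
    ]

    # The evidentiary question is simple: can this log be produced on request?
    for event in log:
        print(f"{event.timestamp:%Y-%m-%d}: {event.reviewer} {event.action} "
              f"'{event.system}' ({event.trigger})")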

Ten public assessments have revealed a consistent pattern. From what I can determine, this documentation does not exist in most school boards or companies. That conclusion is grounded in a simple observation: when governance work has actually been done, it leaves fingerprints. Meeting minutes reference AI inventory reviews. Board agendas include risk assessment items. Public websites disclose stakeholder engagement processes. Due diligence shows up somewhere in the public record. When those fingerprints are absent, the work almost certainly has not been done.

The Colorado AI Act, which carries binding enforcement from June 30, 2026, and the EU AI Act's high-risk system obligations, effective August 2026, require exactly this documentation. Some regulators have already written these requirements into law. Insurers are going to use the OECD document (and documents like it) as a baseline for what due diligence looks like going forward. Institutions building this documentation now are not over-preparing. They are getting ahead of a formalization curve that has already begun.

What This Means for EdTech and Adjacent Sectors

The OECD places EdTech vendors who design, deploy, or operate AI systems in Group 2. Many vendors span multiple groups depending on their product architecture, and the OECD is explicit that these groupings are not rigid. What is clear regardless of grouping is that the practical requirements are extensive: risk identification before deployment, adverse impact assessments, deployment controls, post-deployment monitoring systems, supply chain transparency, and documented remediation pathways. These are not checkbox requirements. They require operational infrastructure that most EdTech vendors have not built and cannot quickly produce alone.

The vendor supply chain provisions deserve particular attention. The OECD framework creates a chain of accountability from the institutions that use AI systems back through every vendor that supplies them. Educational institutions implementing AI governance are now required to ask vendors for documentation that most vendors cannot provide: bias audit logs, model cards, training data lineage, subprocessor lists with associated data processing agreements, change notification processes, and customer audit rights with evidence those rights have been exercised. Vendors without this documentation will lose procurement decisions to vendors who have it.
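
As a rough sketch of how a procurement team might operationalize that list, consider the following. The artifact names come from the paragraph above; the flat pass/fail logic is my own simplification, since a real review would weigh artifacts by risk:

    # Hypothetical checklist built from the artifacts named above.
    REQUIRED_ARTIFACTS = [
        "bias audit logs",
        "model cards",
        "training data lineage",
        "subprocessor list with DPAs",
        "change notification process",
        "customer audit rights, with evidence of exercise",
    ]

    def missing_artifacts(provided: set[str]) -> list[str]:
        """Return the artifacts absent from a vendor's governance file."""
        return [a for a in REQUIRED_ARTIFACTS if a not in provided]

    gaps = missing_artifacts({"model cards", "subprocessor list with DPAs"})
    print("Missing:", "; ".join(gaps))

A vendor able to clear even this simplified checklist is ahead of most of the market today.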

Adjacent sectors carry the same exposure. AI-powered HR platforms, student support tools, AI-integrated administrative systems, and any software category where AI touches decisions affecting minors are in scope under the OECD's value chain framing. Being adjacent to education is not a shield. If your product influences an educational institution's operations or decisions, you carry significant due diligence obligations.

The market shift is not coming. It is beginning now. Superintendents who have read this guidance, or whose insurers reference it at renewal, now have international authority behind procurement questions that vendors are not prepared to answer. That asymmetry is a business risk for every EdTech company without governance documentation in place.

If You Are Reading This and Wondering Where You Stand

I have built an affordable assurance assessment that produces a compliance heatmap aligned with the OECD framework, the EU AI Act, ISO 42001, NIST AI RMF, and Colorado SB 24-205. It covers AI system inventory and classification, risk and impact assessment, vendor governance, human oversight documentation, transparency and stakeholder engagement, professional development as a compliance function, incident response capacity, and remediation readiness. It is designed to show organizations their actual position, not their aspirational one, and to produce the documentation format that insurers, procurement bodies, and regulators will require.

That pause in the conversation I described at the beginning of this article is expensive. It costs coverage at renewal. It costs procurement decisions. It costs board confidence. The documentation that fills it is not complicated to produce with the right infrastructure. It is just infrastructure most organizations do not yet have.

About the Author

Ryan James Purdy is a Senior AI Assurance and Compliance Advisor and the founder of Purdy House Publishing and Consulting. With nearly 30 years of experience in education across North America, Europe, and in online and blended learning environments, he is the author of the Stop-Gap AI Policy Guide series. He was selected as one of four Canadians for Pakistan's 100 Minds international AI governance initiative.

References

OECD. (2026, January 26). OECD Due Diligence Guidance for Responsible AI. OECD Publishing. https://doi.org/10.1787/41671712-en

European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (EU AI Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689

Colorado General Assembly. (2024). SB 24-205: Colorado Artificial Intelligence Act. Effective June 30, 2026.

National Institute of Standards and Technology. (2023). AI Risk Management Framework 1.0. https://doi.org/10.6028/NIST.AI.100-1

International Organization for Standardization. (2023). ISO/IEC 42001:2023: Information technology: Artificial intelligence: Management system.

Purdy, R. J. (2025). Memorandum No. 3: Evidence Translation in Educational AI Governance. Purdy House Institute Working Paper Series.

Purdy, R. J. (2025). Memorandum No. 4: ISO 42001 and Education. Purdy House Institute Working Paper Series.

Purdy, R. J. (2025). The Stop-Gap AI Compliance Guide. Purdy House Publishing.

 
 
 
