
The People Behind the Exclusion: Three Insurance Roles Rewriting Education's AI Rules



Ryan James Purdy | Purdy House Publishing & Consulting

Key Takeaways:


  1. Three roles inside the insurance industry (actuaries, lawyers, and underwriters) are becoming the most consequential decision-makers in education AI governance. Almost no one in education knows who they are.

  2. Exclusion language is not coming. It is here. Berkley, Hamilton Select, and Philadelphia Indemnity have already filed AI exclusions. Verisk's generative AI endorsements became available to carriers in January 2026.

  3. Each role needs different evidence from your institution. Actuaries need loss data to model. Lawyers need documentation that survives discovery, and a carrier has already voided a cyber policy from inception because the insured misrepresented its controls. Underwriters need governance artifacts at renewal. Most schools cannot produce any of it.

  4. Colorado's AI Act enforcement begins June 30, 2026. EU AI Act high-risk obligations begin applying August 2, 2026, with some rules extending into 2027. Your renewal falls somewhere in between.

  5. Education is the only major regulated sector facing this convergence without dedicated AI governance infrastructure. Healthcare has the FDA. Finance has SOC 2. Education has nothing sector-specific.


When you think about who sets AI policy for your schools, you think about governments. UNESCO. Your school board.

You should be thinking about the actuary in Hartford building a risk model from PowerSchool's 62-million-record breach. The lawyer in London drafting exclusion language that takes effect at your next renewal. The underwriter reviewing your application who will decide, based on a governance questionnaire you have never seen, whether your AI-related losses are covered or excluded.

Berkley Insurance's Absolute AI Exclusion already applies to any actual or alleged use, deployment, or development of AI across D&O, E&O, and fiduciary liability products. Hamilton Select has a blanket exclusion for claims involving generative AI. Philadelphia Indemnity excludes content created using generative AI for third-party work. Verisk's ISO endorsements became available to carriers in January 2026. The exclusion language is written. Carriers are adopting it.

Behind those decisions sit three roles that you need to understand.

I have been documenting this dynamic since December 2025 across a seven-memorandum research series on insurers as de facto regulators of AI in education. A recent paper from the Center for Democracy & Technology by Park and Bogen, "Who Insures AI," provides a useful general framework for these three roles. What follows applies their mapping to education specifically.

The Three Roles

Park and Bogen get the framework right. Three distinct roles inside the insurance industry collectively determine who gets covered, at what price, and under what conditions. Here is how they work.

Actuaries build the statistical models that determine whether AI risk is insurable at all. They work from historical loss data, and where data is scarce, they rely on proxy data and assumptions. Without AI-specific expertise feeding realistic assumptions into those models, the actuarial default is conservative. Conservative means exclusion.

Lawyers negotiate policy terms and draft the exclusion language carriers adopt. In an immature market, they have outsized influence because they are writing the rules in real time. Post-incident, they are the ones who determine whether your governance representations on the application were accurate enough to sustain a claim.

Underwriters apply the actuarial models and legal frameworks to your specific renewal. They adjust premiums, add conditions, require controls, or exclude risks. They are the person deciding your coverage terms this year.

What Each Role Needs From You

Here is what each role needs from you, drawn from the operational evidence my memorandum series and public school board assessments have produced.

What actuaries need: loss signals. Education's AI loss history is not yet actuarially mature compared to healthcare or finance, but the signals are accelerating. PowerSchool: 62 million student records exposed. Character.AI: platform liability litigation involving a minor. Assessment tools: algorithmic bias claims developing in peer-reviewed research.

The claims cycle runs 18 to 36 months. That means incidents from 2024 and 2025 produce settlements in 2026 and 2027, directly feeding the models that price your next policy. If you are building governance documentation now, you are creating the evidence that lets actuaries distinguish you from the undocumented baseline. Adjusted pricing beats blanket exclusion, but only if there is something to adjust against.

What lawyers need: documentation that survives discovery. In Travelers v. International Control Services, a cyber policy was voided from inception. The reason: the insured misrepresented its security controls on the application. Not a gray area. Void from day one.

The same logic applies to AI governance. If your renewal application says you have AI oversight procedures and you cannot produce the documentation, the coverage you think you have may not survive a claim. Lawyers also need to see that your documentation translates across jurisdictions: FERPA for US regulators, GDPR for EU exposure, provincial privacy law in Canada. It is not enough to have a policy. You need evidence shaped for the authority asking.

What underwriters need: governance artifacts, now. Underwriters are the immediate pressure point. They decide this year's renewal terms. Based on emerging questionnaires and broker guidance, they are converging on a core set of evidence categories: AI inventory and classification, governance structure with board oversight, risk management documentation, human oversight protocols, and data governance policies. Bias testing and vendor management round out the list.

On bias testing specifically: most districts will not be running statistical audits themselves. The realistic expectation is documented vendor attestations, governance controls around tool selection, and evidence that someone asked the question. That is still more than most institutions can currently produce.

In healthcare, these categories have mature equivalents: device registries, HIPAA compliance, clinical validation protocols. In financial services: SR 11-7, SOC 2, fair lending audits. In education, there is no sector-specific equivalent for any of them. From the half-dozen public assessments I have conducted of Canadian school boards, most institutions can partially satisfy basic legal requirements but cannot produce auditability evidence. Maturity documentation is almost entirely absent.

The Convergence

These three roles do not operate in sequence. They operate in parallel, and they reinforce each other in ways that narrow your options.

Actuaries processing 2024-2025 loss data feed the models that justify exclusions. Lawyers draft the exclusion language those models support. Underwriters apply both to your renewal. Each role's output becomes another role's input. The system tightens.

Colorado's AI Act enforcement begins June 30, 2026. EU AI Act high-risk obligations, which explicitly include educational AI systems, begin applying August 2, 2026, with some requirements extending into 2027. Your renewal falls somewhere in between. That is the window.

Every other major regulated sector has dedicated AI governance infrastructure to meet this moment. Healthcare has the FDA and HIPAA. Financial services has banking examination and SOC 2. Education has nothing sector-specific.

Before Your Next Renewal

Start with the AI inventory. Document every AI system in use, classify each by risk level, record the vendor relationship and data flows. This single artifact answers questions across all three roles: actuaries see your exposure profile, lawyers can assess your representation accuracy, underwriters can evaluate your governance posture. Highest-leverage first step.
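The inventory step above can be sketched in a few lines. This is a minimal illustration, not a prescribed schema: the field names, risk tiers, and example systems below are hypothetical, chosen to show how one record per AI tool can capture classification, vendor relationship, data flows, and oversight status in a form all three roles can consume.

```python
from dataclasses import dataclass, field

# Hypothetical field names for illustration; adapt to your institution's
# actual governance template and risk taxonomy.
@dataclass
class AIInventoryEntry:
    system_name: str                  # the AI tool as deployed
    vendor: str                       # contracting party responsible for the model
    risk_level: str                   # e.g. "low", "limited", "high" (mirroring EU AI Act tiers)
    data_flows: list = field(default_factory=list)  # categories of student data the tool touches
    human_oversight: bool = False     # is a named person reviewing the tool's outputs?

inventory = [
    AIInventoryEntry("Adaptive reading assessment", "ExampleEdTech Inc.", "high",
                     ["performance records", "demographics"], human_oversight=True),
    AIInventoryEntry("Unvetted essay-scoring plugin", "Unknown", "high"),
]

# Flag high-risk systems with no documented human oversight -- the combination
# most likely to draw underwriter scrutiny at renewal.
needs_review = [e.system_name for e in inventory
                if e.risk_level == "high" and not e.human_oversight]
print(needs_review)  # ['Unvetted essay-scoring plugin']
```

Even a spreadsheet with these same columns serves the purpose; what matters is that every AI system has exactly one record and that the oversight and data-flow fields are never left blank.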

Then ask your broker one question: "Has our carrier filed or adopted AI exclusion endorsements on any of our coverage lines?"

If the answer is yes, or if the answer is "I don't know," you have a deadline you did not know about.

References:

[1] Purdy, R. J. (2025). "The Forcing Function: Insurance, Regulation, and the Urgency of AI Governance in Education." Memorandum No. 2, AI Governance in Education Series. Purdy House Publishing & Consulting. DOI: 10.5281/zenodo.18085507

[2] Park, T. M. and Bogen, M. (2025). "Who Insures AI: Understanding the Roles of the Private Insurance Industry and How They Can Shape AI Governance." Center for Democracy & Technology. Full paper PDF: https://openreview.net/pdf?id=YOBJIwQlS7

[3] Purdy, R. J. (2026). "The Translation Problem: How the Same Governance Requirement Becomes Different Evidence Demands." Memorandum No. 3, AI Governance in Education Series. Zenodo: https://zenodo.org/records/18085646

[4] Purdy, R. J. (2026). "Beyond Self-Attestation: The Case for Independent AI Governance Assessment in Education." Memorandum No. 5, AI Governance in Education Series. Zenodo: https://zenodo.org/records/18519034

[5] Purdy, R. J. (2026). "The Liability Squeeze and the Governance Response: How Documentation Becomes Leverage." Memorandum No. 7, AI Governance in Education Series. Zenodo: https://zenodo.org/records/18519530

[6] Hunton Andrews Kurth LLP. (2025). "How Insurance Policies Are Adapting to AI Risk." https://www.hunton.com/insights/publications/how-insurance-policies-are-adapting-to-ai-risk

[7] Travelers Property Casualty Company of America v. International Control Services Inc., No. 22-cv-2145 (C.D. Ill. 2022). https://www.insurancejournal.com/news/national/2022/07/12/675516.htm Lockton analysis: https://global.lockton.com/us/en/news-insights/travelers-v-ics-underscores-need-to-respond-carefully-to-cyber-insurance

[8] Verisk Insurance Solutions. (2024). ISO Circular LI-CF-2024-175: Artificial Intelligence Liability Insurance Endorsements. https://www.independentagent.com/vu_resource/verisk-to-roll-out-new-general-liability-exclusions-for-generative-ai-exposures/

Ryan James Purdy is the founder of Purdy House Publishing & Consulting, specializing in AI governance for the education sector. His seven-memorandum research series on AI governance in education is available on SSRN and Zenodo. He has conducted public AI assurance assessments of school boards across Canada.
