EU AI Act: One Year On

Procurement questions, risk registers, and audit-ready evidence

[Image: a school leader reviewing AI procurement documents with an IT colleague]

Where it matters

A year on, the EU AI Act is less a “new rule for UK schools” and more a new gravity field around the edtech market. UK schools are not automatically regulated by EU law simply because they use AI. However, many vendors serving UK schools also sell into the EU, build products in the EU, or rely on EU-based sub-processors. That means their product design, documentation, and incident handling increasingly reflect EU requirements. For schools, the practical benefit is straightforward: you can ask sharper questions and expect better evidence.

Where it doesn’t matter is equally important. Don’t pretend your school is an EU “deployer” with EU legal duties if you are not. Avoid copying compliance language into policies as if it were binding. Instead, treat the AI Act as a high-quality framework for procurement and governance. If you already run annual checks on acceptable use and data protection, you can fold AI Act-style thinking into those cycles; see the annual AI acceptable use policy refresh checklist for a practical rhythm that won’t overwhelm staff.

A plain-English map

The most reusable AI Act ideas for schools are risk, role, and intended purpose. You can apply them without legal jargon.

Risk is about the impact if the system is wrong, biased, unavailable, or misused. In a school context, “high stakes” often means decisions about a pupil’s opportunities, safety, or access to support. Think of an AI tool that flags safeguarding concerns from written work, or one that recommends tiered interventions. Even if the tool only “suggests”, the influence can be real.

Role clarifies who does what. Vendors are typically the “provider” of the AI system; the school, as the organisation operating it, sits in the position the Act calls the “deployer”, even if your contract simply says “customer”. The point is not labels; it is accountability. Who monitors performance? Who can change settings? Who investigates incidents? Who notifies families if something goes wrong? Clarifying this early prevents the common failure mode where everyone assumes someone else is responsible.

Intended purpose is the single most useful phrase you can borrow. Ask: what does the vendor claim the system is for, and what are you actually using it for? A tool marketed for “lesson planning support” may be harmless until it is repurposed as an assessment feedback generator used at scale. Procurement should lock intended purpose into your contract and your internal guidance, so staff don’t drift into riskier uses by accident. If you want a quick way to evaluate new models and features as they arrive, adapt a rapid protocol like the GPT-5 release day school briefing and apply it to any major vendor update.

Procurement questions

Treat procurement as your first safety control. The goal is not to “catch out” suppliers, but to collect evidence you can file, revisit, and show to governors or auditors.

Here are 12 questions to ask vendors, with a sense of what good evidence looks like:

  1. What is the system’s intended purpose in education, and what uses are explicitly out of scope? Good evidence includes a clear product statement and examples of allowed and disallowed use.

  2. What data goes in, what comes out, and what is stored? Look for a data flow diagram, retention schedule, and a list of sub-processors.

  3. Is any pupil personal data used to train models? A strong answer is “no” by default, with a contractual commitment and technical controls.

  4. What safeguards exist for bias and accessibility? Expect evaluation summaries, known limitations, and guidance for inclusive use.

  5. How do you test performance and reliability in school-like conditions? Good evidence includes test methodology, not just marketing claims.

  6. What human oversight is assumed, and what happens if staff ignore it? Look for workflow guidance, prompts, and UI friction that prevents over-reliance.

  7. What logging is available to the school, and for how long? Strong evidence includes admin audit logs, export options, and role-based access.

  8. How do you handle incidents, including harmful outputs or data exposure? Expect an incident response plan, notification timelines, and a named contact route.

  9. What changes can you make without telling us (models, features, defaults)? Good evidence includes release notes, change control, and opt-out options.

  10. What security controls are in place (encryption, access controls, pen testing)? Ask for a current security overview and independent assurance where available.

  11. What documentation can you provide for our records? Look for a pack you can file: DPIA support, policies, and technical notes.

  12. Can you support an exit plan? Strong answers cover data export, deletion confirmation, and timelines.

If you want a structured way to test vendor claims in your own setting, borrow the spirit of a classroom evaluation protocol such as the Claude evaluation protocol, but run it as a procurement pilot with controlled data and clear success criteria.
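If it helps to keep vendor answers comparable across suppliers, a simple structured record is usually enough. The sketch below is illustrative only; the field names, file paths and example answers are assumptions to adapt to your own procurement template, and a spreadsheet captures the same information just as well.

```python
from dataclasses import dataclass, field

# Illustrative procurement record. Question wording, roles, file paths and
# answers are assumptions - adapt them to your own template.
@dataclass
class VendorAnswer:
    question: str                       # one of the 12 questions above
    answer_summary: str                 # the vendor's answer in a sentence or two
    evidence_files: list[str] = field(default_factory=list)  # links to filed documents

@dataclass
class ProcurementRecord:
    tool_name: str
    vendor: str
    reviewed_by: str                    # a named role, not just a person
    answers: list[VendorAnswer] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Questions with no filed evidence - chase these before signing."""
        return [a.question for a in self.answers if not a.evidence_files]

record = ProcurementRecord(
    tool_name="AI marking assistant",
    vendor="Example EdTech Ltd",
    reviewed_by="Deputy Head (procurement lead)",
    answers=[
        VendorAnswer(
            question="Is any pupil personal data used to train models?",
            answer_summary="No by default; contractual commitment provided.",
            evidence_files=["contracts/example-edtech-data-terms.pdf"],
        ),
        VendorAnswer(
            question="What logging is available to the school, and for how long?",
            answer_summary="Admin audit log with CSV export; retention unclear.",
        ),
    ],
)

print(record.gaps())  # -> ["What logging is available to the school, and for how long?"]
```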

Risk register in practice

A lightweight AI risk register is a living list of “what could go wrong, how we reduce it, and how we’ll know”. Keep it short enough that it gets used.

Template fields that work well in schools include: system name and version; owner (named role); intended purpose; user groups (staff/pupils); data categories; decision impact level (low/medium/high); key risks (privacy, safeguarding, bias, accuracy, over-reliance, security, assessment integrity); existing controls; required controls; evidence held (links to files); residual risk rating; review date; incident log link; and “change triggers” (for example, model update, new feature, expanded roll-out).

In practice, this looks like a single row for your AI marking assistant, noting that it must not be used for final grading, that staff must review outputs, and that prompts must avoid identifiable pupil details unless a DPIA supports it. Another row might cover an AI chatbot for homework help, highlighting age-appropriateness, content filtering, and how pupils report problematic responses. If you already run termly reviews of tools, you can connect this to an evidence cycle like the end-of-year AI audit evidence pack, so your register becomes a summary rather than a filing cabinet.
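If your register lives in a spreadsheet, that is fine; the sketch below simply shows one illustrative row using the fields listed above, with assumed ratings, dates, controls and file paths you would replace with your own.

```python
import csv

# One illustrative register row using the fields described above. The ratings,
# dates, controls and file paths are assumptions - replace them with your own.
REGISTER_FIELDS = [
    "system", "owner", "intended_purpose", "users", "data_categories",
    "impact_level", "key_risks", "existing_controls", "required_controls",
    "evidence", "residual_risk", "review_date", "incident_log", "change_triggers",
]

marking_assistant = {
    "system": "AI marking assistant v2",
    "owner": "Assessment lead",
    "intended_purpose": "Draft formative feedback; not final grading",
    "users": "Staff only",
    "data_categories": "Pupil work; no identifiable details unless a DPIA supports it",
    "impact_level": "medium",
    "key_risks": "accuracy; over-reliance; assessment integrity",
    "existing_controls": "Staff review all outputs; prompts exclude pupil names",
    "required_controls": "Termly sample check of feedback quality",
    "evidence": "records/ai-marking-assistant-system-record.pdf",
    "residual_risk": "low",
    "review_date": "2026-03-01",
    "incident_log": "logs/ai-marking-assistant-incidents.csv",
    "change_triggers": "model update; new feature; expanded roll-out",
}

with open("ai-risk-register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=REGISTER_FIELDS)
    writer.writeheader()
    writer.writerow(marking_assistant)
```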


Documentation you can maintain

Audit-ready documentation is not about volume; it is about consistency. Aim for a small set of files that you can actually keep current.

File a one-page “AI system record” per tool (intended purpose, users, data, settings, oversight, and prohibited uses). Add the vendor evidence pack (contracts, security notes, sub-processor list, change logs). Keep your DPIA or privacy impact note where relevant, and a short staff guidance sheet that matches how the tool is used in your school. Finally, maintain an incident and change log: what changed, who approved it, and what was communicated.
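A one-page record does not need special software. As a minimal sketch, assuming illustrative headings and example values rather than any prescribed format, something like the following template is enough to produce a consistent record per tool.

```python
# Illustrative generator for a one-page AI system record. The headings follow
# the paragraph above; the example tool and values are assumptions.
RECORD_TEMPLATE = """\
AI system record: {tool}

Intended purpose: {purpose}
Users: {users}
Data in / out / stored: {data}
Key settings: {settings}
Human oversight: {oversight}
Prohibited uses: {prohibited}
Owner and next review: {owner}, {review_date}
"""

print(RECORD_TEMPLATE.format(
    tool="Homework help chatbot",
    purpose="Pupil-facing homework hints on approved topics",
    users="KS3 pupils on supervised accounts",
    data="Questions typed by pupils; chat history deleted after 30 days",
    settings="Content filter on; staff-only admin access",
    oversight="Class teacher spot-checks transcripts weekly",
    prohibited="Coursework drafting; any use with identifiable personal data",
    owner="Head of Computing",
    review_date="start of next term",
))
```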

Ownership matters. A workable model is: IT owns technical configuration and access; the DPO (or data protection lead) owns the DPIA and data processing terms; safeguarding leads own child-safety risks and reporting routes; curriculum or assessment leads own acceptable pedagogical use; governors or a delegated committee own oversight and challenge. Review cycles can be termly for high-impact tools and annually for low-impact ones, with immediate review triggered by major product changes. If you need a practical “privacy-by-default” roll-out pattern, the minimum viable back-to-school AI toolkit is a useful template to adapt.

Operational controls

Controls should fit the realities of schools: busy staff, mixed confidence, and rapid tool updates. Human oversight is your anchor. Make it explicit when AI can draft but not decide, and when a second-adult check is required. For example, if AI suggests safeguarding keywords from pupil writing, staff should treat it as a prompt to review context, not a verdict.

Logging is often overlooked until something goes wrong. Ensure admin logs show who accessed the tool, what settings changed, and when. For classroom-facing tools, consider whether you need activity logs for safeguarding and behaviour follow-up, and how long you will keep them.
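As an illustration of the minimum a useful admin log needs to capture, the sketch below assumes a simple who/what/when entry format and a 12-month retention period; both are assumptions to replace with whatever your vendor actually exports and whatever your policy actually says.

```python
from datetime import datetime, timedelta

# Illustrative retention check on exported admin log entries. The entry format
# (who, what, when) and the 12-month retention period are assumptions.
RETENTION = timedelta(days=365)

log_entries = [
    {"when": "2025-09-01T08:30:00", "who": "it-admin", "action": "Lowered content filter level"},
    {"when": "2024-05-14T10:05:00", "who": "it-admin", "action": "Enabled pupil accounts"},
]

cutoff = datetime.now() - RETENTION
for entry in log_entries:
    if datetime.fromisoformat(entry["when"]) < cutoff:
        print("Past retention, review or delete:", entry["who"], entry["action"])
```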

Incident reporting needs a simple route. Staff should know what to do if an AI tool produces sexual content, hateful language, or discloses personal data. A short form linked from your safeguarding or IT helpdesk system is usually enough, provided someone triages it quickly and records outcomes.

Change management is the hidden risk. Many AI systems change behaviour when the underlying model updates. Require vendors to notify you of material changes, and set an internal rule: no new features switched on without a named approver and an updated “AI system record”. Even a brief, termly INSET micro-routine can keep practice aligned; the INSET day AI workshop micro-routines approach is a practical way to embed this without turning it into a compliance exercise.
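The internal rule can be made very concrete. The sketch below is a hypothetical approval check, not any vendor's API: the field names are assumptions, and the point is simply that a change stays off until a named approver, an updated system record, and a staff notice are all in place.

```python
# Hypothetical internal change-control check, not a vendor API. The field
# names are assumptions; nothing goes live until all three conditions hold.
def approve_change(change: dict) -> bool:
    """Return True only when the internal go-live rule is satisfied."""
    return bool(
        change.get("approved_by")               # named approver (a role, not blank)
        and change.get("system_record_updated") # AI system record refreshed
        and change.get("staff_notice_sent")     # staff told what is changing
    )

pending = {
    "tool": "AI marking assistant",
    "description": "Vendor switching the default model in the next release",
    "approved_by": "",                  # not yet named, so the feature stays off
    "system_record_updated": False,
    "staff_notice_sent": False,
}
print(approve_change(pending))  # False
```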

Alignment with UK guidance

Although the EU AI Act is not your direct rulebook, your governance should still align with UK expectations. Assessment integrity is a clear example. Where qualifications or formal assessments are involved, your controls should support authenticity, transparency, and appropriate use of AI assistance. In practice, that means clear staff guidance on feedback boundaries, pupil declarations where required, and consistent handling of suspected malpractice.

Data protection alignment is equally central. An ICO-style approach expects clarity on lawful basis, minimisation, transparency, security, retention, and processor management. Your vendor questions and documentation set should make those points easy to evidence. Safeguarding alignment means age-appropriate access, content controls, reporting routes, and staff training that treats AI outputs as untrusted until checked.

If you are also mapping AI tools to curriculum and teaching expectations, keep governance connected to implementation planning rather than separate from it. The National Curriculum AI implementation pack can help you keep pedagogy, compliance, and procurement in step.

A 30-day plan

In 30 days, you can move from “we have AI tools” to “we can evidence safe use”. In week one, SLT should agree a short list of approved AI use cases and appoint owners for procurement, data protection, safeguarding, and assessment integrity. IT can inventory current tools and disable unmanaged add-ons where possible. In week two, run vendor questioning for your top three tools by usage or risk, and draft one-page AI system records for each. In week three, create the lightweight risk register and agree incident reporting routes, including what counts as a notifiable issue internally. In week four, run a short staff briefing on intended purpose, prohibited uses, and how to report concerns, then schedule the first review date and governor update.

Done well, this is not about chasing an EU label. It is about building the habit of asking better questions, keeping evidence you can find, and making it easier for staff to use AI confidently within clear boundaries.

To calmer procurement meetings and clearer audit trails ahead, The Automated Education Team
