
What it must do
In 2025–26, an AI Acceptable Use Policy (AUP) needs to do fewer things, better. It must set clear boundaries for learning and assessment, establish safe defaults for data protection, and give staff language they can use in real classrooms. It should also define what “integrity” looks like when AI is available everywhere, including at home. If your current AUP reads like a list of banned tools and vague warnings, it is likely doing too much of the wrong work.
What it must stop trying to do is “future-proof” every AI development. You cannot keep a policy accurate by naming every model, feature, or app. Instead, define categories of use (planning, feedback, drafting, revision, translation, accessibility), specify what evidence pupils must keep, and set a simple approval route for tools. If you want a quick way to ground this in practical boundaries, the approach in exam-season AI traffic lights is a useful reference point for policy language that teachers can actually apply.
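To make “categories, not tools” concrete, the categories and their evidence expectations can be captured as simple structured data that a policy annex (or a school system) could reuse. This is a minimal sketch in Python: the category names follow the paragraph above, but the fields, the evidence wording, and the default-to-ask rule are illustrative assumptions, not a required schema.

```python
# Illustrative sketch: categories of AI use rather than named tools.
# Field names and evidence wording are assumptions, not a standard.
USE_CATEGORIES = {
    "planning":      {"allowed": True,  "evidence": "keep the plan and the prompt used"},
    "feedback":      {"allowed": True,  "evidence": "keep the original draft and annotate changes"},
    "drafting":      {"allowed": "with approval", "evidence": "keep dated drafts"},
    "revision":      {"allowed": True,  "evidence": "keep before/after versions"},
    "translation":   {"allowed": True,  "evidence": "note the source language and the tool"},
    "accessibility": {"allowed": True,  "evidence": "record the adjustment agreed with staff"},
}

def lookup(category: str) -> str:
    """Return the rule for a category, defaulting to 'ask first' for anything unlisted."""
    rule = USE_CATEGORIES.get(category)
    if rule is None:
        return "Not yet categorised: ask a teacher before using AI for this."
    return f"Allowed: {rule['allowed']}. Evidence to keep: {rule['evidence']}."

print(lookup("translation"))
print(lookup("exam answers"))
```

The useful property is the default: anything not yet categorised routes to a conversation rather than a loophole.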
A helpful reframe is to rename the document an “AI Use & Integrity Agreement”. “Agreement” signals shared responsibility, annual renewal, and a focus on behaviours rather than brand names.
Annual refresh checklist
Treat July/August as your annual refresh window. The goal is not a rewrite; it is a structured review that updates what matters and leaves the rest stable. Here are 12 items worth checking every year.
1. Confirm your purpose statement: what AI is for in your school, and what it is not for.
2. Update your definitions so staff and pupils share the same meaning of “generate”, “edit”, “summarise”, “translate”, and “coach”.
3. Refresh your traffic-light boundaries for classroom tasks and assessments, including homework and remote study.
4. Review your “evidence-of-process” expectations: what pupils must retain to show independent thinking over time.
5. Revisit your malpractice section so it is consistent with your behaviour and assessment policies, and so staff know the difference between misuse, misunderstanding, and deliberate deception.
6. Update your tool approval list and your criteria for new tools, including what happens when a tool changes its terms.
7. Check your minimum-data rules and your default settings for accounts, prompts, and sharing.
8. Review retention: what is stored, where, for how long, and who can access it.
9. Refresh staff training expectations, including induction materials for new colleagues.
10. Refresh pupil training, including a short “how we use AI here” routine that tutors can deliver.
11. Update parent/carer communications, focusing on what has changed this year and what you want families to do at home.
12. Run a quick governance check: who signs off, when, and what evidence you keep.
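If it helps to run the refresh as a working document rather than a memory exercise, the twelve items can be tracked as simple structured data. A minimal sketch follows, assuming illustrative item names, owners, and status values; none of these are a prescribed schema.

```python
# Minimal annual-refresh tracker (illustrative; adapt names and owners locally).
from dataclasses import dataclass

@dataclass
class RefreshItem:
    name: str                     # checklist item from the list above
    owner: str                    # who reviews it, e.g. "SLT", "DPO", "DSL"
    status: str = "not started"   # "not started" | "reviewed" | "updated"
    notes: str = ""               # what changed, or why nothing needed to

checklist = [
    RefreshItem("Purpose statement", owner="SLT"),
    RefreshItem("Shared definitions", owner="SLT"),
    RefreshItem("Traffic-light boundaries", owner="Heads of department"),
    RefreshItem("Evidence-of-process expectations", owner="Heads of department"),
    RefreshItem("Malpractice section", owner="SLT"),
    RefreshItem("Tool approval list and criteria", owner="DPO/IT"),
    RefreshItem("Minimum-data rules and defaults", owner="DPO/IT"),
    RefreshItem("Retention rules", owner="DPO/IT"),
    RefreshItem("Staff training and induction", owner="CPD lead"),
    RefreshItem("Pupil training routine", owner="Tutor team"),
    RefreshItem("Parent/carer communications", owner="SLT"),
    RefreshItem("Governance check", owner="Governors"),
]

outstanding = [item.name for item in checklist if item.status == "not started"]
print(f"{len(outstanding)} of {len(checklist)} items still to review")
```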
If you want a tidy way to capture outputs from this refresh, you can model it on an end-of-year evidence pack such as this AI audit action plan, even if you run it as a lighter-touch version.
Assessment integrity alignment
A policy becomes real when it matches assessment practice. Start with traffic lights, but make them specific. “Green” might include spelling support, translation for access, or generating quiz questions from class notes. “Amber” might include planning an essay structure with prompts, or using AI feedback on a first draft if pupils keep the original and annotate changes. “Red” should cover generating final answers for assessed work, impersonating a pupil’s voice, or using AI during closed conditions.
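As a worked illustration of how specific those boundaries can be, the examples above translate directly into a lookup that a departmental annex could mirror. The task descriptions come from this paragraph; the structure, and the default for unlisted tasks, are assumptions rather than a mandated format.

```python
# Traffic-light examples from this section, expressed as a simple lookup.
# The default for unlisted tasks is an assumption; schools may choose differently.
TRAFFIC_LIGHTS = {
    "green": [
        "spelling support",
        "translation for access",
        "generating quiz questions from class notes",
    ],
    "amber": [
        "planning an essay structure with prompts",
        "AI feedback on a first draft (original kept, changes annotated)",
    ],
    "red": [
        "generating final answers for assessed work",
        "impersonating a pupil's voice",
        "using AI during closed conditions",
    ],
}

def classify(task: str) -> str:
    """Return the colour for a listed task; unlisted tasks default to checking first."""
    for colour, tasks in TRAFFIC_LIGHTS.items():
        if task in tasks:
            return colour
    return "unlisted: treat as amber and check with the teacher"

print(classify("translation for access"))            # green
print(classify("writing my final homework answer"))  # unlisted: treat as amber...
```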
Traffic lights work best when paired with “evidence-of-process”. In a writing task, that might mean pupils submit a planning page, a paragraph-level outline, and a short reflection: what they accepted, rejected, and why. In maths or science, it might mean a photographed working-out trail plus a short oral check. In languages, it might mean a recorded speaking rehearsal and a vocabulary log. This is not about catching pupils out; it is about designing tasks where learning leaves traces.
For malpractice handling, write a simple flow that teachers can follow without needing a specialist. Clarify what counts as a concern, what evidence is appropriate (and what is not), and how you protect pupils from false positives. AI detection tools should not be your primary evidence. Your policy should say so plainly. For deeper practical examples of boundary-setting and scripts staff can use, these integrity check approaches can be adapted beyond exam season.
Data protection defaults
Most AI risk in schools is not “robots replacing teachers”. It is accidental oversharing, inconsistent tool use, and unclear retention. Your agreement should include minimum-data rules that default to safety: no pupil personal data, no safeguarding information, no medical details, and no identifiable case notes in public tools. Where accounts are required, prefer institution-managed accounts with clear admin controls and age-appropriate settings.
Tool approval should be a process, not a spreadsheet that nobody updates. Define who can approve a tool, what checks are required (data handling, age suitability, content controls, export and deletion), and what staff should do when a popular tool appears mid-term. If you are considering self-hosted or open models for tighter control, it helps to understand the trade-offs described in this decision pack on self-hosting and data protection.
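The approval checks named above can also be written as an explicit gate, so that “approved” always means the same checks were run. A minimal sketch follows, assuming four pass/fail checks; real approvals involve judgement and context, not just flags.

```python
# Minimal tool-approval gate using the checks named in this section.
# The pass/fail framing is a simplifying assumption for illustration.
REQUIRED_CHECKS = ("data_handling", "age_suitability", "content_controls", "export_and_deletion")

def approve_tool(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    """Approve only if every required check passed; report anything missing or failed."""
    failures = [check for check in REQUIRED_CHECKS if not checks.get(check, False)]
    return (not failures, failures)

ok, failures = approve_tool({
    "data_handling": True,
    "age_suitability": True,
    "content_controls": False,   # e.g. no way to restrict generated content
    "export_and_deletion": True,
})
print("approved" if ok else f"not approved; failed checks: {failures}")
```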
Retention is where many policies stay vague. Be explicit: if staff paste text into an AI tool, assume it may be stored unless your agreement says otherwise. Set a retention default (for example, “do not store prompts or outputs containing pupil work”), and provide staff with safe prompt templates that avoid personal data. A practical line to include is: “If you wouldn’t put it in an email to a stranger, don’t put it in an AI prompt.”
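One way to make the “email to a stranger” test operational is a quick screen before anything is pasted into a tool. The sketch below is deliberately naive: the patterns are illustrative assumptions, keyword matching misses far more than it catches, and the real control is the staff habit, not the code.

```python
# Deliberately simple pre-paste screen (illustrative, not a real safeguard).
import re

FLAG_PATTERNS = {
    "possible email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "possible phone number": r"\b(?:\d[\s-]?){10,11}\b",
    "safeguarding keyword": r"\b(safeguarding|disclosure|medical)\b",
}

def screen_prompt(text: str) -> list[str]:
    """Return reasons not to paste this text into an AI tool (empty list if none found)."""
    return [label for label, pattern in FLAG_PATTERNS.items()
            if re.search(pattern, text, flags=re.IGNORECASE)]

warnings = screen_prompt("Rewrite this report on J. Smith, contact j.smith@school.org")
print(warnings or "no flags raised; still apply the 'email to a stranger' test")
```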
Stakeholder sign-off
An “AI Use & Integrity Agreement” should have visible ownership. Split approval into sensible layers so that no single person becomes a bottleneck. Senior leaders should approve the overall stance, resourcing, and training expectations. Governors (or your equivalent oversight body) should approve the risk posture, monitoring approach, and annual review cycle. The DSL should sign off safeguarding-related sections, including how staff respond to harmful content and disclosures. Your DPO/IT lead should sign off data protection defaults, tool approval criteria, and retention rules.
Heads of department (or phase leaders) should approve assessment alignment within their subjects, because boundaries look different in art, computing, languages, and science. A short departmental annex can work well: one page of “green/amber/red examples” plus what evidence pupils must keep. This also makes it easier to brief new staff and to explain decisions when pupils move between subjects.
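If it is useful to record who signs what, the layers above reduce to a small sign-off matrix. The roles follow the two paragraphs above; the section names are illustrative assumptions.

```python
# Sign-off matrix sketch: sections mapped to the approvers named above.
SIGN_OFF = {
    "overall stance, resourcing, training expectations": "Senior leaders",
    "risk posture, monitoring, annual review cycle": "Governors",
    "safeguarding-related sections": "DSL",
    "data protection defaults, tool approval, retention": "DPO/IT lead",
    "subject annexes (green/amber/red examples, evidence)": "Heads of department",
}

for section, approver in SIGN_OFF.items():
    print(f"{approver} sign(s) off: {section}")
```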
Pupil and parent/carer communications
The agreement will fail if families only hear about it after a problem. Produce a one-page summary written for pupils and parents/carers: what AI is, what it can be used for, what it cannot be used for, and what pupils must show as evidence. Keep it calm and practical. Include two or three examples, such as “using AI to generate practice questions is allowed; using AI to write your final homework answer is not”.
Alongside the summary, publish FAQs that address the predictable questions: “How will you know?”, “What about accessibility?”, “What if English is an additional language?”, and “What if my child uses AI at home?”. Each year, add a short “what’s changed this year” box. Even small changes matter, because they signal that the agreement is alive and that the school is paying attention.
If you want to build pupil voice into this, a simple listening cycle helps you find where rules are unclear or unrealistic. The structure in this student AI listening cycle can be run quickly early in term and used as evidence for your next annual refresh.
Monitoring that’s realistic
Monitoring should be light-touch, routine, and focused on improvement. Start with logging at the system level: which tools are approved, who owns them, and what training has been delivered. Then add spot-checks that feel like normal learning checks rather than investigations. For example, in an essay unit, a teacher might ask for a five-minute “talk me through your plan” conference with a few pupils each lesson. In a project, pupils might keep a short process journal with dated checkpoints.
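System-level logging can be as light as one record per approved tool. A minimal sketch follows, assuming the three fields mentioned above plus a review date; anything heavier risks becoming the spreadsheet nobody updates.

```python
# Light-touch system log: one record per approved tool.
# Field names are assumptions drawn from this section; the tool name is a placeholder.
from dataclasses import dataclass
from datetime import date

@dataclass
class ToolRecord:
    tool: str
    owner: str                 # named member of staff accountable for the tool
    training_delivered: bool   # has staff training covered this tool yet?
    last_reviewed: date

log = [
    ToolRecord("ApprovedChatTool", owner="Head of Computing",
               training_delivered=True, last_reviewed=date(2025, 9, 1)),
]

needs_training = [record.tool for record in log if not record.training_delivered]
print("training outstanding for:", needs_training or "none")
```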
Classroom routines do more than any technical control. A simple norm such as “AI is used after you have attempted the first step” reduces over-reliance. Another is “show your working, not just your answer”, which aligns with evidence-of-process. If you are supporting early-career teachers to embed these habits, you can borrow from micro-routine thinking in this first-term AI operating manual.
Finally, run a short incident review loop. When something goes wrong, capture what happened, what boundary was unclear, and what you will change: training, task design, or wording. Keep it non-punitive where possible. The aim is to reduce repeat incidents, not to create fear.
September implementation
Implementation needs a start-of-year rhythm. Begin with a staff briefing that does three things: reintroduces the agreement, models two or three safe prompts for planning and feedback, and rehearses the assessment boundaries with subject examples. Then move quickly to tutor-time rollout, using your one-page pupil summary and a short scenario discussion: “Is this green, amber, or red, and what evidence would you keep?”
A 30-day check-in is the difference between a policy and a practice. In week four, ask departments what is working, what pupils are confused about, and which tasks are generating integrity concerns. Update your FAQs and your examples, not your whole agreement. If new AI capabilities have landed over the summer, a rapid evaluation protocol like the one outlined in this release-day school briefing approach can help you respond without rushing policy changes.
An annual refresh is not bureaucracy; it is how you keep boundaries credible, data safe, and expectations fair. When your “AI Use & Integrity Agreement” is reviewed, signed, taught, and checked, it becomes part of the culture rather than a document in a folder.
May your September rollout be calm, clear, and consistently applied.
The Automated Education Team