
What’s new for 2025–26
For most schools, the biggest shift is not a brand-new rule. It is the expectation that you can show, quickly and confidently, how your approach to AI hangs together across assessment, safeguarding, curriculum and data protection. The 2025–26 guidance landscape pushes schools towards fewer vague statements (“AI may be used appropriately”) and more operational detail (“these tasks allow AI in these ways, and we can evidence authenticity”).
A second change is the growing emphasis on “evidence of process” for pupil work. Where coursework, NEAs and extended writing are involved, schools are being nudged to record how work was produced, not just what was submitted. If you already use drafting, conferencing, version history and short viva-style checks, you are ahead. If not, you will want to adopt a simple routine now, rather than firefighting in exam season. If you want a practical boundary model, the traffic-light approach in Exam-season AI boundaries is a useful starting point.
What has not changed is equally important. Teachers still make professional judgements about pupils’ learning. Malpractice is still malpractice, even if a chatbot did it. Safeguarding thresholds do not move because content was synthetic. And UK GDPR principles remain the spine of your data decisions: lawful basis, data minimisation, transparency and security.
One-page crosswalk
Think of 2025–26 as a crosswalk between five “owners” of expectations: DfE (school-wide approach and safety), Ofqual (qualification-level integrity), JCQ (centre arrangements and malpractice processes), RSHE/PSHE (pupil literacy and harms), and data protection (UK GDPR and procurement). They overlap, but each asks for different evidence.
- DfE: your whole-school stance, including staff training, consistent messaging to pupils, safeguarding routes and sensible tool controls.
- Ofqual: authenticity and fairness in regulated qualifications, which means subject- and task-level clarity.
- JCQ: centre operations, covering what your centre tells candidates, how you detect and handle suspected malpractice, and how you record decisions.
- RSHE/PSHE: teaching pupils to navigate AI-related risks, including deepfakes, consent, manipulation and reporting.
- Data protection: governance, meaning DPIAs where needed, contracts, retention, and being transparent with families.
If you are trying to reduce paperwork, aim for one joined-up “AI in school” policy suite with annexes, rather than separate documents that contradict each other. A good way to pressure-test coherence is to run a short internal audit; the structure in End-of-year AI audit can be adapted for August planning.
Assessment integrity updates
The practical 2025–26 move is to treat AI like any other tool that can help or hinder authenticity, and to design assessment routines that make authenticity visible. Schools are increasingly expected to do three things well: set clear boundaries, teach those boundaries explicitly, and keep simple evidence when boundaries matter.
For malpractice and authenticity, update candidate and parent communications so they are unambiguous about what counts as unacceptable assistance. Then make sure departments translate that into task instructions pupils actually read. A typical Monday-morning fix is to add a short “AI use statement” to the front page of relevant assignments: what is permitted (for example, planning prompts), what is not (generating final prose), and what must be acknowledged (for example, if AI was used to check spelling). The goal is not to catch pupils out; it is to remove plausible deniability.
For controlled assessment and coursework, build “evidence of process” into the workflow. In English, that might be a 10-minute in-class plan, a first paragraph written under supervision, and a short teacher conference where the pupil explains choices. In science, it might be annotated practical notes, a photo of results tables, and a brief oral check on method and variables. Where digital tools are used, agree what you will collect: version history screenshots, draft checkpoints, or a short reflection log. The approach described in Evidence-first writing instruction aligns well with this direction, because it makes the process teachable and assessable.
Finally, ensure your malpractice process is not theoretical. Exams officers and SLT should be confident about the threshold for suspicion, who investigates, how evidence is stored, and how decisions are recorded. A calm, consistent process protects staff as much as it protects standards.
Teaching and curriculum implications
You do not need to rewrite schemes of work to “add AI”. You do need to make AI literacy explicit in the places pupils already meet knowledge, sources and judgement. In history, that might be a short activity comparing a textbook paragraph with an AI-generated summary, then discussing omissions and bias. In languages, it might be using AI to generate practice sentences, followed by a teacher-led check for register and accuracy. In art and media, it might be a discussion of authorship and style, linked to practical work.
The simplest curriculum move for September is to agree three to five “AI literacy moments” per year group that departments can slot into existing units. Keep them small and repeatable: how to verify claims, how to cite assistance, how to recognise synthetic media, and how to protect personal data. If you want a structured way to gather pupil voice on what is actually happening, Student AI listening cycle offers a light-touch approach that can inform these moments without turning into a major project.
RSHE/PSHE and safeguarding
Safeguarding teams are likely to feel the 2025–26 changes most sharply around deepfakes, coercion and reputational harm. Pupils do not need a technical lecture; they need clear norms, clear language, and a clear route to help. In RSHE/PSHE, treat AI-enabled harms as a continuation of existing online safety themes: consent, power, exploitation, bullying, and reporting.
A practical classroom script matters. Tutors and PSHE teachers benefit from a short, agreed set of phrases that reduce panic and increase disclosure. For example: “If an image has been made or shared without consent, it is not your fault, and we will help.” Or: “If you are worried you might have shared something, tell us early; we can act faster.” Link this to your reporting routes, including anonymous reporting where available, and to staff responsibilities for recording and escalation.
Deepfakes deserve explicit coverage because they blur “real” and “fake” in a way that can destabilise trust. A short media-literacy sequence using safe, pre-selected examples can help pupils understand manipulation without inadvertently teaching them how to create harmful content. If you are exploring synthetic video in learning, Sora in the classroom includes a grounded discussion of workflows and safety that can inform staff training.
Data protection in practice
The Monday-morning version of data protection is not “do we like this tool?” but “can we justify this processing?”. Start with minimum-data rules: if a tool can work without pupil personal data, configure it that way. If accounts are required, use school-managed identities where possible, and avoid collecting sensitive data unless there is a clear educational need.
DPIAs should be routine for higher-risk tools, especially those that process pupil data at scale, generate profiles, or involve new vendors. Procurement should include contract checks for data processing terms, retention, sub-processors and international transfers. Logging and retention need decisions too: what usage logs are kept, who can access them, and for how long. Transparency to families should be plain English and practical, explaining what tools are used, what data is involved, and what choices exist.
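If it helps to make the register and retention decisions concrete, here is a minimal sketch of how a DPO might record one tool's data profile in a structured, reviewable form. It is illustrative only: the field names (lawful_basis, log_retention_months and so on) are our assumptions rather than a prescribed schema, and a dated spreadsheet serves the same purpose.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a register of approved AI tools and their data profiles.

    The field names are illustrative assumptions, not a prescribed schema;
    the same information can live in a simple spreadsheet.
    """
    name: str
    supplier: str
    personal_data: list[str]      # categories processed; ideally empty
    lawful_basis: str             # the UK GDPR basis you rely on
    dpia_completed: bool          # expected for higher-risk processing
    sub_processors: list[str]     # as named in the data processing terms
    log_retention_months: int     # agreed retention for usage logs
    next_review: date             # annual review date

# Hypothetical example: a feedback tool configured without pupil accounts.
register = [
    AIToolRecord(
        name="Example writing-feedback tool",
        supplier="Example Vendor Ltd",
        personal_data=[],                      # runs on anonymised text only
        lawful_basis="public task",
        dpia_completed=True,
        sub_processors=["Example Cloud Host"],
        log_retention_months=12,
        next_review=date(2026, 8, 1),
    ),
]

# Flag anything that needs attention before September.
for tool in register:
    if tool.personal_data and not tool.dpia_completed:
        print(f"Review needed: {tool.name} processes personal data without a DPIA")
    if tool.next_review <= date(2025, 9, 1):
        print(f"Review due: {tool.name}")
```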
If you are weighing hosted versus self-hosted options, or considering open models, the decision points in Meta Llama 4 decision pack can help you frame risk, cost and control without getting lost in jargon.
Operational changes before September
The most effective schools will make a small set of visible changes that staff can follow without a flowchart. Update your acceptable use and staff code of conduct so expectations for AI are explicit, then align department assessment statements with those boundaries. Ensure safeguarding training includes AI-enabled harms, and that staff know how to respond to disclosures involving synthetic media.
Alongside training, check tool settings. Turn off unnecessary data sharing, disable chat history where appropriate, and standardise age-appropriate access. Then communicate early: a short letter to families explaining your approach, plus a pupil-facing “AI rules of the road” that matches what teachers will say in class. If you want a fast, practical ramp-up plan for September, AI foundations sprint is a useful template for a two-week prep window.
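Where it helps to evidence that settings check, the short sketch below shows one way to note configuration decisions per tool, alongside the register above. The setting names are placeholders, not real vendor flags; record whatever each console actually calls the control, plus who reviewed it and when.

```python
# Illustrative configuration notes for one tool. Setting names are
# placeholders, not real vendor flags; record the console's own labels.
tool_settings = {
    "tool": "Example writing-feedback tool",
    "data_sharing_for_model_training": "off",   # opted out where offered
    "chat_history": "disabled",
    "minimum_age_enforced": 13,
    "log_access": ["DPO", "IT lead"],           # who may read usage logs
    "reviewed_on": "2025-08-20",
    "reviewed_by": "IT lead",
}
```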
Role-based action list
SLT should own the joined-up policy suite and ensure it is teachable: staff can summarise it in one minute, and pupils can repeat it back. They should also ensure governors receive a termly snapshot of compliance and incidents.
The DSL should integrate AI scenarios into safeguarding training and ensure reporting routes cover deepfakes, coercion and image-based abuse. The DSL also needs a clear stance on evidence handling: what to preserve, how to avoid re-sharing harmful material, and when to involve external agencies.
The exams officer should update candidate instructions, centre policies and staff briefings so malpractice processes explicitly include AI. They should also coordinate with HoDs on task-level boundary statements for coursework and controlled assessment.
The DPO/IT lead should set minimum-data defaults, run DPIAs where required, and ensure procurement and contracts meet UK GDPR expectations. They should also set and document logging/retention and access controls.
HoDs and subject leaders should translate policy into assessment design: add AI use statements, build evidence-of-process checkpoints, and teach short AI literacy moments within existing units. Tutors should reinforce the shared language in assemblies and tutor time, especially around reporting routes and consent.
Compliance checklist (printable)
Use this as an evidence pack for governors and inspection readiness. Keep it simple, dated, and easy to retrieve.
- A dated whole-school AI policy suite, including acceptable use, staff conduct, assessment integrity and safeguarding annexes
- Department statements for AI use in key assessment types (coursework/NEA, extended writing, homework)
- Candidate and parent communications on AI and malpractice, with dates and delivery method
- A recorded malpractice process flow, including roles, evidence handling and decision recording
- Examples of “evidence of process” in at least three subjects (templates, checkpoints, or moderation notes)
- RSHE/PSHE curriculum mapping showing where AI literacy, deepfakes and consent are taught
- Staff training records covering AI safety, assessment integrity and reporting routes
- DPIAs (where required), plus a register of approved AI tools and their data profiles
- Vendor contracts/data processing terms, retention decisions, and a plain-English family privacy notice update
- Tool configuration notes (settings, age restrictions, logging access) and an annual review date
Common pitfalls and red flags
A frequent pitfall is writing a strong policy but leaving staff to improvise task instructions. If one department allows AI planning and another bans all use without explanation, pupils will test the gaps. Instead, standardise a small set of permitted modes (planning, feedback, language support) and require acknowledgement when used.
Another red flag is relying on “AI detectors” as proof. They can be unreliable and can introduce bias. Use them, if at all, as a prompt for further investigation, never as a verdict. Prioritise process evidence, teachers’ knowledge of each pupil’s usual voice and standard of work, and short authenticity conversations.
Finally, watch for safeguarding drift: treating deepfakes as “just online drama”. Image-based abuse, coercion and harassment can escalate quickly. Staff need confidence to act, record and escalate, even when the media is synthetic.
May your September rollout be calm, consistent and well evidenced.
The Automated Education Team