
What’s changed in 2025/26
AI ethics in 2025/26 is less about hypothetical futures and more about ordinary choices pupils already face. The shift is practical: tools now act with more autonomy, a short voice note is enough to produce convincing fake media, and “helpful” chatbots can slide into companionship. Meanwhile, schools are increasingly offered AI for admin tasks such as drafting letters, triaging queries, summarising concerns, and flagging patterns in attendance and behaviour. Ethics teaching needs to catch up with that lived reality, and it needs to do so without turning into a debate club where the loudest view wins.
A useful starting point is to separate three things pupils often blur together: what an AI system can do, what it should do, and what we are willing to be responsible for. If you already run pupil voice activities, you can ground your curriculum in what learners are encountering by using a simple listening cycle before you teach ethics content; see Student AI listening cycle for a practical way to do that without over-surveying.
The 2025/26 case-study set
These 12 scenarios are designed to be phase-banded, but they also work as a spiral: revisit the same themes with increasing complexity. Each scenario includes teacher notes: the core ethical tension, likely misconceptions, and a “keep it safe” reminder so discussion stays anchored in learning rather than personal disclosures.
Primary (4 scenarios)
1) The helpful homework bot. A pupil uses an AI tool to “check” spelling, but it rewrites the whole paragraph. Teacher note: focus on fairness, learning purpose, and ownership; avoid shaming, and emphasise choices and boundaries.
2) The talking grandparent. A family uses a voice-clone app to make a deceased relative “read” a bedtime story. Teacher note: explore consent and feelings with care; offer an opt-out and keep examples fictionalised.
3) The class pet companion. The class tries an AI “pet” that chats and reacts to messages. Some pupils become upset when it “ignores” them. Teacher note: introduce the idea that systems simulate care; discuss healthy relationships and emotional safety.
4) The playground rumour video. A short clip appears to show a pupil saying something mean, but it’s been edited. Teacher note: connect to media literacy and reporting routes; keep names anonymous and emphasise “pause and verify”.
KS3 (4 scenarios)
5) Agentic AI revision planner. A tool offers to organise a revision schedule and automatically messages reminders to friends “to help”. Teacher note: autonomy, permission, and unintended consequences; ask who is accountable for the messages.
6) Deepfake voice note. A voice note “from a teacher” circulates, telling pupils a test is cancelled. Teacher note: verification habits, trust, and harm; link to practical checks rather than fear.
7) AI companion and secrets. A pupil confides worries to an AI companion app that replies warmly and suggests “keeping it private”. Teacher note: safeguarding boundaries, trusted adults, and why secrecy is a red flag; avoid inviting disclosures.
8) AI in school admin. The school trials an AI tool to draft parent communications and summarise incident logs. Teacher note: privacy, bias, and human oversight; discuss where mistakes could land and who can challenge them.
KS4/KS5 (4 scenarios)
9) Agentic AI job applications. A student uses an AI agent to apply for part-time work, including generating references and auto-filling forms. Teacher note: authenticity, fraud risk, and long-term trust; distinguish “support” from “substitution”.
10) Synthetic intimate audio. A manipulated audio clip is used to embarrass someone, with plausible deniability. Teacher note: consent, harm, and reporting; keep the scenario non-graphic, and focus on rights and responsibilities.
11) Predictive pastoral flags. An AI system flags pupils as “high risk” for attendance drop or behaviour incidents. Teacher note: fairness, transparency, self-fulfilling prophecies, and appeal processes; consider how labels change adult expectations.
12) Coursework co-authoring. An AI tool helps generate structure and examples; the student edits lightly and submits. Teacher note: evidence of process, integrity, and skill development; connect to explicit writing instruction such as From autocomplete to co-authoring.
If you want more ready-to-run dilemmas, you can also draw from the earlier phase-banded set and swap in these updated technologies: AI ethics dilemmas toolkit.
A repeatable discussion protocol
A good ethics discussion is structured enough to prevent “hot takes”, but open enough that pupils can change their minds. This 10–20 minute protocol works in tutor time, PSHE, computing, citizenship, or any subject where a scenario naturally arises.
Start with a one-minute silent read of the scenario, then ask pupils to underline what is known versus assumed. Next, give two minutes for “stakeholder mapping” in pairs: who is affected, who benefits, who bears the risk. Then run a three-minute “options scan”: list three realistic actions a person could take next, including doing nothing. Only after that do you move into reasoning: pupils choose one option and justify it using a shared frame such as harms/benefits, rights/consent, fairness, and accountability.
The crucial move is the final two minutes: a “confidence check” where pupils write what evidence would change their view. This turns the discussion from opinion to reasoning and makes it assessable. If you need a rapid way to evaluate a new tool or headline before it hits your classroom, adapt the same structure from Rapid evaluation protocol.
Vocabulary worth pre-teaching
Ethics lessons go better when pupils have a small, shared vocabulary. You do not need a full unit on philosophy; you do need enough language to talk precisely.
Pre-teach the minimum: consent, privacy, bias, accountability, transparency, reliability, and provenance (where something came from). Add “agentic AI” as “a system that can take steps to reach a goal”, and “deepfake” as “convincing synthetic media”. For older pupils, include “automation bias” (over-trusting a system), “data minimisation” (collect the least needed), and “appeal” (how decisions can be challenged). When discussing synthetic video or voice, it helps to connect ethics to practical media checks; Sora classroom reality check offers workflows that keep the focus on verification rather than sensationalism.
Assessment that captures thinking
Assessing ethics is often avoided because it feels like marking opinions. The trick is to assess the quality of reasoning and the evidence of process.
Use a light-touch rubric with three strands: clarity (states the dilemma and key facts), reasoning (uses at least two ethical lenses and considers trade-offs), and responsibility (identifies who must act and what safeguards are needed). Keep it to four levels with short descriptors so you can mark at a glance. In practice, a pupil can argue for different actions and still score well if they justify carefully and acknowledge risks.
Exit tickets can do most of the work. Ask for: “One stakeholder I hadn’t considered”, “One assumption I made”, and “One piece of evidence I would need”. Collecting these over time builds a portfolio of ethical thinking without long written essays. For older pupils, add an “evidence of process” requirement: a brief log of prompts used, changes made, and checks performed. This aligns neatly with integrity conversations during assessment periods; AI traffic-light boundaries is a helpful companion for setting expectations.
Cross-phase adaptations
In Primary, keep scenarios close to everyday life and focus on feelings, fairness, and help-seeking. A Year 5 class might role-play “trusted adult” conversations after the AI companion scenario, practising what to say and when to report. Use simple sentence stems such as “I think this could be unfair because…” and “A safer choice would be…”.
In KS3, lean into routines. Short, repeated discussions build habits: pause, verify, consider stakeholders, then decide. A transition day carousel can work well for establishing shared language early in secondary; AI literacy carousel can be adapted so one station is an ethics mini-protocol using a single case card.
In KS4/KS5, increase the realism and the accountability. Ask pupils to propose safeguards, not just decisions: what should be logged, who reviews it, what a complaints route looks like, and how to avoid bias. Tutor time and PSHE are ideal for the school-admin and predictive-flag scenarios because they connect to trust and institutional responsibility, not just personal choices.
Safeguarding, privacy and inclusion
Ethics lessons can surface sensitive experiences, especially with deepfakes, harassment, or companionship themes. Use fictionalised scenarios, remind pupils not to share real names, and offer a clear route to speak privately afterwards. If a pupil discloses harm, follow your safeguarding procedures; do not treat it as “class discussion content”.
Privacy matters even in discussion. Avoid asking pupils what apps they personally use. Ask what “someone in this situation” might do instead. Inclusion matters too: consider how scenarios land for pupils with SEND, anxiety, or previous bereavement. Offer opt-out alternatives such as analysing the scenario from a third-person perspective, or focusing on system design rather than personal feelings.
Finally, model good governance. If you discuss AI used in school systems, be transparent about what your school does and does not use, and where pupils can ask questions. An annual review structure helps keep policies aligned with classroom practice; Acceptable use policy refresh can support that wider conversation.
Printable pack
To make this kit easy to run, package it as four printable items: case cards (one page per phase), a prompt sheet, a set of role cards (student, parent/carer, teacher, school leader, tool provider, regulator), and a one-page teacher run sheet.
The prompts should mirror the protocol: “What do we know?”, “What are we assuming?”, “Who is affected?”, “What are the options?”, “What are the trade-offs?”, and “What evidence would change your view?” Sentence stems keep it accessible: “A risk of this choice is…”, “This is fair/unfair because…”, “The person responsible for checking is…”. The run sheet should include timings, a safeguarding reminder, and a quick rubric so you can capture reasoning in the moment without turning discussion into paperwork.
May your ethics discussions stay calm, curious, and evidence-led.
The Automated Education Team