Exam-Season AI Traffic Lights for Schools

A one-page boundary system with scripts and checks

A classroom poster showing an AI traffic-light system for exam season boundaries

The exam-season problem

Exam season tends to expose the weakest point in many school AI approaches: inconsistent rules. One teacher encourages “use AI to revise”, another bans it outright, and a third quietly ignores it. Students then fill the gaps with guesswork. In the best case, that creates anxiety and arguments. In the worst case, it creates accidental malpractice: a student uses AI in a way that breaches assessment conditions, but genuinely believes it is allowed.

The difficulty is that “AI use” is not one thing. A student can use it to generate practice questions, to rephrase their own notes, to draft an essay, to translate a paragraph, or to produce an entire solution. The same tool can be safe in revision and unacceptable in coursework. Schools also can’t rely on detection tools to sort it out later. So the practical answer is a shared boundary system that makes expectations visible, repeatable, and easy to apply under pressure.

If you want a deeper look at teaching “evidence-first” writing that reduces outsourcing, see From autocomplete to co-authoring. For exam preparation routines that work alongside clear boundaries, this exam-board-aware revision workflow is a useful companion.

The AI traffic-light

A traffic-light system works because it replaces long policy documents with a simple classroom habit: Green means permitted, Amber means restricted and must follow conditions, Red means prohibited. The key is that the colour applies to the context (revision, homework, coursework, controlled assessment, exams), not to a particular app.

Green is “use AI as a study aid”. It supports understanding and practice, but it must not produce assessable work that is submitted as the student’s own. Amber is “use AI only in specific ways, with evidence”. It can help with planning, feedback, accessibility, or checking, but the student must keep proof of process and be able to explain decisions. Red is “no AI involvement”. That includes no prompts, no AI paraphrasing, and no “tidying” afterwards.

To introduce it in five minutes, you need one slide or poster and one line you repeat all season: “Green helps you learn, Amber helps you improve with proof, Red protects the assessment.” Then add a practical anchor: “If you can’t show your working and explain your choices, it’s not safe.”

Revision boundaries

Revision and independent study are usually Green-heavy, because the goal is learning rather than producing a graded artefact. The safest Green uses are those that keep the student thinking: generating retrieval questions from their own notes, creating spaced practice schedules, explaining misconceptions, or producing “worked examples” that the student then critiques.

A simple constraint that prevents revision drifting into answer-hunting is the “no answers” rule. Students can ask for hints, steps, or questions, but not final responses they intend to memorise. For example, a student revising science can paste their own summary and ask, “Ask me ten exam-style questions on this, then wait for my answers before giving feedback.” A literature student can ask, “Give me three alternative interpretations of this quotation and one counter-argument for each.” This keeps the cognitive load where it belongs: on the learner.

If you support younger pupils or mixed-ability groups, you may also want a “read-aloud and simplify” Green allowance, particularly for EAL learners and students with additional needs, provided the content comes from class materials. For a primary-specific take, this KS2 revision boundary toolkit may help you adapt the language.

Homework boundaries

Homework and classwork sit in Amber more often, because the output may be marked, used for feedback, or feed into predicted grades. The boundary here is the difference between help and outsourcing. Help supports the student to produce their own work. Outsourcing replaces the student’s thinking with the tool’s thinking.

A workable Amber rule is: AI may support planning, checking, and improving clarity, but it may not generate the core content. In practice, that means a student can use AI to create a plan from their own bullet points, to suggest ways to strengthen an argument they have already written, or to check for missing steps in a maths solution they attempted. It does not mean “write my response in the style of…” or “solve this, then I’ll copy it”.

In class, you can make this tangible by requiring a “process footprint”: one screenshot of the prompt, one screenshot of the feedback, and a short note on what the student changed and why. This is quick, and it shifts the norm from hiding AI use to documenting it responsibly.

Coursework boundaries

Coursework and non-exam assessment (NEA) are where schools most need clarity, because the line between legitimate support and disallowed assistance can be thin, and the consequences can be serious. Treat most coursework as Amber by default, with explicit Red zones set by your subject requirements and task instructions.

In Amber coursework, the expectation is evidence. Students should keep version history (for example, document history or dated drafts), notes, sources, and a record of any AI interactions. Disclosure should be normalised: “If you used AI for feedback or language support, say so.” The goal is not to punish honesty; it is to protect students from later challenges about authenticity.

A practical evidence expectation is “three artefacts”: an early outline, a mid-draft with teacher feedback, and the final submission, plus a brief reflection on what changed. If a student claims AI only helped with structure, their drafts should show that development. If the final piece appears suddenly, with no trail, it becomes a safeguarding and integrity concern, not a technical one.

Controlled assessment boundaries

Controlled assessment and practicals are usually Red during the controlled window: no AI tools, no prompts, no paraphrasers, no “quick checks”. If the assessment is supervised, keep the environment simple and defensible. That includes clarifying what counts as AI: predictive writing features, grammar rewriters, and “smart” note apps can all introduce assistance.

There may be pre-approved Green or Amber preparation before the controlled period begins. For example, students might use AI at home to generate practice questions, to rehearse definitions, or to receive feedback on practice attempts that are not submitted. The important move is to name the boundary in time: “AI is allowed for practice up to this date; from this point, all preparation is from your own notes and teacher materials only.”

Exam boundaries

Exams are Red. On exam day, the simplest message is the safest: no AI tools, no AI-enabled devices, no prompts, no pre-written AI-generated “memory aids”, and no copying from AI outputs. Students also need to understand “AI residue” risks: if they have used AI to draft notes, flashcards, or model answers, they may accidentally reproduce phrases and structures they did not create. That can look like unusual style shifts or suspiciously generic responses, even when no device is present.

A helpful preventative approach is to make revision outputs “student-voiced”. If a tool generates flashcards, students must rewrite them in their own words and add a personal example from class. If a tool produces model paragraphs, students must annotate what each sentence is doing and then draft a new paragraph without looking.

For a structured revision routine that reinforces this, the 28-day exam sprint offers a practical sequence you can adapt.

Ready-to-use scripts

Teacher language works best when it is calm, repeated, and not accusatory. Try: “In this class, revision is Green: AI can quiz you and explain, but it can’t write answers for you.” For homework: “Homework is Amber: AI can help you plan and check, but the thinking and wording must be yours. Keep your prompt and a note of what you changed.” For coursework: “Coursework is Amber with evidence: you must keep drafts and be able to explain your choices. If AI helped, you disclose it.” For controlled assessment and exams: “This is Red: no AI tools, no rewriters, no smart assistance. If you’re unsure, ask before you do it.”

Student self-check language helps them pause before they cross a line. Teach a short script: “Did I create the ideas and structure, or did the tool? Can I explain every paragraph without looking? Do I have a draft trail? Would I feel confident showing my prompts to my teacher?” If any answer is no, they should step back and redo the work in their own voice.

A parent/carer message should be simple and non-technical: “We are using an AI traffic-light system during exam season. AI is encouraged for revision and practice, limited for homework and coursework with clear rules, and not allowed in controlled assessments or exams. Please support your child by focusing on learning routines rather than ‘getting answers’. If you are unsure what is allowed, ask the school before using any tool.”


Integrity checks without detection

When you can’t rely on AI detection, you rely on process and professional routines. Start with checkpoints. Break longer tasks into small, dated submissions: a plan, a paragraph, a data table, a bibliography, a reflection. This makes sudden “perfect” work harder to produce without a trail, and it gives you more moments to coach.

Viva-style questions are a powerful, low-tech check. After a submission, ask two minutes of targeted questions: “Why did you choose this example?” “What does this key term mean in your own words?” “Talk me through how you got from this data to that conclusion.” A student who authored the work can usually answer, even if they are nervous. A student who outsourced it often cannot explain the decisions.

Process evidence can be normalised rather than treated as suspicion. Ask students to attach a short “making-of” note: what they changed after feedback, what sources they used, and what they found difficult. Occasional spot audits reinforce the norm: you randomly select a few students each week to show drafts, notes, and (if relevant) AI interaction logs. Keep it routine and fair, not punitive.

Implementation

Implementation lives or dies on staff alignment. Agree the traffic-light meanings as a department first, then translate them into one page that all teachers use. If your subjects differ, keep the colours consistent and vary only the examples. Students can cope with different tasks; they struggle with different language.

Brief students early and repeat it often. A two-minute reminder at the start of a lesson is more effective than a one-off assembly. Put the colours on the task sheet: “Homework (Amber): allowed uses… prohibited uses… evidence required…” This reduces last-minute panic and keeps expectations visible.

Finally, create a simple escalation pathway. If a teacher has concerns, the first step should be a learning-focused conversation and a short viva, not an immediate accusation. If concerns remain, the next step is a documented review of process evidence against the task rules, involving a designated lead. Students should know this pathway in advance; it reassures those who are honest and deters those who are tempted.

May your exam season be calmer, clearer, and fairer for everyone.

The Automated Education Team
