ECT/NQT AI First-Term Operating Manual

Safe defaults, clear red lines, lighter workload


What this guide is

Your first term is not the time to “innovate” with AI. It’s the time to build dependable routines that keep pupils safe and keep you sane. This is a small, repeatable set of defaults you can run each week: quick planning support, sensible scaffolds, feedback preparation, and low-drama admin. Think of AI as a junior assistant who drafts and organises, while you decide what is true, appropriate, and worth teaching.

It is not a licence to outsource professional judgement, nor a shortcut around school policy. If you want a broader view of where AI fits across subjects, you can dip into AI across the curriculum planning moves later. For now, we’re building a safe baseline.

AI as assistant

A helpful mental model is “draft, then verify”. AI can produce plausible text quickly, but it cannot see your pupils, your classroom context, or your school’s expectations. In practice, you’ll use it to generate options, tighten wording, and anticipate misconceptions. You will still choose the learning focus, the examples, and the checks for understanding. You will also be the person who notices when something feels “off”, whether that’s a biased example, an unsafe suggestion, or a mismatch with your curriculum intent.

The goal is sustainability. If AI creates extra checking, extra formatting, or extra meetings, it is not saving you time. The routines below are designed to keep the “verification tax” small and predictable.

Your non-negotiables

Before you write a single prompt, lock in four non-negotiables: pupil data, safeguarding, copyright, and policy alignment. These are not optional, even when you’re tired.

Pupil data means you do not paste identifying information into public tools: names, photos, addresses, unique behaviour details, medical information, SEND documentation, or anything that could identify a child indirectly. If your school provides an approved platform with a data agreement, follow your local guidance, but still minimise what you share. Use placeholders like “Pupil A” and describe needs in general terms.

Safeguarding means you do not use AI to handle disclosures, to “diagnose” wellbeing, or to generate advice that replaces your school procedures. AI can help you draft a neutral note for your own records or prepare questions for a pastoral check-in, but it cannot be the decision-maker. If you’re exploring this area, AI for student wellbeing conversations is a useful companion—read it with your DSL’s expectations in mind.

Copyright means you avoid asking AI to reproduce copyrighted materials, and you are cautious about copying in textbook pages, exam papers, or subscription resources. When in doubt, summarise in your own words and create original examples.

Policy alignment means you check your school’s AI guidance early, and you keep evidence of how you used AI for professional tasks. If policy is unclear, ask your mentor for a “minimum safe” interpretation and document it.

The ECT red lines

There are a few "don't do it" moves that catch ECTs/NQTs out because they sound efficient.

Don’t paste an education plan, behaviour log, or safeguarding concern into a chatbot and ask, “What should I do?” That crosses both data and professional boundaries. Don’t ask AI to write reports with named pupils or specific incidents. Don’t use AI to generate grades, predicted marks, or “target levels” from a paragraph of work. Don’t run a pupil’s writing through AI and then present the output as their attainment, even as a “rough idea”. Don’t set homework that invites AI completion unless you have designed it for process evidence, in-class verification, or a clear integrity boundary.

Also avoid the “mystery tool” trap: a colleague recommends a shiny app, you upload a class list, and only later realise you have no idea where the data went. If you need a quick way to compare tools, use a structured approach like the one in AI assistant showdown: teacher triage and stick to approved options.

Five default micro-routines

The safest way to use AI in your first term is to keep routines small, boring, and repeatable. Each of these should take about ten minutes, including your checks.

First, a “lesson spine” routine: you provide the learning intention and prior knowledge, AI drafts a sequence, and you adjust it to match your class. Second, a “scaffold sweep” routine: you ask for three scaffold options and choose one that supports independence. Third, a “misconception radar” routine: you ask for likely misconceptions and plan quick checks. Fourth, a “feedback prep” routine: you build a short comment bank and a reteach plan, then moderate against your department expectations. Fifth, a “comms draft” routine: you draft short, neutral messages with minimal data, then edit for tone and policy.

You’ll notice what’s missing: no routine asks AI to make high-stakes decisions. That is deliberate.

Lesson planning

AI is most useful when you give it curriculum intent and constraints, not when you ask it to “plan a lesson”. Start with what pupils must know or be able to do, plus the common stumbling blocks. Then ask for a usable sequence you can teach tomorrow.

A reliable prompt pattern is: context → outcome → constraints → checks. For example: “I’m teaching a 50-minute lesson on [topic]. Pupils already know [prior knowledge]. By the end, they should be able to [success criteria]. Include a short retrieval starter, explicit instruction, guided practice, independent practice, and an exit check. Keep language age-appropriate. Suggest two hinge questions and three misconceptions with fixes. Avoid any copyrighted texts.”

When you get the draft, do a fast quality check: does it match your curriculum order, your school routines, and your time? Are examples culturally respectful and accurate? Are the checks actually checking the intended learning? If you want more structured planning moves, borrow ideas from building AI workflows that stick and keep them simple in term one.

Differentiation and SEND

Differentiation with AI should look like scaffolds, not shortcuts. You are not trying to reduce challenge; you are trying to reduce unnecessary barriers. A practical approach is to ask for “three scaffold levels” for the same task: one that supports vocabulary, one that supports organisation, and one that supports working memory. Then you choose what fits your pupils and your SEND guidance.

Quality checking can be quick. Scan for lowered expectations disguised as “support” (for example, replacing analysis with copying). Check that sentence starters still require thinking. Check that vocabulary lists are accurate and relevant, not random synonyms. If the scaffold changes the task, it may be an alternative outcome and needs proper planning.

Even if you don't teach primary, you might find the framing in teacher-in-the-loop primary micro-routines helpful, because it keeps the adult firmly responsible for every decision.

Feedback and marking prep

AI can reduce marking load when you use it before you mark, not instead of marking. The key move is preparation: generate a short misconception list, a comment bank aligned to your success criteria, and a mini reteach plan. Then, when you look at books or scripts, you are matching real pupil work to a prepared set of responses, rather than writing everything from scratch.

Keep moderation first. If your department expects certain phrasing, codes, or feedback structures, align to that. If you’re tempted to paste pupil work into an AI tool, stop and check your data rules. Often you can get what you need by describing the task and the typical errors, rather than sharing the work itself. For a deeper look at safe efficiency, tackling the marking mountain with AI offers practical patterns you can adapt.

Assessment integrity

Assessment is where ECTs/NQTs can accidentally create big problems with small decisions. Use traffic-light rules: green for low-stakes practice where AI support is explicitly allowed and taught; amber for tasks where AI may be used for planning but not final wording, with process evidence required; red for any assessed work where AI use would compromise validity.

Build simple classroom scripts so expectations are unambiguous. For green tasks: “You may use AI to generate practice questions, but you must correct them and show your corrections.” For amber tasks: “You may use AI to plan ideas, but your final response must be written in your own words, and you’ll do a short in-class check.” For red tasks: “No AI tools. This is a check of your independent performance.”

If you want ready-to-use boundaries and scripts, exam season AI traffic-light rules is a strong reference point, even outside exam periods.

Communication and admin

AI is excellent for drafting neutral, professional messages—if you keep data to a minimum. Use it to tighten tone, remove ambiguity, and keep messages concise. For example, you can ask: “Rewrite this parent message to be calm, factual, and solution-focused. Remove any judgemental language. Keep it under 120 words.” Then you paste your own draft with no pupil names, or you use placeholders and fill names in later within your school system.

The same approach works for tutor time notes, meeting agendas, and handover summaries: draft structure first, then add specifics in the right place, using approved systems. If you are preparing for a tricky meeting, AI can help you rehearse phrasing and anticipate questions, but you should still follow your school’s agreed communication channels.

Wellbeing and workload

AI saves time when it reduces blank-page stress and standardises repetitive writing. It creates hidden work when you ask it for too much, too often, and then spend ages checking, reformatting, and second-guessing. In your first term, set a “verification budget”: if a task needs more than a quick scan and minor edits, it is not a good AI task yet.

A simple rule is to use AI for first drafts and option generation, not for final decisions. Another is to stop if you notice you are prompting repeatedly to get something “perfect”. That is usually a sign you need a template, not another prompt. If workload is already biting, the teacher workload crisis task map can help you decide what to automate safely and what to leave alone.


Your first-term checklist

In week 1, get clarity: read the school AI policy, ask your mentor what “approved tools” means in practice, and write your own one-paragraph safe-use statement. In week 2, set up your micro-routines: one planning prompt, one scaffold prompt, one feedback prompt, and one comms prompt. Keep them saved, and do not expand the list yet.

In weeks 3–4, run the routines consistently and track time saved. If a routine costs more than it saves, pause it and simplify. In weeks 5–6, add assessment boundaries: agree traffic-light expectations with your team and practise the classroom scripts so pupils are not guessing. In weeks 7–8, do a mid-term review with your mentor: what is working, what is risky, and what needs policy clarification.

Your “stop if…” triggers should be clear. Stop if you feel pressured to share pupil data to make the tool work. Stop if AI use is creeping into high-stakes assessment decisions. Stop if you are using AI to manage safeguarding issues rather than following procedure. Stop if you are spending longer checking AI output than doing the task yourself.

Appendix

Copy-and-adapt prompts work best when they are short and constrained. Keep a small bank: one lesson sequence prompt, one misconception prompt, one scaffold prompt, one comment bank prompt, and one comms tone prompt. Add a one-page “safe use” reminder to your planner: no pupil identifiers, safeguarding stays human-led, follow policy, and verify everything.

Finally, use a simple mentor check-in agenda once a fortnight: what AI you used, what data you did not share, what you verified, what you stopped doing, and what you need clarified. That turns AI from a private experiment into a professional, accountable routine—exactly what you need in your first term.

May your first term stay calm, compliant, and genuinely manageable.
The Automated Education Team
