October half-term AI CPD in a box

Two routes, one evidence pack, privacy by default

A teacher planning half-term CPD with an AI safety checklist and a simple evidence portfolio

October half-term is one of the few windows where you can think clearly without the daily churn. The risk is that “doing CPD” turns into reading threads, watching webinars, and saving prompts you never use. This guide is a practical, low-friction alternative: a short sprint that ends with a micro-credential evidence pack you can show your line manager, keep for appraisal, or use to support a departmental approach. If you want a wider, term-length arc after half-term, you can pair this with our summer-to-September impact roadmap, but this article stands alone.

Who it’s for

This plan is for three common roles, with the same end point. Classroom teachers will leave with two ready-to-run classroom routines and a safe workflow for planning and feedback. Middle leaders will leave with a small pilot they can scale, plus a lightweight way to check quality and consistency. Senior leaders will leave with a governance-friendly evidence trail: what was tried, what data was used (ideally none), what changed, and what needs policy support.

By the end of either route, you will have a tidy portfolio containing two to four artefacts (real documents you can reuse), two short reflections, and two impact notes based on something you actually did. The goal is implementation without extra workload: you are producing things you would have needed anyway, just more deliberately and more safely.

The sprint rules

The sprint works because it has three rules: minimum-data, human sign-off, and one tool stack.

Minimum-data means you assume you should not paste personal data, pupil work, assessment marks, safeguarding details, or anything identifiable into an AI tool. You design workflows that function with anonymised, synthetic, or teacher-written inputs. If you want a deeper run-through of privacy-default thinking, our minimum viable back-to-school AI toolkit is a useful companion.

Human sign-off means you decide, in advance, where professional judgement must sit. For example: AI can draft a success-criteria checklist, but you approve it. AI can generate question variants, but you check for misconceptions, bias, and accessibility. AI can suggest feedback stems, but you decide what is appropriate for your pupils and context. This keeps you in control and makes the work auditable.

One-tool-stack discipline means you pick one primary AI tool (plus your usual document tools) and stick with it for the sprint. Tool sprawl is the fastest way to waste half-term. If you are evaluating a new model release, treat it as a controlled swap rather than an additional platform; our rapid evaluation protocol shows how to do that without losing the week.

Your evidence pack

Think of the micro-credential as a small folder with a table of contents. It should be easy for someone else to audit in ten minutes: what you did, what you produced, what you learned, and what changed.

Your required artefacts can be modest, but they must be real. A strong set includes a “minimum-data prompt sheet” you can reuse; one redesigned lesson resource (for example, a model answer plus a misconception check); one assessment-support resource (for example, feedback stems or a rubric clarification); and one communication artefact (for example, a parent-facing explanation of how AI is used safely, or a staff briefing slide).

Your reflection prompts should be short and specific, not diary-style. Write 150–250 words each on: what you asked the tool to do and why; what you changed after reviewing the output; what risks you noticed (privacy, bias, over-reliance); and what you will do differently next time. If you are supporting early-career colleagues, you might also point them to the ECT/NQT AI operating manual for extra scaffolding.

Your impact notes are not “AI saved me three hours” (tempting, but hard to evidence). Instead, capture something you can reasonably observe: fewer misconceptions in exit tickets, improved clarity in success criteria, reduced duplication in planning, or more consistent feedback language across a team. Keep it honest: impact can be “no measurable change yet, but the routine is now stable”.


Route A: 5-day plan

Route A is a focused week: 60–90 minutes per day, each day ending with a tangible output. The rhythm is learn, build, test, record.

Day 1 is your baseline and boundaries. You choose your one tool stack, write your minimum-data rule at the top of a document, and create a two-paragraph “AI use statement” for yourself: what you will use AI for, and what you will not. Output: a one-page safe workflow note plus a folder structure for your evidence pack.

Day 2 is lesson design with constraints. Take a lesson you already teach next half-term and ask the tool to propose three alternative explanations, then three hinge questions, using only generic context. You then select, edit, and add your own checks for common misconceptions. Output: a revised lesson segment (explanation + questions) with a short note explaining what you changed and why.

Day 3 is feedback and assessment support. Create a bank of feedback stems aligned to your success criteria, then rewrite them for clarity and tone. If you teach writing-heavy subjects, you can anchor this in an evidence-first approach to drafting and redrafting, as explored in autocomplete to co-authoring. Output: a feedback stem bank plus a “teacher checks” box (accuracy, tone, accessibility).

Day 4 is a mini-pilot. Use one element from Day 2 or 3 with a class (or, if you cannot access pupils, run a tabletop simulation with last year’s anonymised misconceptions list). Capture what happened in three bullets: what worked, what didn’t, and what you adjusted. Output: one impact note and a revised version of the resource.

Day 5 is packaging and appraisal-readiness. You compile your artefacts, write two short reflections, and create a one-page summary that a line manager can read quickly. Output: a complete micro-credential folder, ready to share.

Route B: 10-day plan

Route B spreads the load: 30–45 minutes per day, with spaced practice and one small pilot. It suits teachers who need shorter sessions, or leaders who want time to consult colleagues.

On Days 1–2 you set boundaries, pick your tool stack, and draft your safe workflow note. On Days 3–4 you create and refine one lesson artefact, deliberately revisiting it on a different day to catch errors and overconfident phrasing. On Days 5–6 you build an assessment-support artefact and run a quick bias and accessibility review (for example, checking reading load, idioms, cultural assumptions, and whether examples stereotype).

Days 7–8 are your pilot window: you try one small routine in a live lesson or departmental planning meeting. Keep the pilot narrow: one class, one topic, one resource. Days 9–10 are for consolidation: you write reflections, finalise your impact notes, and produce a one-page “keep/stop/change” summary to guide next steps. If you want a structured way to turn that into a term plan, the after-action review framework is designed for exactly this moment.

Safe practice checklist

Print this section and keep it next to your laptop. It is intentionally blunt.

Privacy: do not input personal data, pupil work, safeguarding information, or unique identifiers. Use synthetic examples or anonymised templates.

Safeguarding: never use AI to make decisions about risk; keep it for drafting resources and professional thinking, with human judgement.

Integrity: be explicit with pupils when AI has supported a resource; ensure assessments follow your school’s rules and are fair.

Copyright: treat AI output as potentially derivative; avoid copying protected texts, and cite your sources for factual content.

Accessibility: check reading age, layout, and clarity; ensure alternatives for SEND and EAL learners are not tokenistic.

Bias checks: scan for stereotypes, deficit language, and narrow cultural references; adjust examples to be inclusive.

If you are refreshing policy alongside practice, our acceptable use policy refresh checklist can help align classroom routines with whole-school expectations.

Pick-your-strand reading

To stop reading becoming procrastination, pick one strand only: planning, feedback, or governance. Map it to your days and keep it tight. For example, if you are leading staff CPD, borrow the structure of three micro-routines and a safety protocol and turn your half-term outputs into a short INSET segment. If your focus is assessment integrity, you may also want to align boundaries with exam-season traffic light rules, even if you are not in an exam window right now.

Submitting your micro-credential

Keep submission simple: one folder, consistent names, and a summary page. Use a naming convention such as YYYY-MM_half-term_AI-CPD_[YourName] with subfolders for Artefacts, Reflections, and Impact. Title each artefact with a date and purpose, for example 2025-10-20_HingeQuestions_Y8Fractions.docx.
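If you would rather not build the folder skeleton by hand, the convention above can be scripted in a few lines. This is a minimal sketch in Python, assuming you are happy running a small script locally; the function name `create_evidence_pack` and the `Summary.md` placeholder are illustrative choices, not part of the convention itself.

```python
from datetime import date
from pathlib import Path


def create_evidence_pack(teacher_name: str, base_dir: str = ".") -> Path:
    """Create the evidence-pack skeleton using the
    YYYY-MM_half-term_AI-CPD_[YourName] naming convention."""
    stamp = date.today().strftime("%Y-%m")
    root = Path(base_dir) / f"{stamp}_half-term_AI-CPD_{teacher_name}"
    # The three subfolders named in the submission guidance
    for sub in ("Artefacts", "Reflections", "Impact"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    # An empty placeholder for the one-page appraisal summary
    (root / "Summary.md").touch()
    return root


pack = create_evidence_pack("JSmith")
print(pack)
```

Running it once at the start of the sprint means every artefact has an obvious home from day one.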

Your appraisal-ready summary should be one page with four headings in prose: what you built, how you kept it safe (minimum-data and checks), what you piloted, and what you will do next. If you need to evidence this at scale later in the year, you can extend the same structure into an end-of-year AI audit evidence pack.

Troubleshooting

Tool sprawl usually starts as curiosity and ends as confusion. Fix it by writing down your one tool stack and parking everything else until the sprint ends. Over-sharing data happens when you are tired and rushing; fix it with a “copy-paste pause” rule and the printable checklist above. “Polish over learning” shows up when you spend 40 minutes making a worksheet look perfect, but never test it. Fix it by piloting earlier with something intentionally small, then iterating based on what pupils actually do.

Half-term to Monday

The first week back should be calm and repeatable. Choose two micro-routines: one planning routine (for example, generating three explanations and then selecting the best) and one feedback routine (for example, using your stem bank to speed up consistency). Add one check-in with a colleague or line manager: ten minutes to look at your evidence pack and agree one next step. Finally, choose one measure you can maintain for a fortnight, such as a misconception tally from exit tickets or a quick pupil confidence rating. The point is not perfection; it is stability.

Towards smoother, safer AI routines that actually stick, The Automated Education Team
