
Summer CPD often fails for one predictable reason: it stays as ‘learning about’ rather than becoming ‘doing differently’. AI makes that gap even wider because tools change quickly and advice can feel contradictory. This roadmap is designed to help you finish summer with a calm, realistic plan for September: one clear focus, two small pilots, and evidence you can discuss confidently with line managers and senior leaders. If you want a practical structure for turning ideas into routines, you may also find it useful to pair this with Building AI workflows that stick, which focuses on making small changes durable.
Who this is for
This is for teachers and middle leaders who want to improve practice without sacrificing rest. It works particularly well if you have felt overwhelmed by AI ‘must-dos’, or if you’ve tried a tool once, found it fiddly, and quietly abandoned it. It is also for anyone who anticipates September conversations about impact: appraisal targets, faculty priorities, or a departmental push to be more consistent.
What to stop doing this summer is just as important. Stop collecting dozens of tabs ‘to come back to’. Stop signing up for every free webinar. Stop trying to build a whole-school AI policy on your own. Instead, aim for a narrow, defensible implementation plan: small enough to start in Week 1, clear enough to explain in two minutes, and safe enough to run without drama.
Pick your one focus
Choose one September focus and let it govern everything else you select. If you try to optimise workload, learning quality, inclusion, and assessment integrity all at once, you will end up with a plan that is too vague to act on.
Workload is a strong focus if your pinch-point is planning, admin, or repetitive communications. Learning quality is best if you're targeting better explanations, better modelling, or more responsive teaching. Inclusion is the right focus if you want more accessible materials and clearer scaffolds for a wider range of learners; the ideas in Minimum viable inclusion stack can help you define what 'better' looks like without creating extra marking. Assessment integrity is the best focus if your school is anxious about AI misuse, or if you need clearer boundaries and scripts for students; Exam season AI traffic-light boundaries offers a sensible way to communicate expectations.
Write your focus as a single sentence beginning with ‘By the end of Week 3 in September, I will…’. Keep it observable. For example: ‘By the end of Week 3, I will reduce planning time for one unit by 25 minutes per lesson while keeping task quality consistent.’
Selecting AI courses
Pick one course only. Your aim is not to become an AI expert; it is to gain enough understanding to run a small pilot safely and explain your decisions.
A quick quality filter helps. First, credential value: will the provider’s certificate or badge be recognised in your setting, and does it clearly state learning outcomes? Second, transfer to classroom: does the course include classroom-ready routines, prompts, and examples, or is it mostly conceptual? Third, safety fit: does it cover privacy, bias, and appropriate use with pupils, and does it help you write boundaries you can actually follow on a busy day?
If you want a structured way to compare options, start with February half-term AI CPD courses and credentials and apply the same thinking to summer choices. When you enrol, decide in advance what ‘done’ looks like: for instance, ‘complete Modules 1–3 and produce a one-page classroom pilot plan’.
Conferences and webinars
Choose one event: a conference day, a virtual summit, or even a single high-quality webinar. The trick is to attend for implementation, not inspiration. Before you go, write three questions linked to your one focus. If your focus is workload, ask, ‘What exactly did you stop doing?’ If your focus is learning quality, ask, ‘What changed in pupil work?’ If your focus is integrity, ask, ‘What scripts and boundaries reduced ambiguity?’
During the event, capture notes in two columns: ‘ideas’ and ‘actions for September’. Force yourself to write at least three actions, each phrased as a classroom move. After the event, do one follow-up within 48 hours: email a speaker for a slide, join the shared resources folder, or message a colleague with your planned pilot. This small follow-up is often what turns an event into real change.
If you need to sense-check a tool you hear about, a quick comparison read like AI assistant showdown 2025 can stop you wasting time on something that looks impressive but doesn’t fit your constraints.
Reading strand
Build a reading strand that is deliberately small: three books or ten articles. The point is not to ‘cover the field’, but to build a pathway from principles to practice. Choose one strand that supports your focus, then add a simple routine: after each chapter or article, write one ‘try this in September’ action.
For example, if your focus is learning quality, you might read about modelling and co-authoring, then plan one lesson where pupils use AI to critique a model answer against a rubric. If your focus is inclusion, you might read about accessibility and language simplification, then plan one adapted resource set for a specific class. For curated suggestions, Holiday reading: best AI in education books can help you choose texts that won’t feel like a slog in August.
Two small practice projects
Pick two pilots only, and make them small enough to run with one class each. Think ‘two weeks, two lessons minimum, one simple measure’. Choose from planning, feedback, differentiation, retrieval, parent comms, or admin.
A planning pilot could be: use AI to generate three alternative explanations and one hinge question for an upcoming lesson, then select and adapt the best. Your evidence might be the original prompt, your edited plan, and a short reflection on what improved. A feedback pilot could be: use AI to draft success-criteria-aligned comments for one assignment, then edit for accuracy and tone before giving them to pupils. Capture time spent and a sample of before-and-after comments. If differentiation is your focus, trial generating two scaffolded versions of the same task and compare pupil completion and quality.
For retrieval, you might generate a five-question low-stakes quiz aligned to last week’s learning, then check it for misconceptions and run it twice. For parent comms, you might draft a clearer message about a homework routine, then track whether fewer follow-up questions arrive. For admin, you might create a standard template for trip letters or resource requests and track time saved.
Personal Learning Plan template
Your PLP can fit on one page. Start with your goal sentence and add time boxes: one hour a week maximum, plus one longer session if you choose. List the tools you will use, but include safeguards: what data you will not enter, what you will anonymise, and how you will store prompts and outputs. Add success criteria that match your focus, such as ‘reduce planning time’, ‘increase completion’, or ‘improve clarity of pupil responses’.
Finally, specify evidence you will collect. Keep it lightweight: one before-and-after artefact per pilot, one sample of pupil work, one time-on-task estimate, and a short reflection log after each attempt. If you want to make the ‘keep/kill/scale’ decision explicit, End-of-year AI audit evidence pack provides a helpful framing you can reuse in September.
Evidence without heavy data
In the first three weeks back, you do not need a full research project. You need credible, explainable evidence that links your action to a plausible benefit.
Before-and-after artefacts are the simplest. Save the original worksheet and the revised one, or the first draft of feedback and the edited final version. Time saved is also legitimate evidence if you measure it honestly: note start and finish times for planning one lesson with and without AI support, then describe what you did differently. Pupil work samples matter most when they show quality shifts, such as more precise use of key vocabulary, clearer structure, or fewer incomplete responses. A short pupil survey can be one question on a slip of paper: ‘What helped you most today?’ Keep it anonymous and quick.
Reflection logs should be brief and consistent. Three prompts are enough: ‘What did I try?’, ‘What changed?’, and ‘What will I do next time?’ This is the kind of professional evidence that travels well into appraisal conversations.
Week-by-week summer schedule
A rest-friendly schedule is one you can keep even if family life, travel, or pure tiredness takes over. In Week 1, decide your one focus and draft your PLP. In Week 2, enrol on your single course and complete the first module only. In Week 3, select your event and write your three implementation questions. In Week 4, start your reading strand and capture two ‘try this’ actions. In Week 5, outline your two pilots and prepare any templates you’ll need, such as prompt banks or feedback frames. In Week 6, do a final safety check, organise your evidence folder, and then stop. Protect at least the last week before term starts for genuine rest and practical life admin.
September launch plan
In Week 1 back, run Pilot A with one class and collect your first artefacts. Keep everything small and reversible. In Week 2, run Pilot B, again with one class, and collect the same evidence types so comparisons are easy. In Week 3, review both pilots using a simple decision: keep, kill, or scale. ‘Keep’ means it saved time or improved learning with manageable risk. ‘Kill’ means it added friction or created confusion. ‘Scale’ means you can repeat it with another class or share it with a colleague, with your safeguards intact.
If you want a structured way to run that review conversation, Term 2 AI after-action review framework offers prompts you can adapt for early September, even if you’re doing this solo.
May your September start with calm pilots, clear evidence, and no unnecessary reinvention.
The Automated Education Team