
Three years on
Three years into LGR22, day-to-day practice in many schools has become more explicit. Teachers talk more readily about progression, about what “quality” looks like in pupil work, and about aligning tasks to long-term goals. In classrooms, this has often meant tighter success criteria, more deliberate modelling, and clearer routines for checking understanding. Yet the lived experience is that LGR22 did not magically reduce workload. If anything, the push for clarity and documentation has made the “invisible” work more visible—and therefore heavier.
What has not changed is the basic constraint: teachers still have the same number of hours, the same classroom complexity, and an even wider range of learner needs. The result is predictable. When planning time is squeezed, differentiation becomes blunt. When literacy demands rise across subjects, access gaps widen. When communication expectations increase, documentation expands to fill whatever time is left. This is where AI can help, but only if it is used as a set of bounded workflows rather than a general-purpose shortcut. If you are building a stable approach, the framing in September 2025 stability thinking is a useful lens: start small, stabilise routines, then scale.
The friction map
Across LGR22 implementations, the same pain points recur. They show up in staff feedback, planning meetings, and late-night admin. In practice, most schools report some combination of the following:

- lesson and unit planning that takes too long;
- differentiation that is hard to sustain across a whole class;
- literacy access issues in non-language subjects;
- producing high-quality questions and answer keys;
- adapting materials for SEND without lowering cognitive demand;
- supporting multilingual learners while keeping the academic bar high;
- parent and student communication that is consistent and timely;
- documentation for development talks and reporting;
- the "paper trail" problem: finding evidence quickly when you need it.
The important move is to treat these as system gaps rather than individual failings. If a teacher is rewriting the same instructions three times for different reading levels, that is not a motivation problem. It is a tooling and workflow problem. If a team cannot maintain cross-curricular coherence because unit plans live in scattered documents, that is not a professionalism problem. It is an information architecture problem.
A useful way to introduce AI without chaos is to map each friction point to one defined workflow, with inputs, outputs, and a clear “human check” step. This keeps expectations realistic and makes evaluation easier.
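To make "one defined workflow" concrete, here is a minimal sketch in Python of what such a mapping might look like. The field names and the example details are illustrative, not a fixed schema.

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    """One bounded AI workflow: defined inputs, defined outputs, a mandatory human check."""
    name: str
    friction_point: str   # the system gap this workflow addresses
    inputs: list[str]     # what the teacher supplies
    outputs: list[str]    # what the AI drafts
    human_check: str      # the review step that must happen before anything is used

# Illustrative example: mapping the planning friction point to one workflow
unit_planner = Workflow(
    name="Unit Planner",
    friction_point="lesson and unit planning that takes too long",
    inputs=["topic", "year group", "prior learning"],
    outputs=["key concepts", "formative checks", "vocabulary", "likely misconceptions"],
    human_check="Teacher corrects, contextualises, and simplifies before use",
)
```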
For planning, a Unit Planner workflow can generate a first-pass sequence: key concepts, formative checks, vocabulary, and likely misconceptions. A Lesson Planner workflow can then turn one lesson into a structured plan with modelling, guided practice, and exit tickets. The teacher’s job is to correct, contextualise, and simplify—because AI tends to overproduce. For differentiation, a Difficulty Adjuster workflow can produce parallel tasks that preserve the same concept and success criteria but vary scaffolding, language load, or step size. For literacy access, Reading Comprehension and Lesson Accessibility routines can rewrite instructions, pre-teach vocabulary, add glossaries, and generate “read-aloud-friendly” versions while keeping the thinking demand intact.
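As a sketch of what a constrained Lesson Planner request might look like, the hypothetical prompt builder below bounds the output before the model ever runs; the structure and wording are assumptions, not a prescribed format.

```python
def lesson_planner_prompt(topic: str, year_group: str, duration_min: int, prior_learning: str) -> str:
    """Build a first-draft lesson plan prompt with explicit constraints; the teacher still edits."""
    return (
        f"Draft a {duration_min}-minute lesson plan on '{topic}' for {year_group}.\n"
        f"Prior learning: {prior_learning}.\n"
        "Structure: modelling, guided practice, independent practice, exit ticket.\n"
        "Keep it to one page and list two likely misconceptions.\n"
        "Do not invent pupil data or assessment results."
    )

print(lesson_planner_prompt("particle theory", "Year 7", 50, "states of matter"))
```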
For assessment quality, an Answer Key workflow can create mark schemes, worked solutions, and common error notes. For communication, Development Talk (Student), Student Communication, and Parent Communication workflows can draft clear, respectful messages that match school tone and reduce the “blank page” burden. For documentation and audit trails, the key is to standardise templates and store prompts and outputs in a consistent place, aligning with the kind of evidence pipeline discussed in report-writing and audit trails.
Worked examples
In a Year 7 science class, a teacher sets a short quiz on particle theory. The Answer Key workflow takes the quiz questions and produces a mark scheme with one mark per key idea, plus “acceptable alternatives”. It also adds a short list of misconceptions (for example, “particles expand” rather than “spaces between particles increase”). The teacher reviews for accuracy and adjusts language to match the class. Used well, this does not replace assessment literacy; it standardises it and saves the teacher from reinventing the same structures each time.
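A minimal sketch of such a routine, assuming a generic chat-style model and a hypothetical helper function; the fixed structure is the point, because it makes mark schemes comparable across quizzes.

```python
def answer_key_prompt(questions: list[str]) -> str:
    """Request a mark scheme in a fixed structure so outputs are comparable across quizzes."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Create a mark scheme for the quiz below. For each question give: "
        "the key idea (1 mark), acceptable alternatives, and one common misconception.\n\n"
        + numbered
    )

print(answer_key_prompt(["What happens to the particles in a solid when it is heated?"]))
```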
In maths, the Difficulty Adjuster workflow can take a single problem type—say, solving two-step equations—and create three versions: one with worked examples and sentence stems, one standard, and one with a twist that checks transfer. The red line is important: you are not creating “easy work” and “hard work”. You are creating different routes to the same learning intention, then deciding who needs which route today.
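A hypothetical prompt for this routine might look like the sketch below. The three-route structure mirrors the paragraph above, and the final constraint is the red line made explicit.

```python
def difficulty_adjuster_prompt(task: str, learning_intention: str, criteria: str) -> str:
    """Three routes to the same goal: vary scaffolding, not the concept being assessed."""
    return (
        f"Task: {task}\n"
        f"Learning intention: {learning_intention}\n"
        f"Success criteria: {criteria}\n"
        "Produce three versions of this task:\n"
        "1. Scaffolded: a worked example plus sentence stems.\n"
        "2. Standard: the task as given.\n"
        "3. Transfer: one twist that checks the same concept in a new context.\n"
        "All three must target the same learning intention and success criteria."
    )
```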
For literacy access, a Reading Comprehension routine can take a short humanities text and produce: a vocabulary preview, three literal questions, three inferential questions, and one evaluative question. The Lesson Accessibility routine can then rewrite the task instructions in simpler syntax, add a glossary, and provide an alternative response format (for example, sentence starters or a structured table). If your school is trying to reduce tool sprawl, it helps to position this inside a minimum inclusion stack, as outlined in the accessibility consolidation guide.
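One way to keep these routines reliable is to check each output against the expected shape before it reaches the classroom. A minimal sketch, assuming the AI's response has already been parsed into a dictionary (the keys and counts are illustrative):

```python
# Expected shape of a comprehension pack, used to reject incomplete outputs
comprehension_pack = {
    "vocabulary_preview": [],     # subject terms with student-friendly definitions
    "literal_questions": [],      # 3 expected
    "inferential_questions": [],  # 3 expected
    "evaluative_questions": [],   # 1 expected
}

def pack_is_complete(pack: dict) -> bool:
    """Return True only if every required part of the pack is present."""
    return (
        len(pack["literal_questions"]) == 3
        and len(pack["inferential_questions"]) == 3
        and len(pack["evaluative_questions"]) == 1
        and len(pack["vocabulary_preview"]) > 0
    )
```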
Language and inclusion
Multilingual learners often get either too much simplification (which lowers the academic bar) or too little support (which blocks access). A strong AI workflow sits between those extremes. For example, in a geography lesson on urbanisation, you can generate a “second-language vocabulary pack” that includes subject-specific terms, student-friendly definitions, example sentences, and a quick retrieval quiz. Crucially, the core task stays the same: analysing causes and consequences using evidence. The support is in the language scaffolding, not in reducing the thinking.
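As a sketch, a single entry in such a pack might carry four fields. The structure below is one assumption about what "student-friendly but academically intact" looks like, not a fixed format.

```python
from dataclasses import dataclass

@dataclass
class VocabEntry:
    term: str              # subject-specific term, kept at full academic level
    definition: str        # student-friendly wording
    example_sentence: str  # the term used in context
    quiz_question: str     # quick retrieval check

urbanisation = VocabEntry(
    term="urbanisation",
    definition="the growth in the share of people living in towns and cities",
    example_sentence="Rapid urbanisation put pressure on housing in the city.",
    quiz_question="Does urbanisation measure total population, or where people live?",
)
```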
A practical classroom pattern is to keep the same success criteria on the board for everyone, then offer optional language supports: a glossary, sentence frames, and a model paragraph. AI can draft these quickly, but the teacher decides what is culturally and linguistically appropriate for the learners in front of them.
Communication and documentation
Communication becomes a workload trap when every message is bespoke. The goal is not to automate relationships; it is to reduce repeated drafting so teachers can spend time on the human part of the interaction. A Development Talk (Student) workflow can turn brief bullet notes into a structured summary: strengths, next steps, and a specific strategy the student can try. A Student Communication workflow can draft a short message that is firm but encouraging after a missed deadline. A Parent Communication workflow can produce a clear, non-jargon update that avoids blame and offers a practical next step.
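A minimal sketch of the Development Talk routine, with the no-invented-facts constraint written into the prompt itself (the wording is illustrative):

```python
def development_talk_prompt(bullet_notes: list[str]) -> str:
    """Turn brief teacher notes into a structured draft; facts come only from the notes."""
    notes = "\n".join(f"- {n}" for n in bullet_notes)
    return (
        "Using ONLY the notes below, draft a development talk summary with three sections: "
        "strengths, next steps, and one specific strategy the student can try.\n"
        "Do not add facts that are not in the notes.\n\n" + notes
    )
```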
If you already run parent meetings with a consistent structure, you can pair these routines with a one-page brief like the approach in the AI parent consultation workflow. The governance point is simple: never let AI invent facts. It can shape language; it cannot create the record.
Planning at scale
Planning “at scale” is where many LGR22 efforts wobble. Unit plans exist, but they drift. Teams start with shared intentions, then diverge as the term gets busy. AI can help by producing a consistent planning skeleton across subjects: concepts, vocabulary, progression, checks for understanding, and links to prior learning. The win is coherence, not perfection.
However, stability matters more than sophistication. If staff are still building confidence, use a small number of workflows repeatedly, and keep the same templates for prompts and outputs. The stability-first approach in September 2025 stability thinking applies here: fewer tools, clearer routines, better uptake.
Risk assessment
A minimum-data pattern should be your default. Use anonymised or synthetic examples when testing. Avoid entering identifiable pupil data unless you have an approved system, a clear purpose, and an audit trail. Store prompts and outputs where your school already stores planning and documentation, with version history enabled.
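As an illustration of the minimum-data pattern, the sketch below swaps real names for placeholders before any text leaves the school. Plain substitution like this is deliberately simple and will not catch every identifier, so it supplements human review; it never replaces it.

```python
# Illustrative only: simple placeholder substitution, not a complete redaction tool.
def anonymise(text: str, names: dict[str, str]) -> str:
    """Replace known real names with neutral placeholders before the text is shared."""
    for real, placeholder in names.items():
        text = text.replace(real, placeholder)
    return text

note = "Ava struggled with the reading task and asked to work with Omar."
safe = anonymise(note, {"Ava": "Pupil A", "Omar": "Pupil B"})
print(safe)  # "Pupil A struggled with the reading task and asked to work with Pupil B."
```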
Audit trails matter because LGR22 documentation is often reviewed later, under time pressure. You need to be able to show what was generated, what was edited, and what was ultimately used. For a practical policy rhythm, many schools benefit from an annual refresh such as the acceptable use policy checklist.
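A minimal sketch of one audit record, assuming outputs are saved as files alongside an append-only log; the file paths and field names are hypothetical.

```python
import json
from datetime import datetime, timezone

# One audit record per AI interaction: what was generated, what was edited, what was used.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "workflow": "Answer Key",
    "prompt": "Create a mark scheme for the quiz below...",
    "raw_output_file": "outputs/2025-09-quiz-particles-raw.md",
    "edited_output_file": "outputs/2025-09-quiz-particles-final.md",
    "reviewed_by": "class teacher",
    "used_in_class": True,
}
with open("ai_audit_log.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```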
“Do not use AI for this” red lines should be explicit. Do not use AI to make high-stakes judgements about pupils, to generate sensitive safeguarding content, to diagnose learning needs, or to create reports that have not been checked against real evidence. Do not paste confidential case notes into general tools. If you are working in an EU context, align practice with the expectations summarised in the EU AI Act explainer.
Workload maths
Time-savings claims are often inflated, so here is a realistic, conservative model. Assume a teacher uses AI to create first drafts, then spends time editing and contextualising. Assume adoption is partial, not universal.
| Workflow | Frequency | Minutes saved per use | Annual saving |
|---|---|---|---|
| Lesson planning first draft | 2 per week | 15 | 19.5 hours |
| Differentiated task variants | 1 per week | 10 | 6.5 hours |
| Answer keys/mark schemes | 1 per week | 8 | 5.2 hours |
| Literacy/accessibility adaptation | 1 per fortnight | 15 | 4.9 hours |
| Parent/student messages | 2 per week | 5 | 6.5 hours |
| Development talk notes | 3 per term | 30 | 4.5 hours |
| Total (illustrative) | | | 47.1 hours |
Assumptions: a 39-week, three-term school year; savings are net of checking and editing time; meetings and whole-school deadlines are unchanged. In practice, many teachers will “spend” some of this time on better resources rather than stopping work. That is still a win if it improves quality without increasing total hours.
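The arithmetic behind the table is easy to check; the sketch below reproduces it from the stated frequencies.

```python
# Reproduce the table's arithmetic: (uses per year) x (minutes saved per use) / 60
WEEKS = 39
workflows = {
    "Lesson planning first draft":       (2 * WEEKS, 15),
    "Differentiated task variants":      (1 * WEEKS, 10),
    "Answer keys/mark schemes":          (1 * WEEKS, 8),
    "Literacy/accessibility adaptation": (WEEKS / 2, 15),  # once per fortnight
    "Parent/student messages":           (2 * WEEKS, 5),
    "Development talk notes":            (3 * 3, 30),      # three per term, three terms
}
total = 0.0
for name, (uses, minutes) in workflows.items():
    hours = uses * minutes / 60
    total += hours
    print(f"{name}: {hours:.1f} h")
print(f"Total: {total:.1f} h")  # about 47 hours
```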
What has not worked
Some failure modes are now predictable. The first is overproduction: AI generates long plans that look impressive but are unusable at 8:30 on a Tuesday. The second is misalignment: activities that do not match your progression or assessment approach, because the prompt was vague. The third is equity drift: the “simplified” version becomes a different task, and expectations quietly drop. The fourth is tone mismatch in communication, where a drafted message sounds polished but not like you, which can damage trust. The fifth is data creep: staff paste more pupil information than intended because it feels convenient.
There are also places where AI can make things worse. If you use it to generate endless worksheets, you may increase marking. If you rely on it to decide interventions, you may undermine professional judgement. If you treat outputs as finished, errors will slip into resources and spread across teams. The simplest stop-doing list for next term is this: stop asking for “a full lesson plan” without constraints; stop generating differentiated tasks without shared success criteria; stop sending AI-written messages without reading them aloud first; and stop using any tool that cannot support your school’s privacy and audit needs.
90-day next steps
A small plan beats a big launch. In the first 30 days, choose three workflows only: one planning routine, one differentiation/accessibility routine, and one communication routine. Agree shared prompt templates, where outputs are stored, and what “good enough” looks like. Capture evidence lightly: a baseline time estimate, two examples of before-and-after resources, and a short staff reflection.
In days 31–60, scale within one year group or one subject team. Add one more workflow only if the first three are stable. Check for unintended effects: are tasks becoming narrower, are expectations dropping, are messages becoming too generic? In days 61–90, run a review checkpoint. Compare time spent, staff confidence, and pupil access indicators (for example, completion rates on reading-heavy tasks). Decide what to keep, what to rewrite, and what to stop.
If you treat AI as a set of bounded routines that fill specific LGR22 gaps, you can reduce friction without lowering standards. The aim is not to do more. It is to do what matters, more consistently, with less waste.
Here’s to calmer planning, clearer communication, and fewer late-night rewrites.
The Automated Education Team