AI Across the Curriculum: 8 Lesson Moves

A practical, drop-in approach for any subject


What it should mean

“AI across the curriculum” should mean that teachers share a small repertoire of reliable teaching moves that work in English, maths, science, the arts and beyond. The focus is pedagogy first: AI supports thinking, language and feedback, but it does not replace curriculum intent or teacher judgement. When it is done well, departments don’t need to rewrite schemes of work. They simply agree where one or two moves fit naturally into existing lessons, routines and homework.

What it should not mean is “everyone must use the same tool”, “students can outsource tasks”, or “every lesson needs AI”. It should not become a badge of innovation that adds workload. If you cannot explain how the move improves learning, reduces friction, or strengthens evidence, leave it out. A helpful way to frame it is: one move, one purpose, one boundary, one piece of evidence.

If you want a broader set of classroom patterns, the four-channel multimodal AI classroom playbook is a useful companion. For the values and habits that sit underneath, the digital citizenship and AI guide helps you make expectations explicit.

The non-negotiables

Before you pick any move, agree the non-negotiables. This single checklist is designed to be printed and used in planning conversations.

  • Safeguarding and age-appropriateness: Use approved tools only; avoid open, unmoderated chat for younger learners; never ask for personal data; teach students how to report worrying outputs.
  • Privacy and data: Do not paste identifiable student information; anonymise work; prefer school-managed accounts; check retention settings and data processing terms.
  • Accessibility and inclusion: Provide non-AI alternatives; ensure tasks work with screen readers and translation tools; check reading level; avoid “speed wins” that reward those with better devices.
  • Bias and representation: Ask for multiple perspectives; check stereotypes; require students to cite sources beyond the model; build in “challenge the output” routines.
  • Assessment integrity: State what AI can and cannot do; collect process evidence; use in-class checkpoints; design tasks that require local context, personal reasoning, or live performance.
  • Teacher oversight: Treat AI as a draft generator, not an authority; verify facts; keep exemplars; log prompts for repeatability.
  • Wellbeing and workload: Use moves that reduce marking or improve feedback quality; avoid adding steps unless they replace something else.
  • Equity of access: If AI use is optional, ensure non-users are not disadvantaged; provide shared devices or structured in-class time where needed.

For deeper thinking about what “original” means in 2024 and beyond, see the redefining originality assessment 2024 guide.

The planning template

This is a copy-and-use one-page template. Keep it tight; the discipline is the point.

Lesson move: (choose one of the eight below)
Subject/topic:
Learning goal (one sentence):
Why AI helps here (one sentence):

Inputs you will provide: (key text, data, image, model answer, rubric, vocabulary list)
Student task (what they produce): (e.g., paragraph, solution, explanation, design, performance plan)
AI role (allowed): (e.g., generate examples, question prompts, feedback on structure)
AI role (not allowed): (e.g., writing the final answer, solving an assessed problem, creating sources)

Prompt (teacher-written):
Success criteria: (2–4 bullet points)
Teacher checks: (fact check, bias check, misconception check, accessibility check)
Safeguarding/privacy note: (what must not be entered; tool/account rules)
Integrity evidence: (what you will collect: drafts, annotations, oral check, screenshots, prompt log)
Adaptations: (SEND/EAL, extension, low-tech alternative)
Exit check: (one question/task to confirm learning)

If you teach multilingual learners, the AI for EAL/ESL beyond translation article provides strong adaptation ideas that fit neatly into this template.

8 repeatable lesson moves

Each move below includes a quick prompt you can adapt and a “teacher check” to keep it safe, accessible and instructionally sound. The goal is not perfect prompting. It is repeatable routines.

1) Vocabulary front-loading

Use AI to generate student-friendly definitions, examples and non-examples, then teach them explicitly. This works best when you supply the topic and the intended meaning, because many terms are context-sensitive.

A quick prompt: “Create a vocabulary set for [topic] with: a definition in 12 words, one example in [subject context], one non-example, common confusions, and a short retrieval quiz. Reading age: [x].”

Teacher check: Verify subject precision and remove misleading “near-synonyms”. In science, for instance, “theory” and “hypothesis” are often mishandled.

2) Misconception surfacing

Use AI to generate plausible wrong answers that mirror real misconceptions, then ask students to diagnose and correct them. This makes thinking visible without putting a student on the spot.

A quick prompt: “List five common misconceptions about [concept]. For each, write a student-style explanation that sounds confident but is wrong, and add a brief teacher note explaining the error.”

Teacher check: Ensure misconceptions are genuinely plausible and aligned with your curriculum sequence. Avoid introducing ideas you have not taught yet.

3) Worked-example fading

Start with a fully worked example, then progressively remove steps so students complete more of the reasoning. AI can help you generate multiple parallel examples at the same difficulty.

A quick prompt: “Create four worked examples for [problem type]. Example 1 fully worked; Example 2 missing the final step; Example 3 missing two key steps; Example 4 is a problem only. Keep numbers realistic and difficulty consistent.”

Teacher check: Check each example matches your method. In maths, AI often switches strategies mid-sequence, which undermines learning.

4) Critique-and-improve

Give students an AI-generated draft that is intentionally imperfect. Their job is to improve it against a rubric. This keeps the cognitive work with the learner and reduces the temptation to submit AI output as final work.

A quick prompt: “Write a [genre] response to [question] that is ‘almost good’ but includes: weak evidence, one logical leap, and vague vocabulary. Provide a rubric with three criteria so students can improve it.”

Teacher check: Ensure the draft is safe, respectful and age-appropriate. Remove any content that could be harmful or culturally insensitive.

5) Data-to-argument

Students move from data (table, graph, results, match statistics, survey findings) to a claim with reasoning and limits. AI can propose multiple claims, but students must choose, justify and qualify.

A quick prompt: “Given this data: [paste table], propose three possible claims. For each, write reasoning, one counterargument, and what extra data would strengthen the claim.”

Teacher check: Validate that claims actually follow from the data. Require students to reference specific numbers and uncertainty.


6) Question laddering

Use AI to generate a sequence of questions from recall to application to evaluation, then use them for hinge checks, cold call, or homework. It is a fast way to create progression without guessing.

A quick prompt: “Create a 10-question ladder on [topic]: 3 recall, 3 understanding, 2 application, 2 evaluation. Add model answers and common wrong answers.”

Teacher check: Remove questions that test trivia rather than your learning goal. Ensure the “wrong answers” are not so persuasive that they confuse novices.

7) Feedback triage

AI can help you give faster, more consistent feedback if you feed it your rubric and a short student extract. The key is to keep feedback focused and actionable, not verbose.

A quick prompt: “Using this rubric: [paste] and this student work: [paste anonymised extract], give: one strength, one priority improvement, and a 3-step action plan. Do not rewrite the work.”

Teacher check: Scan for tone and accuracy. Ensure the feedback matches what you would actually reward in marking.

8) Explanation translation

Students often fail because they cannot access the language of a task, not the concept. AI can rephrase instructions, produce dual-coded explanations, or generate sentence stems—without reducing challenge.

A quick prompt: “Re-explain [concept/task] in three versions: (1) concise, (2) with an everyday analogy, (3) with sentence stems for a written explanation. Keep the maths/science meaning identical.”

Teacher check: Confirm the rephrasing has not changed the meaning. In humanities, watch for softened claims that remove nuance.

Subject-specific examples

In English, vocabulary front-loading can target analytical verbs such as “juxtaposes” or “conveys”, with non-examples that show vague alternatives like “shows”. Critique-and-improve works well with a “nearly there” paragraph that has quotations but weak analysis; students annotate where the reasoning breaks and then redraft. For integrity, keep the improvement in class and collect the annotated draft as evidence of process.

In maths, worked-example fading is the dependable win. Use it for algebraic manipulation, geometry proofs, or statistics methods, but keep the method consistent with your department. Misconception surfacing can be used for common errors like distributing negatives or misreading inequality symbols. A quick oral checkpoint (“talk me through step three”) helps confirm the student owns the method.

In science, data-to-argument fits naturally with practical results, required graphs, or case studies. Ask AI for three claims, then make students select the strongest and justify with specific results and limitations. Vocabulary front-loading supports precision with terms like “accuracy”, “precision”, “reliability” and “validity”, which are frequently muddled.

In humanities, question laddering helps build from knowledge to interpretation without losing rigour. Use it on a source, a map, or a short extract, then finish with an evaluative question that demands a justified judgement. Bias checks matter here: require students to identify perspective, missing voices, and what evidence would challenge the narrative.

In languages, explanation translation supports access without turning the task into English-first learning. Use AI to generate sentence stems, model dialogues and controlled practice, then have students perform live or write under timed conditions. For EAL learners across subjects, the principle is the same: support language, not shortcuts.

In computing, misconception surfacing is powerful for debugging thinking. Ask for “plausible wrong code” and have students explain why it fails, then repair it. Feedback triage can be used on code comments against a style guide, but ensure students still run and test their own programs.
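To make the idea concrete, here is a minimal sketch of what a “plausible wrong code” exercise might look like, using Python purely as an illustrative language (the function names and scenario are hypothetical, not from any particular scheme of work). Students diagnose why the confident-looking version fails, then write the repair.

```python
# Hypothetical "plausible wrong code" exercise for misconception surfacing.
# The buggy version looks reasonable but crashes on a common edge case;
# students are asked to explain the failure before repairing it.

def average_buggy(scores):
    # Misconception: assumes the list is never empty.
    # Fails with ZeroDivisionError when scores == [].
    return sum(scores) / len(scores)

def average_fixed(scores):
    # Repaired version: handle the empty-list edge case explicitly.
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

print(average_fixed([70, 80, 90]))  # 80.0
print(average_fixed([]))            # 0.0
```

The teacher check still applies: run the buggy version live so students see the real error message, and confirm the misconception matches ones your class actually holds.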

In the arts, critique-and-improve can focus on artist statements, composition rationales, or peer feedback prompts. AI can generate alternative compositions or interpretations, but students should justify choices using subject language and their own intent. Keep originality by requiring process photos, drafts, or rehearsal notes.

In PE and DT, data-to-argument works with performance statistics, training logs, or design testing results. Students can use AI to propose training adjustments or design improvements, then must justify them with their own data and constraints. Accessibility matters: provide non-digital ways to record evidence if device access is uneven.

Assessment integrity by design

Integrity is strongest when it is designed in, not policed afterwards. Start by setting boundaries in plain language: what AI may do, what it must not do, and what will be checked. Then capture process as routine evidence. A simple pattern is “plan, draft, justify”: students submit a plan (or prompt log), a draft, and a short justification explaining key choices. In class, use small oral checks—one question about why they chose a method, example, or interpretation. This does not need to be adversarial; it is simply confirming ownership.

Marking fairly becomes easier when you separate product from process. You can reward the final performance while also crediting planning, iteration, and reflection. Where AI is allowed, assess the human decisions: the selection of evidence, the quality of reasoning, the handling of counterarguments, and the explanation of limitations. If AI is not allowed, use controlled conditions and live components that naturally reduce outsourcing.

Common pitfalls

Over-scaffolding is the quiet killer. If AI provides too much structure, students stop grappling. Keep scaffolds temporary and fade them, just as you would with worked examples. Bias and hallucinations are predictable, not surprising. Treat them as teachable moments: build in verification and “show me the source” habits, especially when facts matter. Tool sprawl creates confusion and inequity, so agree a small set of approved tools and a shared prompt bank for the eight moves.

Inequity appears when AI becomes homework-only. If access varies, schedule AI-supported work in class or provide a parallel route that meets the same goal. Finally, watch for “AI does the thinking”. If the move results in students copying, swap it for critique-and-improve, misconception diagnosis, or in-class justification—moves that force active reasoning.

A 2-week rollout plan

In week one, run a short department huddle to agree the non-negotiables checklist and choose two lesson moves to trial. Keep the choice pragmatic: one that supports explanation (such as vocabulary front-loading or explanation translation) and one that strengthens reasoning (such as worked-example fading or data-to-argument). Teachers plan a single lesson using the one-page template and teach it within five days, collecting one piece of process evidence per class.

In week two, meet again for 30 minutes and compare evidence: not just student work, but also time saved, misconceptions revealed, and any safeguarding issues. Tighten the prompt, refine the teacher checks, and agree one consistent boundary statement to use with students. Then scale lightly: add one more move, not a new tool. By the end of the fortnight, you should have a shared mini playbook: three moves, three prompts, three examples, and a clear integrity routine that aligns with your policy.

May your next curriculum meeting end with clarity, not clutter.
The Automated Education Team
