LGR22 grading criteria: AI model answers

A moderation-first workflow with three worked exemplars

LGR22 invites us to judge quality in the round. That is a strength, but it also raises a practical question: how do departments moderate reliably when the descriptors are intentionally broad? Many teams respond by rebuilding checklists—often unconsciously—because tick-boxes feel safe. The risk is that you end up rewarding surface features rather than genuine understanding. If your next moderation cycle is coming up, it’s worth revisiting how you approach criteria language alongside your wider assessment routines (see assessment design that supports learning).

This post sets out a moderation-first workflow for using AI to help you build model answers at E/C/A without turning LGR22 into atomised micro-skills. The aim is not to “let AI grade”, but to help staff agree what “observable evidence” looks like in your subject, for your tasks, at your year group.

What changed

LGR11-era practice often drifted towards lists of features: “mentions X”, “uses two examples”, “includes a conclusion”. Those lists can be useful as teaching prompts, but they can mislead moderation because they treat quality as additive. Students learn to “collect points” rather than communicate understanding.

LGR22’s holistic descriptors push us to look at the overall coherence, relevance, precision, and development of ideas. Two answers can both include “two examples”, yet one may show causal reasoning while the other is essentially a list. If you convert holistic statements into a strict checklist, you may accidentally narrow the construct you are assessing. That is especially likely when staff are under time pressure and want fast consistency—exactly when a structured, shared workflow helps.

Qualitative progression

A useful way to read LGR22 E/C/A is as a progression in the quality of thinking and communication, not simply the quantity of content. Words such as grundläggande ("basic") and välutvecklat ("well developed") are not invitations to count features; they are prompts to look for depth, connectedness, and precision.

In practice, that progression often shows up in three “observable” shifts that can be moderated without becoming tick-boxes. At E, students usually show basic relevance and some correct ideas, but with gaps, generalisations, or weak links between points. At C, the same ideas are more securely connected: explanations are clearer, examples are better chosen, and reasoning is sustained across the response. At A, the answer tends to be both more precise and more purposeful: it anticipates counterpoints, weighs significance, and uses subject language accurately without sounding like a glossary.

If you want AI to support this, the key is to ask it for evidence patterns and justification notes, not for “criteria lists”. This aligns well with careful, teacher-led use of AI tools in planning and moderation (see practical AI for teachers).

Moderation-first method

Here is a workflow that keeps moderation in charge and uses AI as a drafting assistant.

Start with the task, not the descriptors. Bring the exact question, the expected curriculum content, and any source materials students had. Then agree, as a team, what a successful response must do (for example, “explain at least two causes and show how they contributed”, rather than “mention two causes”). Only then map that onto E/C/A language.

Next, generate candidate model answers with AI, but treat them as “proposals for discussion”. Ask for three levels, then ask the tool to annotate each with short justification notes that reference LGR22 language. Your moderation discussion should focus on whether the proposed differences are genuinely qualitative, and whether they match your teaching.

Finally, refine this into a departmental exemplar pack. Keep the models short enough to be usable, and include a brief “what to look for” paragraph rather than a checklist. If you are building a bank over time, this is where consistent prompts and version control matter (see building reusable AI prompt routines).

Tool 1: Answer Key

Below is a fully worked exemplar you can copy and adapt. The task is typical of Year 8 History, but the workflow applies across ages.

Year 8 History task

Question: Explain the main causes of the First World War. You should include at least two causes and explain how they contributed to war in 1914.

AI prompt (teacher-facing):
“Draft three model answers (E/C/A) for the question. Keep each answer 140–180 words. Then add 3–4 justification notes per level in plain language, linking to LGR22-style progression (basic/developed/well-developed). Do not invent obscure facts; stick to widely taught causes: alliances, militarism, imperialism, nationalism, assassination at Sarajevo. Avoid moralising.”

E-level model answer (example)

The First World War started because there were lots of tensions in Europe. One cause was alliances. Countries had promised to support each other, so when one country got involved, others joined too. Another cause was militarism. Countries built up their armies and navies, which made them feel ready to fight. Also, nationalism made people proud of their country and they wanted to be powerful. When Archduke Franz Ferdinand was killed in Sarajevo in 1914, Austria-Hungary blamed Serbia and declared war. Because of the alliances, other countries got pulled in and the war spread quickly.

Justification notes (E): The answer is relevant and mostly accurate, but explanations are general. Causes are named with limited detail about how they connect. The Sarajevo trigger is included, but reasoning is mostly sequential rather than analytical.

C-level model answer (example)

Several long-term tensions made Europe unstable before 1914. Alliances turned a local conflict into a wider war because countries were committed to support partners. For example, when Austria-Hungary moved against Serbia, Russia felt pressure to back Serbia, and Germany then supported Austria-Hungary, widening the crisis. Militarism also increased the risk of war: governments invested heavily in armies and planned for conflict, so leaders were more willing to use force quickly. Nationalism added to this by encouraging rivalry between states and conflict in places like the Balkans. The assassination of Franz Ferdinand was the immediate trigger, but it mattered because it set off decisions inside this tense system, making escalation more likely.

Justification notes (C): Clearer causal links and a more developed chain of escalation. Uses an example to show how alliances functioned. Distinguishes long-term causes from the trigger, showing more secure understanding.

A-level model answer (example)

The war broke out in 1914 because a fragile international system turned a crisis into escalation. Alliances mattered not simply because they “pulled countries in”, but because they shaped leaders’ choices: Austria-Hungary believed it could act firmly against Serbia with German backing, while Russia saw retreat as a loss of influence in the Balkans. Militarism increased the speed and confidence of mobilisation, making compromise harder once plans were set in motion. Nationalism intensified rivalry and instability, especially in the Balkans, where competing national claims made conflict more likely. The Sarajevo assassination was the trigger, yet the decisive factor was how states interpreted it through fear, honour, and strategic calculation. In that context, diplomacy failed and mobilisation became the default response.

Justification notes (A): More precise language and stronger focus on significance and interpretation. Explains mechanisms (choices, speed of mobilisation) rather than listing factors. Sustains a coherent argument about why escalation happened, not just what happened.

Tool 2: Quiz Generator

A moderation-friendly quiz is not just “harder questions”. It is a set that reveals different kinds of understanding, from recall through application to explanation. Used well, it also helps departments agree what “progression” looks like in student thinking (and it supports retrieval routines without narrowing the curriculum; see AI-supported retrieval practice).

Year 8 Chemistry focus

Topic: Atomer och kemiska reaktioner (atoms and chemical reactions)

AI prompt (teacher-facing):
“Create 10 questions on atoms and chemical reactions for Year 8. Order them from recall to analysis. For each question, add one sentence explaining what the question reveals about understanding. Keep language clear. Avoid requiring specialist national exam formats.”

Question set (with what each reveals):

  1. What is an atom? This checks basic definitions and whether students confuse atoms with molecules.
  2. Name two parts of an atom. This checks core vocabulary (nucleus/electrons) without overloading.
  3. What is the difference between an element and a compound? This checks classification thinking, not memorised examples alone.
  4. What does a chemical equation represent? This checks whether students see equations as “recipes” or as particle-level changes.
  5. In a reaction, what happens to atoms: are they created, destroyed, or rearranged? This checks conservation thinking.
  6. Explain why mass is conserved in a closed system during a reaction. This checks reasoning beyond the slogan “mass is conserved”.
  7. A student says, “The atoms disappear when something burns.” How would you correct them? This checks misconception handling and explanation clarity.
  8. Choose one reaction (rusting, burning, neutralisation) and describe it using particle language. This checks translation from everyday phenomena to models.
  9. If a reaction produces a gas and the container is open, why might the measured mass seem to decrease? This checks boundary conditions and measurement reasoning.
  10. Two different equations both balance. How can you tell which one matches an experiment? This checks evaluation: connecting evidence to models.

In moderation, you can use these questions to triangulate writing judgements. If a student’s extended response seems “strong”, but they cannot handle questions 6–9, that is a useful professional conversation about whether the writing is masking gaps.

Tool 3: Difficulty Adjuster

Stepping an answer down responsibly is harder than it sounds. If you simply “remove detail”, you risk producing an unnatural, patronising response or—worse—an inaccurate one. The safer approach is to keep the same core claim and adjust the quality of reasoning and precision, while logging changes so teachers can see what moved.

Starting point: A-level answer (Chemistry)

A-level version:
“In a chemical reaction, atoms are conserved but rearranged as bonds break and form. A balanced equation reflects conservation of each element’s atoms, not necessarily conservation of volume or appearance. Mass is conserved in a closed system because the total number and type of atoms remain constant; apparent mass loss in open reactions often occurs because gaseous products escape. This is why careful experimental design matters when testing conservation laws.”

AI prompt (teacher-facing):
“Rewrite the answer at C level, then E level, for a younger year group. Keep it scientifically correct. Maintain the same core ideas. After each rewrite, provide a short change log describing what you simplified (vocabulary, sentence structure, reasoning depth). Do not add new facts.”

C-level version:
“In a chemical reaction, the atoms do not disappear. They are rearranged into new substances. A balanced chemical equation shows that the same number of each type of atom is on both sides. Mass is conserved if you measure in a closed container, because nothing can escape. Sometimes it looks like mass has decreased in an open container because a gas is made and leaves the container. That can make results confusing unless the experiment is set up carefully.”

Change log (to C): Shorter sentences and simpler vocabulary (“rearranged” kept; “bonds break and form” removed). Keeps explanation of balanced equations and closed/open measurements. Reasoning remains connected but less abstract.

E-level version:
“In a chemical reaction, atoms are still there. They move around and join up in different ways to make new substances. A balanced equation shows the same atoms before and after the reaction. If you do the reaction in a closed container, the mass stays the same. In an open container, the mass can seem to go down if a gas is made and escapes.”

Change log (to E): Removes most abstract phrasing and reduces the “why” detail. Keeps the key ideas: atoms remain, equations show this, and gases escaping explain apparent mass changes.

Used in department meetings, this tool helps staff agree what “basic” versus “developed” explanation looks like without turning it into a scoresheet.
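
Alongside the AI's narrative change log, a mechanical diff can confirm exactly which wording moved between levels, which stops a "simplified" version from quietly dropping a key idea. Here is a minimal sketch using Python's standard difflib, with the answers abbreviated for space.

# Minimal sketch: a sentence-level diff between two levelled answers,
# using only the standard library. Texts are abbreviated for space.
import difflib

c_level = (
    "In a chemical reaction, the atoms do not disappear. "
    "They are rearranged into new substances. "
    "Mass is conserved if you measure in a closed container."
)
e_level = (
    "In a chemical reaction, atoms are still there. "
    "They move around and join up in different ways to make new substances. "
    "If you do the reaction in a closed container, the mass stays the same."
)

# Split on sentence boundaries so the diff mirrors the teacher-facing change log.
diff = difflib.unified_diff(
    c_level.split(". "),
    e_level.split(". "),
    fromfile="C-level",
    tofile="E-level",
    lineterm="",
)
print("\n".join(diff))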

Quality assurance

AI will sometimes produce confident nonsense, especially when asked for “exam-style” answers without boundaries. The simplest safeguard is to build “teacher sign-off” into your prompts and routines. Ask the tool to state what it is assuming, to flag where it is uncertain, and to keep within the knowledge students were actually taught. When working from sources (for example, a History textbook extract), require the model answer to quote or paraphrase only from that material.

It also helps to separate drafting from validation. Draft with AI, then validate with humans: a quick subject check for factual accuracy, and a moderation check for level alignment. If your team is building confidence with this approach, it fits naturally alongside a wider conversation about safe classroom use (see AI safety and classroom boundaries).

Standardisation pack

To make this sustainable, store exemplars as a small “standardisation pack” rather than a sprawling folder. Each task should include the question, any source material, three levelled models, and short justification notes. Keep a dated version history so you can improve without losing what was agreed.
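
If the pack lives in a shared drive, a small, consistent record per task helps staff find and trust entries. Here is a minimal sketch using Python dataclasses; the field names are suggestions, not a fixed schema.

# Minimal sketch of one standardisation-pack entry, standard library only.
# Field names are suggestions; adapt them to your department's conventions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LevelledModel:
    level: str   # "E", "C", or "A"
    answer: str  # the model answer text
    notes: str   # brief "what to look for" paragraph, not a checklist

@dataclass
class ExemplarTask:
    question: str
    source_material: str = ""  # blank if students had no sources
    models: list[LevelledModel] = field(default_factory=list)
    agreed_on: date = field(default_factory=date.today)
    version: int = 1           # bump whenever moderation revises the wording

Recording agreed_on and version on each entry gives you the dated history without any extra tooling.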

A 30-minute moderation can be tight but effective if it is structured. Spend the first ten minutes aligning on the construct (“what does success look like here?”), the next fifteen minutes comparing two or three student scripts to the models, and the final five minutes recording any adjustments to the exemplar wording. Over time, this creates consistency because staff are repeatedly calibrating to shared anchors, not to personal mental checklists.

Prompt templates

Below are copy-and-adapt templates you can keep in your department pack.

Template A (English):
“Here is a student task: [paste task]. Here is the taught content boundary: [paste bullet summary]. Draft E/C/A model answers (120–180 words each). Keep facts within the boundary. Then add 3–4 justification notes per level describing observable evidence of quality (coherence, precision, development), without turning it into a checklist. End with: ‘Teacher sign-off: verify facts, level alignment, and tone.’”

Template B (Swedish):
“Här är en uppgift: [klistra in]. Här är innehållsgränsen (det eleverna har arbetat med): [klistra in]. Skriv modellsvar på E/C/A (120–180 ord). Håll dig till innehållsgränsen och hitta inte på nya fakta. Lägg sedan till 3–4 korta motiveringsnoter per nivå som beskriver observerbara kvaliteter (sammanhang, precision, utveckling) utan att göra en krysslista. Avsluta med: ‘Lärarens sign-off: kontrollera fakta, nivå och språk.’”
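
If staff fill the bracketed slots by hand, the wording drifts over time; a tiny helper keeps every prompt identical. Here is a minimal sketch using Python's standard string.Template, with illustrative placeholder names ($task, $boundary).

# Minimal sketch: fill Template A's bracketed slots consistently.
# Placeholder names ($task, $boundary) are illustrative, not fixed.
from string import Template

TEMPLATE_A = Template(
    "Here is a student task: $task. "
    "Here is the taught content boundary: $boundary. "
    "Draft E/C/A model answers (120-180 words each). Keep facts within the "
    "boundary. Then add 3-4 justification notes per level describing "
    "observable evidence of quality (coherence, precision, development), "
    "without turning it into a checklist. End with: 'Teacher sign-off: "
    "verify facts, level alignment, and tone.'"
)

prompt = TEMPLATE_A.substitute(
    task="Explain the main causes of the First World War.",
    boundary="alliances, militarism, imperialism, nationalism, Sarajevo",
)
print(prompt)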

One-page checklist (for teachers): Keep the model answer aligned to the task; keep facts within taught content; make E/C/A differences qualitative (links, precision, development); add brief justification notes; run a quick hallucination check; then moderate against real scripts and revise the exemplar.

May your next moderation feel clearer, fairer, and faster.
The Automated Education Team
