LGR22 Investigations: From Aims to Riskbedömning, Fast

One brief in, audit-ready investigation pack out

[Image: A science teacher preparing an investigation pack with lab equipment and a laptop]

LGR22 places real emphasis on pupils doing systematiska undersökningar—planning, carrying out, documenting, and evaluating investigations, not just following recipes. In practice, many teachers find the “science bit” is manageable, but the paperwork around it grows arms and legs: instructions, data capture, evaluation prompts, differentiation, and a riskbedömning that would stand up to scrutiny. A documentation-first workflow flips that burden. Instead of writing a full lab from scratch, you write one clear brief, then use AI to generate a complete pack of editable starting points, not “finished” materials. If you already use AI for planning, you may find it helpful to connect this approach to a wider AI lesson planning workflow so your investigations sit neatly alongside the rest of your curriculum planning.

What LGR22 demands

When LGR22 refers to systematiska undersökningar, it is asking for more than practical activity. Pupils should be able to pose or refine a question, form a hypothesis, choose variables, use equipment safely, record observations in a structured way, and evaluate reliability and sources of error. The workload spike often happens at the joins: translating syllabus language into pupil-friendly steps, designing a data table that captures the right evidence, and writing evaluation prompts that actually reveal thinking. Add risk assessment and accessibility, and a “simple practical” can become an evening’s work.

A documentation-first workflow treats those joins as the core deliverable. The experiment is still hands-on and meaningful, but the pack is built to make learning visible and assessable. This aligns well with the idea of assessment as evidence gathering rather than extra tasks; if you are refining how you capture learning, it is worth pairing this with approaches to formative assessment with AI so your prompts and exit questions stay focused.

The one-brief input

The teacher brief is the only part you write from scratch. It should be short, specific, and written as if you are briefing a capable colleague. In one paragraph, you clarify the investigation focus, the class context, the constraints, and what “good evidence” will look like.

A strong brief might include: year group (åk), topic, the key concept, available equipment, time available, any known sensitivities (e.g. fragrance allergies), and the kind of data pupils should produce (table, graph, annotated diagram, paragraph evaluation). It also helps to state what you want pupils to practise: controlling variables, repeated trials, or evaluating uncertainty.
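To make that concrete, here is a sketch of what such a brief might look like for the indicators practical discussed later in this article. Every detail is a placeholder to adapt to your own class, not a required template:

```text
Class: åk 7, 60-minute lesson in the science lab.
Topic: syror och baser med indikatorer.
Key concept: classifying solutions as acidic, neutral, or basic using an indicator.
Equipment: spotting tiles, droppers, universal indicator, colour charts, six household samples.
Constraints: several pupils are sensitive to strong smells, so avoid vinegar.
Practice focus: controlling variables; repeating one measurement to check reliability.
Evidence: completed data table, hypothesis statement, short evaluation paragraph.
```

Notice that it reads like a note to a capable colleague: no pupil names, no medical details, just the context the AI needs to draft something usable.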

What you should not include is personally identifying information about pupils, or anything you would not want copied into a document. Avoid names, medical details, and behaviour notes. Keep it general: “one pupil has a nut allergy” is rarely needed for a chemistry indicators practical; “several pupils are sensitive to strong smells” might be relevant if you were considering vinegar. If you are setting expectations for responsible use, you can also connect this workflow to a clear AI policy for schools so staff and pupils share the same boundaries.

Output 1: Experiment walkthrough

Your first AI output is the pupil-facing practical, written clearly enough to run with minimal teacher rewording. Here is an example for syror och baser med indikatorer (åk 7), generated from a brief and then edited by the teacher.

The investigation question is: How can indicators help us classify household solutions as acidic, neutral, or basic? Pupils begin by writing a hypothesis such as, “If a solution is acidic, universal indicator will turn red/orange.” The method then runs in short, numbered steps:

1. Set up a spotting tile or small cups.
2. Label samples A–F.
3. Add a fixed volume of each solution.
4. Add two drops of indicator.
5. Compare against a colour chart.
6. Record the colour and inferred pH range.
7. Rinse equipment between samples to avoid contamination.

The teacher version includes a note on sensible sample choices that are widely available and safer in schools, such as lemon juice solution, bicarbonate solution, soapy water, and plain water.

Crucially, the walkthrough also tells pupils what counts as careful work. For example, it prompts them to keep drop size consistent, to use the same lighting when judging colour, and to repeat one sample to check reliability. This is where AI drafts can be helpful: they often remember the “boring but important” steps that make results interpretable—provided you sanity-check them against your equipment and classroom reality.

Output 2: LGR22 mapping table

The second output is a mapping table that links LGR22 language to what pupils actually do, and what evidence they produce. This is the piece that often saves the most time when you are asked, “How does this practical meet the syllabus?”

A useful table has three columns. The first lists the relevant aims or centralt innehåll phrased in teacher language. The second column links each item to a concrete step in the method, such as “identify variables” linked to “keep indicator volume constant” or “plan a fair test” linked to “use the same sample volume for each solution”. The third column specifies the evidence pupils produce: a completed data table, a short hypothesis statement, a labelled diagram of the set-up, and an evaluation paragraph that addresses reliability.
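Laid out as a table, a few rows for the indicators practical might look like the sketch below. The left-column wording should of course follow your own syllabus copy, and the rows here are illustrative, not exhaustive:

```text
| Syllabus aim (teacher language) | Method step                                   | Pupil evidence                        |
|--------------------------------|-----------------------------------------------|---------------------------------------|
| Plan a fair test               | Use the same sample volume for each solution  | Labelled diagram of the set-up        |
| Identify and control variables | Keep indicator volume constant (two drops)    | Completed data table                  |
| Evaluate reliability           | Repeat one sample and compare the results     | Evaluation paragraph on reliability   |
```

Because each row ends in a named piece of evidence, the table doubles as a marking checklist when the work comes in.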

This mapping also makes assessment cleaner. If you are using an E/C/A progression, you can note where the evidence differentiates: an E-level response might correctly classify solutions from colour; a C-level response might explain why rinsing prevents contamination; an A-level response might discuss uncertainty in colour judgement and propose improvements. If you want to tighten that progression further, it can help to use AI to draft success criteria and rubrics that match the evidence you are already collecting.

Output 3: Riskbedömning that is usable

A riskbedömning should be practical, not performative. The third output is therefore a staff-facing table that you can actually use in a prep room: hazards, severity, likelihood, mitigations, consequences, and location.

For the indicators practical, hazards might include mild irritants (indicator solution, some household samples), glassware breakage, slips from spills, and ingestion risk. Severity and likelihood should be realistic for your setting; AI will sometimes overstate hazards or suggest inappropriate PPE, so this is a key edit point. Mitigations should be concrete: goggles for all pupils, small volumes only, no tasting, immediate wipe-up protocol, teacher-controlled distribution of indicator, and clear disposal instructions. Consequences should describe what to do if it goes wrong (rinse eyes, inform staff, follow your school’s first-aid procedure), and location notes might specify “science lab, benches cleared, sinks available”.
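In table form, one entry of such a riskbedömning might read as follows. Severity and likelihood are illustrative judgements to adapt to your own setting, not fixed ratings:

```text
Hazard:       Indicator solution in eyes (mild irritant)
Severity:     Low to moderate
Likelihood:   Low (small volumes, teacher-controlled distribution)
Mitigations:  Goggles for all pupils; two drops only; immediate wipe-up of spills
Consequences: Rinse eyes with water, inform staff, follow the school first-aid procedure
Location:     Science lab, benches cleared, sinks available
```

One row per hazard keeps the document short enough to actually read in a prep room, which is the point.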

If you are using AI to draft risk documentation, treat it like any other draft: check it against your local policies and your own professional judgement. AI can help you not forget a category, but it cannot see your room.


Output 4: Lesson plan wrapper

The fourth output wraps the pack into a teachable lesson with timings, routines, and three-tier objectives. A tight wrapper reduces cognitive load for pupils and makes the investigation run smoothly.

A typical 60-minute structure might open with a five-minute retrieval prompt about acids, bases, and neutral solutions. The next ten minutes introduce the question, model one sample, and explicitly teach what “systematic” means today (constants, careful recording, repeating). Twenty-five minutes are for practical work and data capture, with a mid-point pause where pupils compare one result to spot anomalies. The final fifteen minutes focus on evaluation prompts: “Which result are you least confident about and why?” “What would you change to make colour judgement more reliable?” “How could you present this data to make patterns clearer?”

Three-tier objectives can be written in investigation terms rather than content terms. For example, E: record observations in a table and classify solutions; C: explain how you controlled one variable and why it matters; A: evaluate reliability using examples from your own data and propose a justified improvement. If you are building these kinds of prompts regularly, you may also find value in generating question banks with AI so each practical ends with strong, varied evaluation questions.

Quality gates

AI speeds up drafting, but quality gates keep you safe and credible. Accuracy comes first: check any scientific claims, pH ranges, and indicator colour interpretations against a trusted reference. Feasibility is next: does the method match your equipment, time, and class size, or has it assumed resources you do not have?

Inclusion is not an afterthought. Scan for barriers: colour-blind accessibility (add labels like “pink/orange” plus pH ranges, or allow digital colour sampling), reading load (short steps, key words), and motor demands (use droppers rather than pouring). Finally, run a “teacher judgement remains” check. Ask yourself: would I sign my name to this as safe, appropriate, and aligned to my aims? If not, edit until you would.

Store and reuse packs

The final advantage of documentation-first work is reuse without duplication. Store the pack as a set of components: pupil instructions, teacher notes, mapping table, riskbedömning, and assessment prompts. When you move from Chemistry to Physics, Biology, or Teknik, you reuse the structure and swap the content. A forces investigation might keep the same evaluation prompts and evidence table format, while the riskbedömning template remains consistent but with new hazards. Over time, you build a library of audit-ready packs that are easy to adapt, rather than a folder of one-off worksheets.

To keep this sustainable, name files consistently, keep the teacher brief inside the document for future edits, and note what you changed after teaching it. That way, the next time you need an investigation, you are not starting again—you are improving a living resource.
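A naming convention can be as simple as subject, topic, component, and version. The scheme below is a suggestion, not a standard, and the file names are hypothetical examples:

```text
kemi_indikatorer_pupil-instructions_v2.docx
kemi_indikatorer_teacher-notes_v2.docx
kemi_indikatorer_mapping-table_v1.docx
kemi_indikatorer_riskbedomning_v3.docx
fysik_krafter_pupil-instructions_v1.docx
```

The version number carries the “what I changed after teaching it” history, so the most recent file is always the improved one.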

May your next investigation run smoothly, with paperwork that finally feels proportionate.

— The Automated Education Team
