LGR22 Digital Competence: An AI Evidence Pack

Teach and assess AI within LGR22—without an extra unit

A teacher guiding pupils through digital competence tasks using AI tools across subjects

Digital competence in LGR22

LGR22 frames digital competence as more than “using devices”. Pupils are expected to use digital tools to investigate, create, communicate, and solve problems, while also understanding how digital systems influence what they see and believe. That includes source criticism, safe and responsible behaviour, and an awareness of how information can be shaped by technology. AI fits naturally within this because it is now one of the most common “digital systems” pupils encounter: search results are ranked, feeds are personalised, and text, images, and audio can be generated quickly.

The key move is to treat AI as a tool and a phenomenon, not as a separate topic. You can teach the subject content you already teach while collecting small pieces of evidence that show pupils can use tools effectively, explain limitations, and verify outputs. If you want a wider cross-subject thread for inspection-ready documentation, the approach here aligns well with a throughline model for LGR22: short routines, repeated often, with clear evidence.

A simple progression model

A workable progression from mellanstadiet to Åk 8 is to shift from “using tools with support” to “using tools with judgement”. In mellanstadiet, pupils can practise structured prompts, basic spreadsheets, and simple programming concepts, while learning that AI can be wrong and must be checked. By Åk 7–8, they should be able to explain trade-offs (bias, missing context, data privacy), compare sources, and document a transparent workflow that shows what they did and why.

Across subjects, aim for three recurring competencies. First, tool fluency: using spreadsheets, editors, and simple coding environments to produce something specific. Second, critical understanding: explaining how algorithms and AI can influence information. Third, responsible practice: minimum-data habits, attribution, and verification. The micro-artefacts below are designed to be dropped into ordinary lessons so that progression is visible without creating an “AI week”.

Micro-artefact 1: Spreadsheet routine

In Matematik or Teknik, a spreadsheet task provides clear evidence of digital tool use and mathematical reasoning. The twist is an “Excel Guru” routine: pupils use AI as a coach, but they must still build and check the spreadsheet themselves.

Set up a small dataset that fits your current topic: pulse rates after exercise, daily temperatures, or measurements from a simple build in Teknik. Pupils enter data, calculate the mean, and then calculate the standard deviation (or a simpler measure of spread in mellanstadiet, building up to standard deviation by Åk 8). The AI role is tightly framed: it may suggest formulae and explain what they mean, but pupils must paste the final formula into the sheet, annotate it in plain language, and verify it with a manual check on a small subset.
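The manual check at the heart of this routine can be made concrete. As a sketch only, with made-up pulse readings, here is the same arithmetic the spreadsheet performs, so you (or an older pupil) can confirm what =AVERAGE and =STDEV.S should return on a small subset:

```python
import math

# Hypothetical pulse readings (beats per minute) after exercise.
# The values are illustrative, not from a real class dataset.
pulses = [92, 104, 88, 110, 96]

# Mean: what =AVERAGE(A2:A6) computes in the sheet.
mean = sum(pulses) / len(pulses)

# Sample standard deviation: what =STDEV.S(A2:A6) computes.
# Squared deviations from the mean, divided by (n - 1), then square-rooted.
variance = sum((x - mean) ** 2 for x in pulses) / (len(pulses) - 1)
std_dev = math.sqrt(variance)

print(mean)                 # 98.0
print(round(std_dev, 2))    # 8.94
```

Working one row by hand against this output is exactly the "verify on a small subset" habit the routine asks for: the AI may suggest the formula, but the pupil owns the check.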

The evidence you collect is straightforward: a screenshot of the sheet showing the formulae, plus a short reflection answering, “What did the AI help with, and how did you check it?” This maps neatly to centralt innehåll around using digital tools for calculations and presenting results, while also building the habit that AI output is a starting point, not a final answer.

Micro-artefact 2: Programming knowledge check

In Teknik/Matematik, you can assess algorithmic thinking without adding a full programming project by using a short “knowledge check” quiz that pupils partly generate. In Åk 6, pupils can use AI to create a ten-question multiple-choice quiz on key concepts you have taught: sequences, loops, conditions, variables, debugging, and what an algorithm is. The constraint is that the pupil must supply the concept list and difficulty level, and then must correct any errors in the AI’s questions.

Run it as a paired task. One pupil generates and edits the quiz; the other pupil attempts it and flags unclear items. Then they swap roles. The assessment evidence is the final quiz plus a short “debug log” where pupils note at least two improvements they made (for example, fixing a misleading distractor or adding a code snippet). This shows centralt innehåll progression from recognising and describing algorithms in mellanstadiet to explaining and refining them by Åk 8. If you are building staff routines, the micro-routine approach mirrors the kind of implementation described in an INSET day AI workshop plan, but here it sits within your normal Teknik lessons.
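A quiz item with an embedded code snippet often produces the best debug-log entries, because the AI's claimed answer can simply be wrong. As an illustration only (the snippet and the "misleading answer" are invented for this example), an off-by-one in a loop makes a good candidate:

```python
# Illustrative quiz item: "What does this program print?"
# A generated quiz might claim the answer is 15 (the sum 1..5),
# but range(1, 5) stops before 5, so the loop runs for n = 1, 2, 3, 4.
total = 0
for n in range(1, 5):
    total += n
print(total)  # 10, not 15
```

Spotting and correcting that kind of error, and writing one sentence explaining why, is precisely the evidence of algorithmic thinking the knowledge check is meant to capture.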

Micro-artefact 3: Källkritik concept map

In Samhällskunskap, a concept map can make “how misinformation works” visible and assessable. Pupils create a concept map that links: misinformation, disinformation, algorithms, engagement, filter bubbles, deepfakes, and verification strategies. AI can help generate candidate links, but pupils must justify each connection with an example and a source.

To map directly to centralt innehåll, keep the language anchored in what pupils already encounter: how information spreads, how opinions can be influenced, and how digital platforms shape public conversation. For mellanstadiet, the concept map can focus on “what makes something trustworthy” and “why people share”. By Åk 8, add deeper links: recommender systems, synthetic media, and how incentives can distort content. The evidence is the concept map plus a short oral or written explanation where pupils use at least five key terms accurately. If you want ready-made discussion structures for the ethical side, an AI ethics classroom kit can provide case prompts that fit neatly into this map.

Micro-artefact 4: 60-minute fake news

A single 60-minute lesson can generate rich evidence if you plan the “capture points” in advance. Use a fake-news case (text plus image) that is plausible but not politically sensitive. Pupils work in groups with roles: verifier, summariser, source-hunter, and sceptic. The AI tool is allowed only for two purposes: generating verification questions and summarising what the group has already found. It is not allowed to “decide if it’s true”.

Set E/C/A objectives that are observable. At E level, pupils identify at least three warning signs and use one verification method (reverse image search, checking the original publisher, comparing with two reputable sources). At C level, pupils explain how platform algorithms and emotional language can increase spread, and they document a repeatable checking process. At A level, pupils evaluate uncertainty, explain what evidence would change their judgement, and reflect on how AI could both help and harm verification.

Build evidence capture into the workflow: a shared document with a checklist, two screenshots of sources used, and a final claim statement with a confidence level and justification. This is the moment to make your platform expectations explicit, and it pairs well with a school-wide refresh such as an annual acceptable use policy checklist, because pupils are practising the policy rather than merely hearing it.


Micro-artefact 5: Writing with tools

In Svenska, the most assessable shift is from “a finished text” to “a transparent process”. Pupils can absolutely use digital tools, including AI, to plan and revise, but they must leave a trail that shows authorship decisions. Ask for three artefacts: an outline, a draft with tracked changes (or version history), and a short transparency note.

The transparency note is simple: what tools were used, what prompts (or instructions) were given, what was accepted or rejected, and what was verified. Verification checks can be age-appropriate: in mellanstadiet, pupils might check names, dates, and whether a quote really exists. By Åk 8, add register and style checks, fact-checking against two sources, and a short paragraph explaining how they avoided copying. This supports centralt innehåll around creating texts with digital tools and adapting language to purpose and audience. If you want a broader approach to showcasing proof of learning when AI is involved, a proof-of-learning playbook offers formats that work well for writing portfolios.

Assessment and documentation

A “digital competence evidence pack” works best when it is light. You are not collecting everything pupils do; you are collecting a few high-signal artefacts that show progression. For each micro-artefact, save the final product and one short reflection. Add a simple teacher note (two or three sentences) on what was observed: independence, checking behaviour, and use of subject vocabulary. Over time, you build a cross-subject record that is easy to explain to pupils, colleagues, and leadership.

What not to save matters too. Avoid storing raw chat logs that include personal data, and do not require pupils to paste full conversations. Save outputs, prompts only when necessary for understanding, and reflections written by the pupil. If you need a practical privacy-first rollout mindset, a minimum viable AI toolkit is a useful companion.

Responsible AI routines

Responsible use becomes teachable when it becomes routine. Minimum-data is the baseline: no personal identifiers, no sensitive details, and no uploading of pupil work unless your tool and policy explicitly allow it. Attribution is next: pupils label AI support clearly, in the same way they cite a website or a book. Finally, “teacher-in-the-loop” checks keep the pedagogy sound: pupils must verify facts, test formulae, and justify decisions, and you sample-check a small number of steps rather than trying to police everything.

Where compliance questions arise, it helps to separate classroom practice (what pupils do) from governance (which tools are approved). If your school is aligning AI use with wider regulation, an EU AI Act explainer for Swedish schools can support leadership conversations without derailing teaching.

Common pitfalls

The first pitfall is tool-led planning: lessons where the AI feature is the point, not the learning. The fix is to start with centralt innehåll and assessment evidence, then choose the smallest AI use that supports it. The second pitfall is overclaiming: pupils presenting AI-generated work as their own thinking. The fix is process evidence, transparency notes, and quick oral checks that focus on reasoning. The third pitfall is weak source integrity: pupils trusting confident outputs or citing AI as a source. The fix is to teach “AI is not a source”, require two independent sources for factual claims, and make uncertainty an acceptable outcome when evidence is incomplete.

Used well, these micro-artefacts keep AI within digital competence, where it belongs, while strengthening subject learning across the timetable.

For calmer verification moments and more trustworthy pupil work, The Automated Education Team
