EU AI Act Meets LGR22: What Swedish Schools Must Know

A values-led, practical compliance guide for everyday AI


Why this matters in Sweden

AI is already woven into school life: drafting letters to guardians, adapting lesson explanations, summarising notes from student support meetings, and speeding up administrative writing. The EU AI Act arrives at the same moment LGR22 asks schools to live their values in a digital world. The result can feel like a squeeze: innovate, but also prove you are safe, fair and lawful.

The good news is that “compliance” does not need to become a legal project. For most school use cases, the practical aim is to avoid risky automation, keep humans in charge, and be honest about where AI is used. If you already run an annual review of acceptable use, you can fold much of this work into that rhythm rather than creating a parallel process. The structure in an annual AI acceptable use policy refresh is a helpful model for keeping governance light but real.

EU AI Act in plain language

The EU AI Act is risk-based. It does not treat all AI the same; it asks what the system is used for and what harms might follow. For schools, the phrase that raises eyebrows is “high-risk”, because some AI used in education can affect learners’ opportunities, access to support, or how they are treated.

In everyday school terms, “high-risk” is more likely when AI is used to make or strongly steer decisions about admissions, grouping, progression, special support, safeguarding escalation, or formal assessment outcomes. Even if a tool is marketed as “just recommendations”, it can become high-impact if staff feel pressured to follow it, or if it becomes the default basis for decisions.

By contrast, many common classroom and administrative uses are lower risk when handled well: generating practice questions, rephrasing texts, translating communications, or drafting feedback that a teacher edits before sharing. The risk changes with context. A tool used for brainstorming lesson starters is not the same as a tool that predicts attainment and nudges intervention lists.

LGR22 as a practical test

LGR22’s fundamental values are not a poster on the wall; they can be a working test for AI use. When staff are unsure whether a tool “counts as compliant”, values give a shared language that is meaningful in schools.

Democracy

Democracy in AI use looks like informed participation and contestability. Learners and guardians should not feel that decisions are being made by an invisible machine. Staff should be able to explain, in ordinary language, what the tool does and what it does not do. In practice, this means you avoid “black box authority”: if a tool produces a recommendation, the school can question it, override it, and record why.

A simple classroom example is using AI to generate differentiated reading questions. Democracy is supported when the teacher chooses which questions to use, adapts wording for the class, and invites learners to critique the questions. Democracy is undermined when the AI output is treated as “correct” and unchallengeable.

Human rights

In school AI use, human rights come down to dignity, non-discrimination, and protection of personal data. In day-to-day terms, you minimise sensitive information, you are careful with vulnerable learners’ details, and you avoid systems that could systematically disadvantage groups.

A common risk pattern is “profiling by convenience”: staff paste pastoral notes into a chatbot to “get advice”. Even if the intent is supportive, it can expose sensitive data and create a shadow record outside school control. Another risk pattern is biased language in AI-generated reports. If a tool tends to describe some learners more negatively, the harm is cumulative and real.

If your school is improving its reporting workflow, look at how audit trails and data protection can be built into the process rather than bolted on afterwards. The thinking in AI-assisted report writing with audit trails transfers well to Swedish contexts, even if your templates differ.

Ethics

Ethics is where LGR22 becomes practical: it is not just “legal or illegal”, but “right or wrong for our learners”. Ethical AI use in schools means proportionate use, clear boundaries, and professional judgement. If AI saves time but reduces relationship-based understanding, it is not automatically a win.

A good ethical habit is to ask: “If this output were wrong, who would be harmed?” If the answer is “a learner’s support, opportunities, or reputation”, the workflow needs stronger human oversight and better documentation.

Three non-negotiables

Most school AI governance can be organised around three non-negotiables that map neatly to both the EU AI Act’s intent and LGR22’s values: transparency, human oversight, and data minimisation.

Transparency

Transparency means you can tell people when AI is used, what data went in, and what came out. It also means you can explain limitations. In practice, schools can adopt short “transparency notes” on AI-assisted outputs, such as a line in a report-writing system or a footer in a drafted letter that staff remove or keep depending on context.
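As a minimal sketch of how a transparency note might be attached in a drafting helper (the function is hypothetical, not a feature of any particular product; the wording matches the template in the appendix):

```python
TRANSPARENCY_NOTE = (
    "AI was used to support drafting and language clarity. "
    "A member of staff reviewed and edited the final text. "
    "No automated decisions were made."
)

def with_transparency_note(draft: str, keep_note: bool = True) -> str:
    """Attach the standard note; staff remove or keep it depending on context."""
    return f"{draft}\n\n{TRANSPARENCY_NOTE}" if keep_note else draft
```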

Transparency is also internal. Staff need a shared understanding of which tools are approved and what “approved use” means. A lightweight toolkit approach, with clear privacy defaults and routines, can help. The minimum viable back-to-school AI toolkit is a useful reference point for organising this without overwhelming colleagues.

Human oversight

Human oversight means AI does not make decisions about learners. It can suggest, draft, summarise, or offer options, but a professional remains responsible for the final judgement. Oversight is easiest to evidence when workflows are designed so that staff must review and edit before anything is saved, shared, or acted upon.

In everyday terms, this can be as simple as: “AI drafts; staff approve.” For higher-stakes areas, it may be: “AI proposes; a named role reviews; a second check for safeguarding-sensitive items.” The key is that the workflow forces a pause for judgement rather than relying on goodwill.
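To make that pause concrete, here is a minimal sketch of an approval gate, assuming a simple in-house script rather than any particular product; all names and fields are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AiAssistedDraft:
    """An AI-generated draft that cannot be shared until a member of staff approves it."""
    content: str
    purpose: str                           # e.g. "guardian letter", "lesson scaffold"
    reviewed_by: Optional[str] = None      # name of the approving member of staff
    approved_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        """Record the human check; nothing is released before this runs."""
        self.reviewed_by = reviewer
        self.approved_at = datetime.now()

def share(draft: AiAssistedDraft) -> str:
    """Refuse to release unreviewed output, so oversight is enforced rather than assumed."""
    if draft.reviewed_by is None:
        raise PermissionError("Draft has not been reviewed by a member of staff.")
    return draft.content
```

The design choice is the point: the workflow itself refuses to proceed without a named reviewer, rather than trusting that everyone remembers to check.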


Data minimisation

Data minimisation is the most immediately actionable principle. It means you only use the data you need, for the shortest time, with the least identifiability. In schools, it also means resisting the urge to paste whole documents “because it’s quicker”.

A practical example: if you want AI help to rewrite an email to a guardian, you rarely need names, personal identity numbers, medical details, or a full behaviour log. You can use placeholders and keep the prompt focused on tone and clarity. Over time, this becomes a staff habit, supported by templates and “never paste” lists.
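As a minimal sketch of the placeholder habit, assuming a small helper script run locally before prompting (the name list and function names are illustrative):

```python
import re

# Swedish personal identity numbers in 10- or 12-digit form, e.g. 091231-1234.
PERSONNUMMER = re.compile(r"\b(?:\d{8}|\d{6})[-+]?\d{4}\b")

# In practice this would come from the class list, not be hard-coded.
KNOWN_NAMES = ["Elsa Lindqvist", "Omar Ali"]

def to_placeholders(text: str) -> str:
    """Swap identifiers for neutral placeholders before the text goes near an AI tool."""
    text = PERSONNUMMER.sub("[PERSONNUMMER]", text)
    for i, name in enumerate(KNOWN_NAMES, start=1):
        text = text.replace(name, f"[LEARNER_{i}]")
    return text

print(to_placeholders("Elsa Lindqvist, 20091231-1234, was absent on Monday."))
# [LEARNER_1], [PERSONNUMMER], was absent on Monday.
```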

Evidence in everyday workflows

Schools often write strong policies, then struggle to show what happens on Tuesday afternoon when someone is tired and in a hurry. Evidence does not need to be heavy; it needs to be consistent.

Start by choosing two or three common AI workflows and making them “model workflows”. For instance: drafting weekly newsletters, producing lesson scaffolds, and writing student-facing feedback. For each, define the permitted inputs (ideally anonymised), the required human check, and where the output can be stored. If you already run staff training days, you can teach these workflows as routines rather than rules. A practical structure is outlined in an INSET day AI workshop with micro-routines, which can be adapted to your context.
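A workflow card does not need special software; kept as plain data, it can be printed for the staffroom and reviewed alongside the tool register. A sketch for the newsletter example, with all values illustrative:

```python
# One "approved workflow card": permitted inputs, required check, storage.
NEWSLETTER_WORKFLOW = {
    "name": "Weekly newsletter drafting",
    "permitted_inputs": ["anonymised event summaries", "generic dates and times"],
    "prohibited_inputs": ["learner names", "personnummer", "health or safeguarding details"],
    "required_check": "named member of staff reviews and edits before sending",
    "output_storage": "school drive, newsletters folder only",
    "review_date": "end of term",
}
```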

Then, keep a simple tool register and a short decision record for each new tool. The purpose is not bureaucracy; it is to show you have thought about risks, values, and safeguards.

Automated Education as an example architecture

One way to stay aligned with the EU AI Act’s “high-risk in education” logic is to choose architectures that reduce the chance of hidden profiling and automated decisions.

With Automated Education as the example architecture, the compliance-friendly design choices are straightforward to explain to staff and stakeholders:

  • The system can be used without persistent student profiles, so learners are not silently tracked over time.
  • It can be configured not to make automated decisions: it supports drafting and organisation, but staff remain the decision-makers.
  • Inputs and outputs are clear, which supports transparency and review.
  • Human-in-the-loop is built into the workflow: staff generate, edit, and approve before anything is shared.
  • GDPR-aligned defaults support data minimisation by nudging users towards anonymised or limited prompts and avoiding unnecessary retention.

The point is not that one product “solves compliance”. The point is that schools should prefer tools whose design makes good practice easier than bad practice.

Vendor checklist for Swedish schools

Procurement questions do not need legal language; they need clarity. When you speak to vendors, you are looking for signs that they understand education risk and can support your documentation.

Ask vendors to show, not just tell, how the product handles identity, storage, and oversight. You can use a short checklist such as:

  • What data is stored, where, and for how long? Can we control retention?
  • Is any student profiling or persistent tracking used by default?
  • Can the tool be used with anonymised inputs and still be useful?
  • Does the tool ever make automated decisions or rank learners in ways that could steer outcomes?
  • What audit trail exists for staff actions and AI outputs?
  • How do you handle model updates and changes that affect output quality?
  • Can we get clear guidance for staff on “safe prompts” and prohibited inputs?

Red flags include vague answers about retention, “we improve the model with your data by default”, an inability to explain sub-processors, and claims that the tool is “GDPR compliant” without specifics. If a vendor cannot support transparency and oversight, your school will carry that burden alone.

Documentation pack

A lightweight documentation pack helps you evidence good governance without turning it into a legal project. Keep it short, reusable, and tied to real workflows.

For procurement, record the tool’s purpose, intended users, data categories, retention, and a summary of risks and mitigations. For pilots, add a short evaluation note: what worked, what went wrong, and what safeguards were needed. For ongoing use, keep a change log: new features, new use cases, and any incidents.

If you want a rhythm for reviewing these documents, align it with existing cycles. Many schools find it easier to run a short “AI foundations sprint” at the end of term, so September starts calm and organised. The structure in a summer-term AI foundations sprint can make this feel like school improvement rather than compliance theatre.

A 30-day plan

In the first week, pick your top three AI use cases and write “approved workflow cards” for each: permitted inputs, required checks, and where outputs can be saved. At the same time, create a one-page transparency note for staff to use when AI has supported communication or feedback.

In the second week, build your tool register and add the tools already in use. This is often the moment you discover “shadow AI” accounts. Treat it as a learning moment, not a blame exercise. Agree which tools are paused until reviewed.

In the third week, run a short staff session that practises anonymising prompts and doing human checks. Use real examples: rewriting a sensitive email, drafting a support plan paragraph without identifiers, or generating differentiated questions from a generic text.

In the fourth week, run a mini-audit: sample a handful of AI-assisted outputs and check whether your three non-negotiables are visible. Update your workflow cards based on what you learn, then set a review date.
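If AI-assisted outputs are logged with even a few fields, the mini-audit can be partly scripted. This sketch assumes hypothetical record fields; adjust them to whatever your own log actually captures:

```python
import random

def mini_audit(records: list[dict], sample_size: int = 5) -> list[dict]:
    """Sample logged outputs and check that the three non-negotiables are visible."""
    sample = random.sample(records, min(sample_size, len(records)))
    return [
        {
            "id": record.get("id"),
            "transparency": bool(record.get("transparency_note")),
            "human_oversight": bool(record.get("reviewed_by")),
            # If nobody recorded whether identifiers were used, treat it as a fail.
            "data_minimisation": not record.get("contains_identifiers", True),
        }
        for record in sample
    ]
```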

Appendix: copy-and-adapt templates

A tool register can be as simple as a shared table with fields for tool name, owner, purpose, user group, data categories, retention, sub-processors, approved use cases, prohibited use cases, and review date. Add one column for “LGR22 values check”, where you note any democracy, rights, or ethics concerns and how you mitigated them.
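The register can live in a spreadsheet or a plain CSV file. A minimal sketch that creates it with exactly the fields above (the file name is an assumption):

```python
import csv

FIELDS = [
    "tool_name", "owner", "purpose", "user_group", "data_categories",
    "retention", "sub_processors", "approved_use_cases",
    "prohibited_use_cases", "review_date", "lgr22_values_check",
]

# Write the header once; rows are added as each new tool is reviewed.
with open("ai_tool_register.csv", "w", newline="", encoding="utf-8") as f:
    csv.DictWriter(f, fieldnames=FIELDS).writeheader()
```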

A transparency note can be a short paragraph staff can paste into internal documentation or, where appropriate, external communications: “AI was used to support drafting and language clarity. A member of staff reviewed and edited the final text. No automated decisions were made.” Keep it plain, and keep it honest.

A staff “never paste” list should be short enough to remember. Include personal identity numbers, full names with sensitive context, health information, safeguarding details, and any documents containing special category data. Pair it with a positive alternative: use placeholders, summarise without identifiers, or use approved internal tools designed for protected data.
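Because the list is short, it can also be checked automatically before anything is pasted into an external tool. A minimal sketch with illustrative patterns only; each school would maintain its own:

```python
import re

NEVER_PASTE = {
    "personal identity number": re.compile(r"\b(?:\d{8}|\d{6})[-+]?\d{4}\b"),
    "health information": re.compile(r"\b(diagnos\w*|medicin\w*|adhd|epilepsi)\b", re.IGNORECASE),
}

def check_before_paste(text: str) -> list[str]:
    """Return the categories found, so staff can rewrite with placeholders instead."""
    return [label for label, pattern in NEVER_PASTE.items() if pattern.search(text)]

hits = check_before_paste("Support plan for learner with ADHD, personnummer 091231-1234.")
if hits:
    print("Do not paste. Found:", ", ".join(hits))
```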

Towards steadier, more values-led AI use in your school day.

The Automated Education Team
