
Why AI belongs inside digital citizenship
Digital citizenship has long covered online safety, respectful behaviour and critical evaluation of information. AI now threads through all of these. Pupils do not just browse the internet; they co-write with chatbots, generate images and receive personalised feedback from algorithms. Treating AI as a separate “tech topic” leaves a gap between the rules we teach and the tools pupils actually use.
Framing AI as a core strand of digital citizenship helps in three ways. First, it keeps the focus on behaviour and judgement, not just on how clever the technology is. Second, it anchors AI in familiar themes such as privacy, plagiarism and bias. Third, it makes space for progression: what responsible AI use looks like in Year 4 is very different from Year 12.
Many schools are already exploring AI literacy more broadly, for example through work like AI literacy in schools. The next step is to weave AI into your digital citizenship curriculum so pupils practise making good choices with real tools.
From rules to routines
Telling pupils “do not copy and paste from AI” or “check if it is biased” rarely changes behaviour. Responsible AI use becomes real when it shows up in everyday routines. Pupils need simple, repeatable habits they can use in any subject.
For example, a lower secondary class might adopt a three-step routine whenever they use AI for schoolwork: state your goal, show your prompts, and highlight what you changed. A primary class might use a traffic light system to decide whether a prompt is safe or needs adult help. An upper secondary group might routinely add a short “AI use statement” to coursework, explaining how they used or chose not to use AI.
These routines align closely with debates about when AI helps or harms learning, explored further in When AI helps vs harms learning. The aim is not to ban AI, but to help pupils use it thoughtfully, transparently and within clear boundaries.
Age-banded learning goals
Primary (ages 7–11)
At primary level, the focus is on simple concepts and concrete examples. Pupils learn that AI is made by people, can make mistakes and does not “know” them personally. They practise asking safe questions, spotting when AI has got something wrong and explaining why they should not share personal details.
A realistic goal is for pupils to articulate, in their own words, one way AI can help them learn and one risk they need to watch for.
Lower secondary (ages 11–14)
Here pupils can handle more nuance. They explore how AI tools generate responses, what training data means and how bias can creep in. They begin to distinguish between using AI as a thinking partner and using it as a shortcut to avoid work.
By the end of a mini-unit, pupils should be able to describe how they used AI in a task, identify at least one limitation of the output and suggest ways to improve or double-check it.
Upper secondary (ages 14–18)
Older pupils tackle questions of ethics, academic integrity and long-term digital footprints. They examine how AI might influence future study and work, and how their own data could be used. They practise writing honest declarations of AI use and debating grey areas, such as whether grammar correction counts as cheating.
By this stage, pupils should be able to argue for a balanced position on AI in education, drawing on examples, policies and their own experience. Linking to discussions from AI is not cheating can help frame these conversations.
Classroom Activity Set 1: Understanding how AI works
A good starting point is to demystify AI without getting too technical. Pupils do not need to understand neural networks in detail; they need an intuitive sense of patterns, data and limitations.
In primary classes, you might play a “human chatbot” game. One pupil leaves the room while the class secretly agrees on a topic. The returning pupil answers questions as the “AI”, but can only use information written on slips of paper by classmates. Afterwards, you discuss how the “AI” was limited by the data it had.
Lower secondary pupils could compare answers from an AI tool with a textbook or trusted website. Working in pairs, they highlight where the AI is accurate, vague or wrong, then annotate with questions: “How do we know?” “What is missing?” This links well with source evaluation skills from units like Teaching source evaluation in the AI era.
Upper secondary pupils might explore prompt engineering and failure modes. Ask them to deliberately design prompts that expose weaknesses: ambiguous questions, biased wording or requests for very recent events. They then write short reflections on what these failures reveal about how the system works.
Classroom Activity Set 2: Academic integrity and ‘honest help’
Academic integrity with AI is about more than catching plagiarism. Pupils need a clear sense of what counts as “honest help”. You can frame this as a spectrum, from acceptable support (idea prompts, language feedback) to unacceptable substitution (AI writing the whole essay).
In primary, keep it simple. Pupils might work with teacher-prepared AI outputs, then practise “making it their own” by adding personal examples, drawings or oral explanations. Emphasise that their teacher wants to hear their ideas, not the computer’s.
Lower secondary pupils can role-play different scenarios: one pupil used AI to generate a plan, another to fix spelling, another to write the whole homework. Groups decide which examples are fair, unfair or “it depends”, and justify their choices. Together, you then draft a class charter for honest AI use in schoolwork.
Upper secondary pupils can analyse real or fictional case studies of AI misuse in coursework. They might compare institutional policies, discuss grey areas and then draft their own personal AI use statement for the year. Linking back to arguments from AI is not cheating can deepen the discussion.
Classroom Activity Set 3: Privacy and personal data
AI tools often invite pupils to share information about themselves, even when that is not necessary. Digital citizenship needs to address this explicitly.
Primary pupils can sort example prompts into “safe to ask” and “ask an adult first”. For instance, “Explain fractions using pizza” versus “Help me tell my friend a secret”. You can create simple posters with three rules: do not share personal details, do not upload photos of others without permission, and ask an adult if unsure.
Lower secondary classes might examine the sign-up pages or privacy statements of popular tools (screenshotted or teacher-curated if sites are blocked). Pupils highlight what data is collected and discuss where it might go. They then rewrite the key points in plain language for younger pupils.
Upper secondary pupils can debate longer-term digital footprints. How might their interactions with AI be stored? What could future employers infer from their data? This can lead into practical steps such as using school accounts, avoiding sensitive topics and logging out on shared devices.
Classroom Activity Set 4: Bias, fairness and human impact
Bias and fairness are often taught abstractly. AI offers tangible examples pupils can investigate.
Primary pupils might look at AI-generated pictures of “a scientist” or “a nurse” (prepared by the teacher). They count who appears: how many women, men, people with different skin tones or visible disabilities. This opens discussion about stereotypes and why diversity matters.
Lower secondary pupils can test AI with prompts that risk bias, such as “Write a story about a programmer” or “Describe a good leader”. They then rewrite prompts to encourage more inclusive results, and reflect on which version feels fairer.
Upper secondary pupils can explore real-world cases where AI has affected people’s lives, such as hiring tools, predictive policing or exam grading algorithms. They research one case, identify who was harmed or helped, and propose guidelines that might have prevented problems.
Low- and no-device options
Responsible AI education does not require a device per pupil. Many activities work with a single teacher device and projector, or even entirely unplugged.
You can simulate AI using card sorting, role-play and “if this, then that” decision trees. For instance, pupils design a simple “homework help bot” on paper, deciding what it should say when a question is too personal, unsafe or unclear. They then test each other’s bots by following the flowcharts.
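For classes with one shared device, the same flowchart logic can be shown as a short program. This is only an illustrative sketch: the keyword lists, function name and replies below are invented for the example, and a real class would agree on their own rules, just as they do on paper.

```python
# A minimal rule-based "homework help bot", mirroring the paper
# decision tree: classify the question first, then decide how to reply.

# Keyword lists are illustrative placeholders, not a vetted safety filter.
PERSONAL = {"my address", "my phone", "secret", "password"}
UNSAFE = {"hurt", "dangerous", "weapon"}

def homework_bot(question: str) -> str:
    q = question.lower()
    # Branch 1: too personal -> refer to an adult
    if any(word in q for word in PERSONAL):
        return "I can't help with personal details. Please ask an adult."
    # Branch 2: unsafe -> refer to a trusted adult
    if any(word in q for word in UNSAFE):
        return "That sounds unsafe. Please talk to a trusted adult."
    # Branch 3: unclear -> ask for more detail
    if len(q.split()) < 3:
        return "Your question is unclear. Can you add more detail?"
    # Otherwise: offer help
    return "Here is some help with your homework question."

print(homework_bot("Explain fractions using pizza slices"))
print(homework_bot("Help me tell my friend a secret"))
```

Pupils can "test each other's bots" exactly as with the flowcharts: one pupil plays the user, another traces the rules, and the class debates where the branches should really sit.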
Where connectivity is limited, you might use printed screenshots of AI conversations. Pupils annotate them with highlighters, identifying good prompts, risky disclosures or misleading answers. The key is the thinking, not the live technology.
Building school-wide consistency
Individual lessons are powerful, but pupils need consistent messages across subjects and year groups. This means aligning classroom practice with school policies and home communication.
Consider agreeing on a small set of whole-school principles, such as “Be honest about how you used AI”, “Protect your personal information” and “Check important facts with more than one source”. Different departments can then adapt these to their context, while keeping the core language the same.
It also helps to involve families. Share simple explanations of how the school approaches AI, along with suggested questions parents can ask at home, such as “How did AI help you with this task?” rather than “Did you use AI?”
To embed AI within digital citizenship, you might create a short, reusable toolkit for your classroom or department. This could include a one-page checklist pupils use whenever they work with AI: Did I share any personal information? Did I check the facts? Did I make the work my own? Did I record how I used AI?
Discussion prompts can live on your wall or slide deck: “What could go wrong if…?”, “Who might be affected by this AI decision?”, “How could we do this more fairly?” Over time, pupils start asking these questions themselves.
Finally, pupil agreements make expectations concrete. These might be short, age-appropriate statements pupils sign or revisit each term, covering honesty, safety and respect. Combined with the mini-units above, they help move AI from a mysterious add-on to a normal part of digital citizenship: something pupils can navigate thoughtfully, safely and with growing independence.
Happy navigating!
The Automated Education Team