KS3/KS4 AI Exploration Week

A timetable-ready half-term project sprint

Overview

AI Exploration Week is a focused, five-lesson (or five-day) project sprint where pupils investigate a real question, test ideas, and build something shareable. The key move is cultural: AI is treated as a tool for research, checking, brainstorming, and prototyping — not as a shortcut for writing finished paragraphs. Pupils are expected to show their workings: what they asked, what they checked, what they rejected, and what they can justify with sources.

It is also not a ‘free-for-all’. The scheme below is deliberately bounded: tight project briefs, minimum-data rules, short mini-lessons that teach the risks, and an evidence-first assessment pack. If you want a broader foundation in responsible use, pair this week with your wider digital citizenship work, such as digital citizenship and AI, so expectations are consistent across subjects.

Pre-week set-up

Before pupils start, make the boundaries visible and practical. Share a simple policy slide: what AI tools are allowed, what is not allowed (for example, generating a full submission), and what must be recorded in the evidence pack. If your school already has guidance, translate it into pupil-friendly language and add two concrete examples: a ‘good’ use (generating questions to investigate) and a ‘not allowed’ use (pasting an essay).

Sort access early. If you have 1:1 devices, plan for paired work anyway, because it reduces friction and improves talk. If devices are limited, set up rotating ‘AI stations’ where one device supports a table group while others use printed sources and note-making. Consider accounts: if pupils need logins, create them centrally or use a whole-class access method that avoids personal data. Keep to minimum-data rules: no full names, no personal stories, no photos of pupils, no identifiable information about others, and no uploading of student work unless approved.

Send a short parent/carer note explaining the purpose, the safeguards, and how you will assess learning. Emphasise that pupils will be taught to verify outputs and cite sources, and that the final grade comes from evidence of thinking. If you need language to frame the integrity piece, the evidence-first approach in from autocomplete to co-authoring aligns well with this week.

Week-at-a-glance timetable

This structure assumes a 50–60 minute lesson each day. The rhythm stays the same: enquiry question, mini-lesson (10–15 minutes), studio time, then a checkpoint that produces evidence.

Day 1: Define and plan. Enquiry question: What problem, claim, or phenomenon will we investigate, and why does it matter? Mini-lesson: how AI works (roughly) and why it can be useful for early-stage research. Studio: choose a project brief, write a scope statement, and build a ‘question ladder’ (broad to narrow). Checkpoint: team plan + initial source list (at least two non-AI sources).

Day 2: Gather and verify. Enquiry question: What do reliable sources say, and where do they disagree? Mini-lesson: hallucinations and verification routines. Studio: collect evidence, extract key facts, and run ‘claim checks’ (AI suggestions must be verified). Checkpoint: annotated notes with a clear source trail.

Day 3: Bias and fairness. Enquiry question: Who benefits, who is harmed, and what might be missing? Mini-lesson: bias, fairness, and representation, linking to real-world examples. Studio: test outputs with varied prompts, compare perspectives, and record limitations. Checkpoint: a bias log (what changed, what stayed, what you did about it). For quick dilemmas to spark discussion, dip into phase-banded AI ethics dilemmas.

Day 4: Build and communicate. Enquiry question: How can we communicate our findings honestly and clearly? Mini-lesson: citations and source trails (and what to do when AI cannot cite). Studio: create the product (explainer, audit, prototype, investigation) with captions that distinguish ‘source-based facts’ from ‘our interpretation’. Checkpoint: draft product + citations page.

Day 5: Showcase and reflect. Enquiry question: What have we learned about the topic and about using AI responsibly? Mini-lesson: prompt hygiene and integrity (what to record; what not to do). Studio: rehearse, finalise evidence packs, and prepare Q&A. Checkpoint: showcase + personal reflection.

Enquiry questions bank

To keep this week subject-agnostic, offer prompts that work across curriculum areas and allow pupils to choose a context they care about. Useful starters include: What claim do people repeat that might be wrong? What does ‘good evidence’ look like here? What would change someone’s mind? What are the trade-offs? What would a fair solution consider?

For KS3, keep questions concrete and local: How does a school canteen choice affect health and waste? What makes a news headline misleading? How do recommendations shape what we watch and buy? For KS4, invite complexity: How should a community balance safety, privacy, and convenience? What does the evidence say about an intervention’s impact, and how strong is it? When does a ‘helpful’ system become discriminatory?

Mini-lessons (10–15 mins)

Keep these fast, practical, and repeatable. In ‘how AI works’, explain that models predict likely text based on patterns, not understanding. A simple classroom analogy helps: it is like an autocomplete engine trained on massive text, which can sound confident without being correct. Then show why that matters for research: it can generate leads, questions, and summaries, but it cannot replace checking.
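If a concrete demonstration helps the analogy land, the 'autocomplete' idea can be shown with a toy next-word predictor. This is a deliberately simplified sketch (a bigram counter over a made-up mini-corpus, not how modern models actually work internally), but it makes the core point visible: the prediction comes from patterns in the text it has seen, with no understanding attached.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then always suggest the most frequent follower. Real models are vastly
# larger and more sophisticated, but the underlying move -- predicting
# likely next text from observed patterns -- is the same.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "cat" -- the most frequent follower in this corpus
print(suggest("sat"))  # "on"
```

Pupils can extend the corpus and watch the suggestions change, which neatly demonstrates why outputs depend on training data, and why a confident-sounding suggestion is not the same as a correct one.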

For bias and fairness, use a short scenario: two pupils ask for ‘a typical scientist’ and get different descriptions depending on wording. Discuss how training data, stereotypes, and prompt framing shape outputs, then set a rule: pupils must test at least two alternative prompts and record differences.

For hallucinations, demonstrate with a safe, curriculum-neutral example: ask for a citation for a made-up statistic. When the tool invents a source, pupils learn the core habit: ‘If I can’t find it outside the AI, I can’t use it as a fact.’

For citations and source trails, teach a simple three-step routine: find a claim, trace it to a primary or reputable secondary source, and cite that source rather than the AI. If pupils use AI to locate sources, they still cite the website, book, report, or dataset they actually read. The evidence pack should show screenshots or copied links that prove the trail.

For prompt hygiene, emphasise boundaries: no personal data, no private school information, no copying in assessment tasks, and no prompts that ask for harmful instructions. Model a ‘clean prompt’ that includes purpose, audience, constraints, and a request for uncertainty (for example, ‘List what you’re unsure about’).

Project briefs (choose-one menu)

A menu keeps choice high but outcomes comparable. A research explainer asks pupils to answer an enquiry question with a one-page brief or a three-minute talk, supported by a source trail and a ‘what we checked’ section. A myth-busting audit starts with a popular claim from social media or playground talk; pupils test it, classify it (true/false/misleading/uncertain), and explain why.

A local impact case study examines how an AI-related issue plays out nearby: transport, retail, health messaging, school life, or community services. Pupils should include at least one stakeholder perspective and one potential unintended consequence. A prototype/service design asks pupils to design a simple service that uses AI responsibly, such as a homework planner that protects privacy, or a library search helper with bias checks. The key is not building software; it is designing rules, user journeys, and safeguards.

A media literacy investigation has pupils compare how different outlets report the same issue, then use AI to generate questions, identify missing context, and suggest what evidence would strengthen the story. If you want a parallel, lower-device-friendly project structure, the showcase approach in Easter AI learning project menu adapts neatly.

Differentiation

For KS3, provide narrower scopes, sentence starters, and a shorter evidence pack. For KS4, require stronger sourcing, a limitations paragraph, and an explicit evaluation of reliability. SEND scaffolds work best when they reduce executive load: a pre-formatted evidence pack, a ‘choose from’ question bank, and timed check-ins. EAL supports can include dual-language keywords, model answers that show structure, and permission to record oral explanations with brief written captions.

Low-device options are entirely workable if you treat AI like a station rather than a constant companion. One group can use AI to generate questions and counter-arguments while others read printed sources, extract quotes, and build the storyboard. Stretch tasks should deepen thinking, not just add volume: require pupils to compare two AI tools, test prompt variants systematically, or create a small ‘uncertainty map’ showing what is known, contested, and unknown.

Assessment

Assess the evidence pack, not the polish. Require: the original enquiry question and scope statement; a prompt log (selected prompts and outputs, with notes on what was used or rejected); an annotated source list with at least three non-AI sources; a verification page showing two claims checked; a bias log; and a short reflection on what AI helped with and what it hindered.

A simple rubric can weight: quality of enquiry (clear question and scope), evidence and verification (reliable sources, checks, honest uncertainty), reasoning (makes justified claims), communication (clear and audience-appropriate), and integrity (complete logs, appropriate use). Integrity checks can be light-touch: ask pupils to point to where a key fact came from, explain why they trust it, and show the note where they verified it.
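For departments that want the weighting made explicit (for example, in a markbook spreadsheet), the strands above can be combined into a single percentage. The weights below are illustrative assumptions, not a prescribed scheme; adjust them to match your own assessment policy.

```python
# Hypothetical weights for the five rubric strands; scores are 0-4 per strand.
WEIGHTS = {
    "enquiry": 0.20,        # quality of enquiry
    "evidence": 0.30,       # evidence and verification
    "reasoning": 0.20,      # justified claims
    "communication": 0.15,  # clear, audience-appropriate
    "integrity": 0.15,      # complete logs, appropriate use
}

def weighted_mark(scores, weights=WEIGHTS):
    """Combine per-strand scores (0-4) into a percentage."""
    if set(scores) != set(weights):
        raise ValueError("scores must cover every strand exactly")
    total = sum(scores[s] * weights[s] for s in weights)
    return round(100 * total / 4, 1)  # 4 is the maximum per strand

example = {"enquiry": 3, "evidence": 4, "reasoning": 3,
           "communication": 2, "integrity": 4}
print(weighted_mark(example))  # 82.5
```

Weighting evidence and verification most heavily reinforces the week's central message: the mark rewards checked, sourced thinking rather than polish.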

For quick conferencing, use a consistent script: ‘Show me your question and what you’ve ruled out. Pick one claim you’re using and walk me to the original source. What did AI get wrong or oversimplify? What have you done to check for bias or missing perspectives? What will you do next lesson?’

Showcase

Keep the showcase simple so it works with mixed access. A ‘gallery walk’ with A3 posters and QR codes to evidence packs is reliable. Alternatively, run a three-minute presentation carousel: one speaker, one evidence navigator, one Q&A lead. Audience questions should reward thinking, not theatrics: ‘Which source changed your mind?’, ‘What did you decide not to include, and why?’, ‘What would you need to know to be more confident?’, and ‘Who might disagree with you?’

Celebrate without the ‘AI wow-factor’ by praising verification, careful sourcing, and honest limitations. A small award set can reinforce values: ‘Best source trail’, ‘Best bias check’, ‘Most improved question’, and ‘Strongest uncertainty statement’.

After-action review

End with a short student reflection: what they learned about the topic, one habit they will keep when using AI, and one risk they will watch for. Collect staff notes too: where pupils got stuck (usually scoping, sourcing, or over-trusting confident outputs), which mini-lesson landed, and which brief produced the best evidence. Decide what to scale next half-term: perhaps a shared evidence-pack template across subjects, a common citation routine, or a rotating ‘verification station’ that becomes normal classroom practice.

May your next half-term start with sharper questions and stronger evidence. The Automated Education Team
