
Credible showcases
A student AI project showcase is most powerful when it proves learning, not just performance. AI can accelerate polish: cleaner slides, sharper images, smoother code. Without guardrails, judging drifts towards presentation skills and tool fluency, and away from reasoning, subject knowledge and ethical choices. A ‘credible’ showcase makes the invisible visible: what students intended, what they tried, what they checked, and what they changed.
If you’ve already experimented with evidence-first routines in writing, you’ll recognise the same principle here: process is the product. The difference is that a showcase invites a real audience, which raises the stakes for integrity, consent and safety. You can borrow structures from evidence-first writing instruction and apply them across subjects, from science explainers to community-history chatbots.
The best format is the one that matches your time, space and staffing, while still allowing every student to explain their thinking. An exhibition with table displays works well when projects are varied; a gallery walk suits classes that need shorter, structured interactions; lightning talks can be brilliant for older students who can rehearse and keep to time. Demo stations are ideal for interactive tools, but need tighter safeguarding. Digital portfolios help when physical space is limited, and hybrid events can include families who cannot attend in person.
A practical approach is to run two layers at once: a calm gallery walk for most projects, and a small number of scheduled demos in a supervised corner. If you’ve ever run student-led project weeks, the flow will feel familiar; the key difference is to make the evidence pack the ‘ticket’ to present, not an optional extra. For inspiration on student-led showcases with citations and bias discussion, see AI exploration week projects.
The ‘Evidence Pack’
Treat the evidence pack as non-negotiable and keep it to one page per project. One page forces clarity and makes moderation realistic. Students can still bring extra appendices, but judging should be possible from the one-pager plus a short conversation.
Structure the page under five headings: intent, process, verification, reflection and impact. ‘Intent’ is the problem and audience: for example, ‘a revision helper for Year 9 geography that explains command words’. ‘Process’ captures choices: tools used, prompts attempted, data sources, and the division of labour in a group. ‘Verification’ lists checks, such as factual cross-referencing, bias review, and testing against edge cases. ‘Reflection’ is what they would do differently and what they learned about the subject and the tool. ‘Impact’ is the real-world effect, even if small: a peer trial, a teacher review, or a change made after feedback.
This mirrors the kind of evidence pack you might already use for themed events; World Book Day AI evidence packs translate particularly well into creative projects where originality and attribution matter.
Showcase-ready artefacts
Students often ask, ‘What counts as evidence?’ Aim for artefacts that show decisions over time, not just final screenshots. A prompt trail is useful only if it includes intent and iteration. Encourage students to annotate prompts with why they changed them, what went wrong, and what they learned about the topic. A decision log can be as simple as five dated entries: ‘We swapped from image generation to an infographic because the first outputs stereotyped our community.’
Source trails matter even for ‘non-essay’ projects. If a group builds a climate explainer, ask for the two most influential sources and a note on how they checked them. If a group builds a revision quiz, ask for the specification points and where each question came from. Test cases are especially helpful for tools: ‘We tried three accents in our speech-to-text demo’, or ‘We tested the chatbot with a misconception question.’ Finally, ‘what we changed’ redrafts are gold. A pair of before/after paragraphs, code snippets, or storyboard frames, with the reason for the change, demonstrates learning far better than a perfect final product.
Where students have used AI to generate text or media, make it routine to show at least one human rewrite: what they kept, what they removed, and what they corrected. If you want a simple boundary script for students, adapt ideas from exam-season traffic-light AI boundaries so they can explain what was allowed and why.
Judging that rewards thinking
A moderation-friendly rubric should make ‘integrity and learning’ visible and scoreable. Keep categories broad, and anchor them with observable evidence from the one-page pack and a two-minute pupil explanation. A workable set is: process, domain knowledge, integrity, inclusion and communication.
Process covers iteration, decision-making and sensible tool use. Domain knowledge checks that the project is not just plausible-sounding output: students should be able to explain key concepts and answer a follow-up question. Integrity rewards transparent attribution, clear boundaries on AI use, and verification habits. Inclusion looks at accessibility of the product and the team process, such as roles that allow everyone to contribute. Communication assesses whether the student can explain the project to a non-expert without over-claiming what AI did.
To keep judging consistent, give judges a short script: one question for each criterion. For example, ‘Tell me one thing you verified and how’, or ‘What did you change after feedback?’ This also reduces bias towards confident speakers, because everyone is asked the same prompts. If you want a wider framework for AI use across subjects, AI across the curriculum lesson moves can help you align criteria with everyday classroom practice.
Safeguarding and consent
AI showcases often involve media, live demos and third-party tools, so safeguarding needs to be explicit. Start with permissions: decide what can be photographed, what can be filmed, and what can be shared online. Use clear lanyard stickers or table signs to indicate ‘photos allowed’ or ‘no photos’. For anonymisation, encourage students to blur faces in screenshots, remove full names from interfaces, and avoid displaying personal data in training sets or examples.
Live demos need a plan for unpredictable outputs. Where possible, switch to ‘guided demo mode’: pre-prepared inputs, offline copies, or screen recordings rather than live generation. If live generation is essential, use filters, teacher-controlled accounts, and a ‘pause’ protocol so a student can stop immediately if something inappropriate appears. Handling third-party tools also means checking terms of use, minimum age requirements, and whether accounts store content. If you are working with AI video or image tools, the media literacy and safety routines in Sora classroom reality-check workflows are a helpful model.
Equity and accessibility
A proof-of-learning showcase should not become a device showcase. Low-device options can still be rich: paper prototypes, storyboard panels, printed prompt trails, and ‘human-in-the-loop’ demonstrations where students role-play the system’s logic. Paired roles help, too. One student can lead the explanation while another manages the evidence pack and verification notes, then swap for the next visitor. Multimodal evidence matters for SEND-friendly participation: audio reflections, photographed whiteboard planning, or short captioned videos can replace long written accounts.
Build inclusion into the rubric so it is not an afterthought. For instance, reward projects that include alt text on images, clear fonts and contrast, simplified language options, or culturally responsive examples. If students are exploring ethics, bias and representation, you can also draw on scenarios from phase-banded AI ethics dilemmas to prompt reflection that is age-appropriate and concrete.
Running the event
Smooth events are mostly about roles and rhythm. Assign a lead for safeguarding, a lead for tech, and a timekeeper for talks. Give students simple signage templates: project title, ‘Ask me about…’, and a QR code to the digital portfolio if you are using one. Audience scripts help visitors interact well: a short card at the entrance can invite them to ask, ‘What did you verify?’ and ‘What would you improve next?’ That keeps the tone celebratory but thoughtful.
A simple risk checklist prevents last-minute panic: confirm consent status, check displays for personal data, ensure any logins are teacher-controlled, and test Wi‑Fi where demos will run. If you run multiple events in a year, you may find it helpful to adapt an operations workflow like AI event ops for trips and sports day, but tuned to AI-specific risks.
After the applause
Don’t let the learning evaporate when the tables are packed away. Capture evidence for reports by photographing the one-page packs (where permitted) and collecting a short student reflection: ‘One skill I improved’, ‘One integrity habit I used’, and ‘One next step.’ Provide feedback in the same language as the rubric so it connects to assessment, not just applause.
Finally, run a quick keep/stop/scale review with staff and a small student group. Which evidence pack sections worked? Where did verification feel tokenistic? Which safeguarding routines were easy to maintain? You can turn that into next year's plan using an end-of-year audit approach like keep/stop/scale evidence pack planning. Over time, your showcase becomes more than a celebration; it becomes a culture statement: in this school, we value careful thinking, honest process, and real impact.
Here's to confident, safe and evidence-rich showcases!
The Automated Education Team