Education

Meta Llama 4 decision pack for schools

July 24, 2025

Meta’s Llama models have made “open” AI feel within reach for schools, but the practical decision is rarely about hype. It is about governance, safeguarding, data protection, and whether your team can operate a model safely over time. This decision pack helps school leaders choose between adopting a vendor-hosted Llama 4 service, commissioning a managed private instance, self-hosting, or waiting. It includes minimum-data patterns, total cost of ownership factors, classroom-safe use cases, a clear decision matrix against proprietary models, and, if Llama 4 has not yet launched, a “Llama 4 watch” checklist.

End-of-Year AI Audit: Evidence Pack

May 29, 2025

An end-of-year AI audit helps schools move from scattered pilots to clear, defensible decisions. This guide shows how to produce an “evidence pack” for governors and SLT: a simple register of every AI trial, a keep/stop/scale decision for each, and the minimum evidence needed to justify it. You’ll also leave with a summer-ready action plan covering owners, timelines, procurement steps and policy updates. The aim is to protect staff time, improve pupil outcomes, and tighten safeguarding and data protection without slowing innovation.

AI event ops for Sports Day and trips

May 16, 2025

Sports Day and school trips are high-impact, high-risk events: lots of moving parts, tight timings, and many stakeholders. AI can help by generating first drafts of run sheets, staffing rotas, kit lists, accessibility adjustments and parent communications, saving hours of admin. But it cannot “know” your site, your pupils, or your policies, and it must never be allowed to automate safeguarding decisions. This article shares a copy-and-adapt workflow that keeps data minimal, builds in inclusion, and finishes with a structured human sign-off so nothing important is approved blindly.

Phase-banded AI ethics dilemmas toolkit

April 24, 2025

AI ethics can feel abstract, yet pupils meet its effects daily: recommendations, image filters, chatbots, and “too-good-to-be-true” videos. This phase-banded toolkit offers short, story-led dilemmas for Primary, KS3 and KS4, designed for tutor time, PSHE and computing without needing technical detail or real pupil data. Each scenario uses a consistent, safe discussion protocol that helps learners reason about fairness, privacy, consent, deepfakes and ownership.

One year of Sora: a classroom reality check

April 21, 2025

A year on from Sora-style video generation entering mainstream conversation, teachers are asking a practical question: what actually works in a classroom, and what still causes problems? This reality check focuses on the shifts you’ll notice most—better coherence, improved text handling, and more usable editing controls—alongside predictable failure modes like continuity glitches, broken physics, biased portrayals, and unsafe outputs. You’ll find low-stakes use cases, a media literacy sequence, safeguarding boundaries, workload-aware workflows, and a 30-day pilot plan with clear “keep/kill” criteria.

Minimum viable inclusion stack: SEND tech update

April 14, 2025

This term-ready update brings together what’s genuinely new (and useful) in built-in accessibility across Google, Microsoft, Apple and Chromebooks, then adds a practical layer: a “minimum viable inclusion stack” you can standardise across classrooms. You’ll also find ten low-effort AI micro-routines that scaffold learning without replacing it, plus guidance on making assistive tech and AI work together rather than clash. Finally, there’s a SEND-specific procurement and safeguarding checklist, and a simple two-week pilot plan with staff briefing points and a one-page parent/carer note.

From Autocomplete to Co-authoring

April 10, 2025

In 2024–2025, AI writing tools shifted from simple autocomplete to document-aware co-authoring spaces that can draft, rewrite and reorganise whole texts on command. That change has made “did they use AI?” the wrong question for assessment. Instead, teachers need routines that capture visible decision-making: prompt logs, revision rationales, source trails and short in-class checkpoints. This guide explains the new risks (over-polish, voice drift, hidden outsourcing) and offers practical ways to redesign writing instruction so students can use AI while still producing assessable evidence of thinking, craft and integrity.

GPT-5 release day school briefing

April 3, 2025

GPT-5 will arrive with headlines, hot takes and rapid product changes, but schools need a calm, repeatable way to judge what actually matters. This release-day protocol gives you a one-page briefing and a 60–90 minute comparative “bake-off” against your current model and workflows. You’ll test planning, feedback, accessibility, safeguarding and assessment using a minimum-safe environment, then make an adopt/pilot/park decision with clear evidence thresholds. It ends with the smallest policy tweaks leaders should make in week one, plus ready-to-send staff and parent messages that avoid hype.

Easter AI Learning Project Menu

March 24, 2025

This Easter-themed project menu offers short, family-friendly AI learning challenges that work even when devices are limited. Teachers can set clear, bounded outcomes while pupils choose a pathway that suits their age and interests. Each project has printable and offline variants, plus paired-role options for sharing a single device. You’ll also find a simple home–school safety script, minimum-data rules for privacy, and an optional mini showcase rubric that rewards process (planning, prompts, evidence, reflection) over polish.

AI Across the Curriculum: 8 Lesson Moves

March 18, 2025

“AI across the curriculum” works best when it is a small set of repeatable lesson moves, not a wholesale rewrite of schemes of work. This article offers eight AI-supported teaching moves you can drop into any subject, with quick prompts, teacher checks, and subject-specific examples. You’ll also find a copy-and-use one-page planning template, plus a single checklist covering safeguarding, privacy, accessibility and assessment integrity. The goal is simple: better learning habits, clearer evidence, and consistent boundaries—without tool sprawl.

End-of-Term Grading: A Batch Marking Pipeline

March 17, 2025

End-of-term grading can feel like a sprint you didn’t train for. Used well, AI can reduce the admin burden without becoming a grade-decider. This article offers a practical “batch marking pipeline” that keeps teachers firmly in control: how to structure anonymised evidence packs, generate rubric-aligned comment banks, run consistency and bias checks, and produce student-facing next steps. The focus is on minimum-data prompting, clear boundaries, and repeatable routines that support reliable, fair grading while respecting data protection.

Student Perspectives on AI in Class

February 27, 2025

“Student voice on AI” should do more than collect opinions. Done well, it protects trust, surfaces equity issues, and produces practical classroom norms students understand and will follow. This post sets out a 2–3 week “student AI listening cycle” using a safe survey, small focus groups, and quick classroom trials. The goal is a one-page, student-authored AI classroom agreement plus a short set of policy-ready insights on assessment, privacy, trust, and access—without turning decision-making into a popularity contest.

Four-Channel Multimodal AI Playbook

February 24, 2025

Multimodal AI can feel messy in a classroom: pupils jump between text, images, audio and video, and teachers worry about privacy, plagiarism, and losing track of who did what. This playbook offers a repeatable “four-channel” routine that deliberately moves learning through text, image, audio and video—then back to text—so you gain accessibility and differentiation without sacrificing assessment integrity. You’ll find quick set-up guidance, prompt frames that travel across subjects, six ready-to-run lesson moves, and practical safeguards that keep control with the teacher.

AI Analytics for MIS Early Intervention

February 19, 2025

Many schools already hold rich attainment, behaviour and attendance data in their MIS, but it is often messy, inconsistent and hard to act on quickly. This practical blueprint shows how to integrate AI analytics in a sensible, governed way, turning existing data into a small set of trustworthy early-intervention signals. It focuses on standardisation, transparent indicators, and human sign-off, rather than black-box “risk scores”. You’ll also find clear prompts for data protection, fairness checks and a low-workload rollout plan.

What GPT-5 Might Mean for Schools

February 17, 2025

“GPT-5” is less a single product announcement and more a stress test for how ready schools are to procure, govern and use fast-improving AI safely. This article maps plausible capability jumps—longer context, stronger reasoning, more reliable multimodal understanding and early agentic actions—to everyday school processes that could be disrupted. It then offers a practical 30/60/90-day readiness plan, plus a leader-friendly checklist to update policy, vendor questions, staff training and classroom routines without committing to any one tool.