INSET AI Workshop: Three Micro-Routines

From awareness to agreed routines in one INSET

[Image: Staff collaborating during an INSET AI training workshop with a shared safety protocol on screen]

AI training on an INSET Day often lands in one of two places: inspiring but vague, or practical but inconsistent across teams. This workshop is designed to avoid both traps. By the end, staff will have three small, repeatable routines they can use tomorrow, aligned to policy and bound by a single safety protocol. If you want the broader “why now?” framing for staff who are still sceptical, pair this session with a short pre-read such as Summer term AI foundations sprint, then use the INSET time for decisions and practice.

Workshop goals

The goal is not to turn teachers into prompt engineers. It is to agree how your school will use AI today for three high-frequency tasks, with consistent guardrails and a shared language.

By the end of the session, staff will have produced: a one-page safety protocol; a prompt pack with school-approved examples; three micro-routines (planning, feedback preparation, parent/carer communications) with clear “human sign-off” points; and a 30-day implementation and evidence plan. Keep the bar deliberately low and repeatable. A micro-routine should fit on a sticky note.

Non-negotiables

Start with “what AI is for today—and what it is not”. In practical terms, AI is for drafting, structuring, simplifying, generating options, and preparing materials that a professional will check. It is not for uploading personal data, making safeguarding decisions, replacing teacher judgement, or producing final communications without review.

If your staff need a tight boundaries script for integrity and assessment contexts, borrow language from Exam season AI traffic-light boundaries and adapt it into your protocol, so expectations are consistent across classrooms and corridors.

Pre-INSET set-up

Do the boring bits before the day. If you leave accounts, access, and data rules until the workshop, you’ll lose momentum and create uneven practice.

Set up a “safe sandbox” environment: either a school-approved AI tool with an education/workspace setting, or a controlled set of devices where staff can practise using only non-sensitive inputs. Share minimum-data rules in advance. A simple version is: no pupil names, no unique identifiers, no medical or safeguarding details, and no copying whole pieces of pupil work into a public tool. Provide a short list of safe substitutes: “Pupil A”, “Year 8”, “reading age approx. 10”, “EAL beginner”, and anonymised excerpts that cannot be traced back to an individual.
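For schools comfortable with a little scripting, the minimum-data rules can even be turned into a quick pre-send check staff run on a draft prompt. This is an illustrative sketch, not a product feature: `check_prompt`, the keyword list, and the digit-run heuristic are all assumptions a school would adapt to its own rules.

```python
import re

# Illustrative placeholders: a school would maintain its own lists.
SENSITIVE_KEYWORDS = ["safeguarding", "medical", "diagnosis", "address"]

def check_prompt(draft: str, pupil_names: list[str]) -> list[str]:
    """Return a list of warnings; an empty list means the draft passed."""
    warnings = []
    for name in pupil_names:
        if re.search(rf"\b{re.escape(name)}\b", draft, re.IGNORECASE):
            warnings.append(f"Contains pupil name '{name}': use 'Pupil A' instead")
    for word in SENSITIVE_KEYWORDS:
        if word in draft.lower():
            warnings.append(f"Mentions '{word}': stop and follow the normal route")
    if re.search(r"\b\d{6,}\b", draft):  # long digit runs look like unique identifiers
        warnings.append("Contains a long number that may be a unique identifier")
    return warnings
```

Even a crude checker like this reinforces the habit: pause, scan, anonymise, then send.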

If you’re still choosing tools, avoid turning INSET into a product debate. Use a quick triage approach like AI assistant showdown: teacher triage to select one default tool for the pilot, plus a clear “if in doubt, don’t” rule.

Choose-your-length agenda

You can run this as 60, 90, or 120 minutes. The difference is not the outcomes; it’s how much practice and red-teaming you can fit in.

60 minutes focuses on agreement and a minimum viable routine: safety protocol (10), task triage (10), build the three micro-routines (25), evaluation and next steps (15). 90 minutes adds a prompt hygiene clinic and a short red-team cycle: safety protocol (10), task triage (10), prompt hygiene (15), build micro-routines (30), red-team (15), evaluation and next steps (10). 120 minutes gives time for modelling, revision, and tighter evidence planning: safety protocol (15), task triage (15), prompt hygiene (20), build micro-routines (35), red-team (20), evaluation and next steps (15).

If your school is already running a workload pilot, you can align the evidence plan to it using the structure from Teacher workload crisis: AI task map, so you’re not inventing new measures.

Deck outline

Below is a copy-and-edit slide sequence. Keep slides sparse and use speaker notes to maintain pace. Where timings differ, prioritise the activities over the “AI overview”.

Slide 1 (2 mins): Why today, why small. Speaker notes: “We are leaving with three routines we all use the same way. Small, safe, repeatable beats clever.”

Slide 2 (3 mins): Outcomes and artefacts. Notes: “Safety protocol, prompt pack, three micro-routines, 30-day plan.”

Slide 3 (5 mins): What AI is for / not for. Notes: “Drafting and options, not decisions. Support, not substitution.”

Slide 4 (7 mins): Shared safety protocol (preview). Notes: “One protocol for everyone. No ‘my version’.”

Slide 5 (10 mins): Activity 1, task triage. Notes: “Pick tasks that are frequent, low emotional stakes, and easy to check.”

Slide 6 (10–15 mins): Activity 2, prompt hygiene clinic. Notes: “Inputs, constraints, checks, versioning. We standardise how we ask.”

Slide 7 (25–35 mins): Activity 3, micro-routines. Notes: “Planning, feedback preparation, parent/carer comms. Decide sign-off points.”

Slide 8 (15–20 mins): Activity 4, red-team. Notes: “Assume failure. Find it before it finds you.”

Slide 9 (5 mins): Confidence check and artefact checklist. Notes: “If we can’t point to the artefacts, we haven’t finished.”

Slide 10 (5–10 mins): 30-day plan and evidence. Notes: “Owners, drop-ins, what we’ll measure, when we’ll review.”

Shared safety protocol

Keep the protocol to one page, written in plain language, and used everywhere. It should cover privacy, safeguarding, accuracy, copyright, and integrity.

For privacy, the rule is simple: only use the minimum necessary data, anonymise by default, and never paste anything you would not put on a public noticeboard. For safeguarding, AI can help you draft neutral language, but it cannot be the decision-maker. If a prompt touches safeguarding, staff stop and follow the normal safeguarding route. For accuracy, treat AI as a fast first draft: verify facts, check dates, and cross-check any claims. For copyright, avoid requesting or reproducing proprietary resources; instead, ask for original examples and cite sources you actually used. For integrity, be explicit about what is allowed for staff work and what is allowed for pupil work.

Staff scripts help consistency. For example: “I can use AI to draft, but I must check it against our policy and my professional judgement,” and “I will not enter personal data; I will anonymise and summarise instead.” If you need a quick protocol for evaluating new model features safely, adapt the process from GPT-5 release day school briefing so experimentation stays controlled.

Activity 1: Task triage

In pairs, staff list the five tasks they do most often that feel repetitive. Then they score each task quickly: frequency, time cost, risk level, and ease of checking. The sweet spot is high frequency, medium time cost, low risk, and easy to check. A concrete example: a teacher might choose “turning a unit overview into three lesson outlines” over “writing a sensitive safeguarding email”.

The output is a short, agreed shortlist of tasks worth piloting. This prevents AI use drifting into whatever is most novel.
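If a team wants to capture the triage digitally rather than on sticky notes, the scoring can be sketched in a few lines, assuming each criterion is rated 1–5. The double weighting on risk and the example ratings are illustrative choices, not part of the workshop design.

```python
# Triage sketch: higher score = better pilot candidate.
def triage_score(frequency, time_cost, risk, ease_of_checking):
    """Frequent, time-costly, easy-to-check tasks win; risk counts against."""
    return frequency + time_cost + ease_of_checking - 2 * risk

# Hypothetical ratings (frequency, time_cost, risk, ease_of_checking), each 1-5.
tasks = {
    "Unit overview to three lesson outlines": (5, 3, 1, 5),
    "Sensitive safeguarding email": (1, 3, 5, 1),
}
shortlist = sorted(tasks, key=lambda t: triage_score(*tasks[t]), reverse=True)
```

Ranking the scores reproduces the instinct in the example above: the lesson-outline task sits at the top of the shortlist, the safeguarding email at the bottom.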

Activity 2: Prompt hygiene clinic

This clinic standardises how staff “set up” an AI request so outputs are usable and safe. Model a simple structure: context, role, constraints, output format, and checks. Then practise improving weak prompts.

For instance, instead of “Write a lesson plan on fractions”, staff practise: “You are a maths teacher. Create a 50-minute lesson outline for introducing equivalent fractions to 11–12-year-olds. Assume mixed attainment and include one scaffolded worked example, two hinge questions, and an exit ticket. Use British English. Do not reference specific pupils. Provide the plan as headings with timings.” The final step is versioning: staff label prompts v1, v2, and note what changed, so good practice spreads.
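For staff who like a template, the context/role/constraints/format/checks structure can be sketched as a small helper that assembles and version-labels a prompt. `build_prompt` and its parameters are hypothetical, offered only to show how the parts fit together:

```python
# A minimal sketch: assemble a prompt from labelled parts so staff can
# version and compare them. Names and defaults are illustrative.
def build_prompt(role, task, constraints, output_format, version="v1"):
    parts = [
        f"You are {role}.",
        task,
        "Constraints: " + "; ".join(constraints) + ".",
        f"Output format: {output_format}.",
        "Do not reference specific pupils.",  # baked-in safety check
    ]
    return f"[{version}] " + " ".join(parts)

prompt = build_prompt(
    role="a maths teacher",
    task="Create a 50-minute lesson outline introducing equivalent fractions to 11-12-year-olds.",
    constraints=["mixed attainment", "one scaffolded worked example", "British English"],
    output_format="headings with timings",
)
```

Because the safety line is part of the template, it travels with every version rather than relying on memory.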

Activity 3: Build micro-routines

Now staff build the three routines as short sequences with explicit human sign-off points.

For planning, a micro-routine might be: define lesson objective and misconceptions; ask AI for three activity options and a short explanation; choose one; then the teacher checks alignment to curriculum, inclusion, and resources available. For feedback preparation, AI can generate comment banks linked to success criteria, or suggest next-step questions, but the teacher decides what applies and avoids generic praise. If you want a stronger evidence-first approach to writing and feedback, link this routine to From autocomplete to co-authoring, so staff keep learning and quality central.

For parent/carer communications, the routine should prioritise tone and clarity: staff draft bullet points first, then use AI to turn them into a concise message in plain language, with a final human check for sensitivity, accuracy, and policy compliance.

Activity 4: Red-team outputs

Red-teaming is a structured “try to break it” review. Give each group another group’s outputs and ask them to look for predictable failure modes: hallucinated facts, culturally insensitive phrasing, deficit language about pupils, overconfident tone, missing reasonable adjustments, or implied promises the school cannot keep.

A simple classroom example makes this real. If an AI draft email says, “Your child is falling behind,” the red-team might suggest a more inclusive, evidence-based line: “We’ve noticed your child is finding this unit challenging; here are two specific strategies we’re using and how you can support at home.” The point is not perfection; it is building a shared instinct for risk.

Evaluation and the 30-day plan

End the workshop with a quick confidence check (a one-minute self-rating and one sentence: “I feel confident using the safety protocol because…”). Then confirm the artefact checklist: each team submits the three micro-routines, plus at least six approved prompts (two per routine).

For impact, keep measures light but meaningful over 30 days: time saved on the chosen tasks, consistency of communications, and staff confidence. Add one quality measure per routine, such as “plans include a hinge question” or “parent messages meet our tone criteria”. If you already use review cycles, you can fold this into a termly reflection using Term 2 AI after-action review.

Follow-up implementation pack

Your follow-up pack should name owners and make support visible. Assign one “routine owner” per micro-routine to collect examples, answer questions, and update the prompt pack. Schedule two short drop-ins during the month, and one 20-minute share-back at the next staff meeting.

Evidence capture should be low-friction: a shared folder with anonymised before/after examples, a simple time log (two minutes weekly), and a short reflection prompt. If you want a tidy way to package evidence for leadership and governors, align your artefacts to the structure in End-of-year AI audit evidence pack, even if you’re not at year-end.

Appendix: Prompt pack, templates, FAQs

Include a copy/paste prompt pack staff can use immediately, plus templates for each micro-routine and short FAQs for staff and families. Keep FAQs calm and concrete: what the school uses AI for, what it never uses it for, how privacy is protected, and how staff check accuracy.

A small but powerful addition is a one-paragraph “family-facing” statement that mirrors your safety protocol in plain language. This reduces misunderstandings and reassures carers that professional judgement remains central.

Towards smoother, safer AI routines across your school.

The Automated Education Team
