
What teachers need
In 2025, the most useful question isn’t “Which assistant wins?” It’s “Which assistant helps me finish this task safely, with the least data, in the least time?” Teachers are juggling tighter safeguarding expectations, more diverse needs in every class, and growing pressure to evidence decisions. AI can help, but only if it fits real classroom workflows and respects the boundaries of professional responsibility.
Think of ChatGPT, Claude and Gemini as different colleagues. Sometimes you need speed and breadth; sometimes you need careful reasoning; and sometimes you need seamless handling of documents or images. The skill is triage: choosing the right tool for the job, feeding it the minimum information, and knowing exactly when to stop and take over. If you’re building routines that staff will actually keep using, it helps to frame AI as a set of repeatable “micro-processes” rather than a magic box. For more on that approach, see Building AI workflows that stick, and keep this article as your weekly playbook.
Minimum-data rules
Before any comparison, set the floor: the safest workflow is the one that never needed sensitive data in the first place. The minimum-data rule set below applies regardless of tool, plan, or device.
Never paste anything that identifies a child or adult. That includes names, photos, addresses, unique incidents, medical details, safeguarding notes, behaviour logs, SEND plans, or anything that could be pieced together to identify someone. Avoid uploading student work that contains names or distinctive personal references. If you must use real work for feedback practice, anonymise aggressively and change details. When in doubt, rewrite a short excerpt yourself and remove identifiers.
Instead, work with abstractions: year/age range, subject, topic, time available, class profile in general terms (“mixed attainment, 3 EAL learners, 2 students who need reduced writing load”), and the success criteria you intend to assess. If you want the AI to “see” a worksheet, describe it rather than upload it, unless your school has an approved account model and you understand what data is stored and where.
Evaluation rubric
To keep this grounded, judge each assistant against six classroom criteria: reliability, pedagogy fit, safeguarding, citations, speed, and cost. Reliability is not just “sounds confident”; it’s whether outputs stay aligned to your constraints across multiple turns. Pedagogy fit is whether it can express strategies you actually use—modelling, retrieval, checks for understanding, worked examples—without drifting into generic activities. Safeguarding is how well it avoids risky suggestions and supports cautious phrasing and escalation. Citations matter because hallucinated sources waste time and can damage trust. Speed is practical: can you get a usable draft during a lunch break? Cost is not only subscription price, but whether access works for staff at scale.
If you want a broader refresh of what’s changed recently, AI tools refresh 2025 gives a helpful landscape view. Here, we’ll stay focused on teacher triage.
Workflow 1: Lesson planning
Lesson planning is where teachers often over-share. You don’t need pupil names or prior incidents; you need the objective, time, constraints, and what good looks like.
A minimum-data workflow is to start with a clean brief: learning objective, prior knowledge assumptions, misconceptions to address, resources available (e.g., mini-whiteboards, textbooks, devices), and any broad class constraints. Then ask for a sequence with checkpoints, followed by a separate request for resource drafts (questions, model answers, hinge questions). Keep planning and resource generation in separate steps so you can sanity-check the structure before you generate materials.
A prompt pattern that works across tools is: “Plan → justify → adapt”. Ask for a plan, then ask it to justify the pedagogy briefly, then ask for two adaptations (shorter time, lower literacy demand). Claude is often strong at explaining reasoning and producing coherent sequences, especially if you request explicit modelling and worked examples; if you want that style, Claude extended thinking worked examples is worth a read. ChatGPT is typically fast at generating multiple variants and resource banks. Gemini can be particularly handy when your planning is tied to Google Workspace artefacts and you want a smoother jump between draft and document, and when multimodal inputs matter; see Google Gemini 2.0 multimodal classroom potential.
The hand-off point is non-negotiable: you must check curriculum alignment, appropriateness for your cohort, and factual accuracy. You also own the lesson’s assessment logic. If the AI suggests an activity, you decide whether it genuinely evidences the objective or merely fills time.
Workflow 2: Differentiation
Differentiation is where AI can save time, but it can also quietly lower expectations or produce unhelpful “simplifications”. The safest approach is to define the same success criteria for everyone and ask for access routes, not different goals (unless your context explicitly requires it).
A minimum-data workflow begins with the core task and success criteria in plain language, plus broad needs: reduced writing load, vocabulary support, chunking, extra challenge. Ask for scaffolds (sentence stems, partially completed examples), stretch (deeper prompts, constraints, extension problems), and language supports (key vocabulary with student-friendly definitions, visuals you can create). For SEND/EAL-friendly variants, request “same concept, fewer moving parts” rather than “easier work”.
A prompt pattern that works well is “Same goal, three routes”. Ask for three pathways: supported, core, stretch, each with identical success criteria and a quick teacher check question. Claude often shines when you ask for careful scaffolding and explicit cognitive load management. ChatGPT is strong for generating multiple versions quickly and offering alternative representations. Gemini can be useful when you’re adapting existing slides or resources already in your Google ecosystem, especially if you’re iterating in-doc.
The hand-off point is equity and dignity. You must review whether the supported route still respects the learner and whether the stretch route is meaningful rather than just “more”. You also need to check accessibility: font size, reading level, and whether any “support” accidentally gives away answers.
Workflow 3: Feedback
Feedback is high impact and high risk. AI can help you draft comment banks and align to success criteria, but it must not become a substitute for reading the work. The minimum-data approach is to avoid pasting whole scripts with names. Instead, use anonymised excerpts or, better, ask the AI to generate a comment bank mapped to your rubric.
A practical workflow is: write or paste your success criteria, common misconceptions, and the tone you want (warm, direct, specific). Ask for a bank of comments that each include “what went well”, “even better if”, and a next step. Then, when you mark, you select and lightly edit comments, adding one personalised sentence that only you can write because you saw the work.
A strong prompt pattern is “Criteria → examples → tone”. Provide criteria, then one or two anonymised example responses (short), then specify tone and length limits. ChatGPT is often effective for tone control and generating many options. Claude tends to produce thoughtful, less repetitive phrasing and can be good at keeping alignment to criteria. Gemini can be convenient if you are drafting comments inside a platform integrated with your documents, but you still need to be careful about what student text you input.
The hand-off point is professional judgement: you must verify the comment matches the work, avoid false praise, and ensure next steps are teachable. Never allow AI to infer reasons for performance (motivation, home context, needs). That is a safeguarding and ethics boundary, not just a quality issue.
Workflow 4: Safeguarding checks
AI is not a safeguarding decision-maker. What it can do is help you check your own writing for risky phrasing, missing escalation steps, or overly specific detail in a record. Treat it as a second pair of eyes for process, not for judgement.
A minimum-data workflow is to use templates and hypotheticals. Instead of pasting a real incident, describe it generically: “A student disclosed harm at home; I need to write a factual, neutral note and identify escalation steps.” Ask for a checklist of what a good record includes (time, date, exact words where possible, actions taken, who informed), and ask it to rewrite your draft to be factual and non-interpretive—after you remove identifiers.
A prompt pattern that helps is “Policy-first, then language”. Ask the AI to produce a neutral template and a “red flags” list, then ask it to review your anonymised draft for speculative language, judgemental descriptors, or missing actions. Claude is often strong at careful language and risk-aware tone. ChatGPT is quick for template generation and checklists. Gemini can be useful if your workflow lives in shared documents and you need consistent formatting, but again, only with anonymised text.
The hand-off point is immediate: any real concern must follow your school’s safeguarding policy and designated leads. AI cannot validate risk, decide thresholds, or replace logging procedures. If you are unsure, escalate to a human, not a model.
Workflow 5: Citations
Citations are where AI can waste time by inventing sources. The minimum-data workflow is to use AI for search terms, summaries of sources you already have, and formatting references—never as the sole source of truth.
A safe approach is: ask for a list of likely keywords and reputable organisations, then do the actual searching yourself in trusted databases or official sites. If you paste a source excerpt (from a report you already have), ask the AI to summarise it and suggest how to cite it in your preferred style. If it provides a citation, treat it as a draft and verify every element.
A prompt pattern is “No new sources”. Tell the assistant: “Do not invent references. If you cannot verify, say so.” ChatGPT is helpful for formatting and quick paraphrase checks. Claude is good at careful summaries and flagging uncertainty when prompted. Gemini can be efficient if you are working within a browser and document workflow, but you still need to click through and verify.
The hand-off point is verification. If you cannot locate the original source, do not cite it. If a claim matters, read the primary document.
Pricing and access
School access is often the hidden deciding factor. Free tiers can be useful for individual experimentation, but they may have tighter limits, fewer features, and less predictable availability. Paid plans can improve capacity and features, but procurement questions matter: account ownership, data retention, admin controls, audit logs, and whether staff can use a managed workspace identity.
When evaluating, ask: can we provision accounts centrally, can we restrict data sharing, can we control integrations, and can we support staff who use different devices? Also consider equity: if only a few staff can access paid features, does that create inconsistent practice? For a two-week trial, it may be better to standardise on one plan level for a small group than to run a messy mix.
Best use cases map
You can print the decision tree below and keep it near your desk. It won’t pick a “winner”; it will help you pick a workflow.
Weekly decision tree (printable)
If the task involves identifiable student information, don’t use any assistant. Anonymise, or complete the task without AI.
If the task is high-stakes (safeguarding, formal reporting, grades), use AI only for templates, neutral language checks, or criteria alignment, then hand off to a human decision-maker.
If you need careful step-by-step reasoning, structured explanations, or well-scaffolded worked examples, trial Claude first.
If you need fast iteration, multiple variants, tone-controlled drafts, or broad idea generation, trial ChatGPT first.
If you need tight integration with your documents, slides, or multimodal classroom materials, trial Gemini first, especially when you’re adapting resources already in your workflow.
Classroom prompt pack
For lesson planning, try: “Create a 50-minute lesson sequence for [topic] for [age range]. Objective: [objective]. Prior knowledge: [list]. Misconceptions: [list]. Resources: [list]. Include modelling, guided practice, independent practice, and two checks for understanding. After the plan, list the exact teacher questions I should ask.” Then add a human check: verify content accuracy and that each activity evidences the objective.
For differentiation, try: “Using the same success criteria, create supported/core/stretch versions of this task: [task]. Constraints: supported version must reduce writing, include sentence stems and a worked example; stretch must deepen thinking without adding length. Provide one quick diagnostic question for each.” Human check: confirm the supported route is not a different goal, and that stretch is genuinely deeper.
For feedback, try: “Here are the success criteria: [criteria]. Generate a comment bank of 12 comments: 4 for common strengths, 4 for common misconceptions, 4 for next steps. Each comment must be under 25 words, specific, and in a warm but professional tone.” Human check: only use comments you can evidence from the work.
For safeguarding language checks, try: “Rewrite this anonymised incident note to be factual, neutral, and free of interpretation. Keep it under 120 words. Then list any missing factual fields I should add (date/time, exact words, actions taken). Do not advise on risk thresholds.” Human check: follow your policy and escalate to designated leads as required.
For citations, try: “I will paste an excerpt from a source I already have. Summarise it in 3 bullet points and suggest how to cite it in Harvard style. If any citation details are missing, ask me for them rather than guessing.” Human check: locate and verify the original document before using the reference.
Implementation checklist
A two-week trial works best when it is boringly consistent. Choose two or three workflows you want to improve, pick one assistant as your default for each workflow, and write your minimum-data rules at the top of every prompt. Keep a simple log: time saved, quality rating, and any risks spotted. In week one, focus on planning and differentiation, because the stakes are lower and the benefits are immediate. In week two, add feedback language support and citations, keeping safeguarding strictly template-only.
At the end, decide what to keep by asking staff a practical question: “Would you use this on a Wednesday evening in week six?” If the answer is no, simplify the workflow, tighten the prompt, or drop the use case.
May your planning feel lighter and your professional judgement stay firmly in the driver’s seat.
The Automated Education Team