
World Mental Health Day
World Mental Health Day (10 October) often brings assemblies, tutor-time themes and posters. Those matter, but the lasting impact usually comes from systems: how pupils ask for help, how adults notice patterns, and how quickly concerns move to the right people. AI can support those systems, but it cannot replace them. If your school is considering chatbots for wellbeing, treat them as a “copilot” for simple, low-risk support and smoother workflows, not as a counsellor.
This article is an implementation guide you can use this week. It assumes a whole-school stance: clear rules, consistent language, and a shared understanding of what the bot will not do. If you’re also refreshing your wider governance, it pairs well with an annual AI acceptable use policy refresh, so your wellbeing work sits within an agreed safety framework.
What AI can do
AI is useful where the stakes are low and the next step is clear. In practice, that means helping pupils put feelings into words, offering generic coping ideas (sleep routine, breathing, study breaks), and signposting to trusted, human support routes. It also means helping staff reduce admin load: drafting pastoral communications, organising signposting resources, or preparing scripts for difficult conversations.
What AI cannot do is assess risk, interpret complex context, or hold responsibility. It cannot reliably distinguish “I’m fine” from “I’m masking”, or detect coercion, abuse, or imminent harm. It also cannot promise confidentiality. If a pupil uses a chatbot as if it were therapy, you have already drifted into unsafe territory.
Define the copilot model
A “wellbeing copilot” is a constrained chatbot experience designed to do three things: offer a brief check-in, provide safe self-care suggestions, and point the user towards human help. The scope is deliberately narrow. It should repeatedly remind pupils that it is not a counsellor and that the school’s safeguarding systems are the right route for serious concerns.
Crucially, humans remain responsible at every point. The copilot can help a pupil decide what to say to a trusted adult, but it does not decide whether a situation is urgent. It can help a tutor plan a five-minute wellbeing routine, but it does not replace professional judgement. If you want a practical way to embed that “humans sign off” habit across school life, borrow the same principle used in operational AI planning, such as AI event ops workflows with human sign-off, and apply it to pastoral processes.
Safeguarding non-negotiables
Before you launch anything pupil-facing, write down the non-negotiables and train staff to enforce them consistently. Your copilot should have hard-coded boundaries that trigger immediate signposting and, where appropriate, escalation.
At minimum, agree three layers of response. First, everyday feelings: stress about homework, friendship worries, low mood after an argument. The copilot can offer simple strategies and encourage speaking to a tutor or pastoral lead. Second, “concerning but not explicit” statements: “I can’t cope”, “I don’t want to be here”, “everything feels pointless”. The copilot should switch to a more directive tone, encourage contacting a named adult now, and provide clear steps for reaching help in school. Third, explicit high-risk content: self-harm, suicidal intent, abuse disclosures, violence, or threats. The copilot must refuse to continue as normal, provide urgent signposting, and instruct the pupil to seek immediate adult help.
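If your platform lets you express these layers as configuration, a simple ordered routing rule is usually enough. The sketch below is illustrative only: the phrase lists, tier names and route_message function are assumptions, the keyword matching is deliberately crude, and none of it replaces the judgement of your safeguarding team or your vendor’s own moderation tooling.

```python
# Illustrative sketch of a three-tier routing rule for a wellbeing copilot.
# Phrase lists and tier names are placeholders; keyword matching is naive by
# design and is no substitute for human safeguarding judgement.

HIGH_RISK = ["self-harm", "suicide", "kill myself", "abuse", "hurt me"]
CONCERNING = ["can't cope", "don't want to be here", "pointless", "no one cares"]

def route_message(text: str) -> str:
    """Return the response tier for a pupil message: 'urgent', 'directive' or 'everyday'."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in HIGH_RISK):
        # Tier 3: refuse to continue as normal, signpost urgently, prompt adult help now.
        return "urgent"
    if any(phrase in lowered for phrase in CONCERNING):
        # Tier 2: switch to a directive tone and name a specific adult to contact today.
        return "directive"
    # Tier 1: everyday feelings - generic coping ideas plus routine signposting.
    return "everyday"
```

In practice, the “directive” and “urgent” tiers should map onto fixed scripts agreed with your designated safeguarding lead, not free-form model output.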
Record-keeping needs equal clarity. Decide what is recorded, by whom, and where. A safe default is that the copilot does not create pastoral records automatically. Instead, it encourages the pupil to contact a trusted adult, and staff record concerns through existing safeguarding channels. If you do allow staff to log anonymised themes (for example, “exam stress spikes in Year 11”), keep it aggregated and non-identifying.
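If anonymised theme logging is allowed, the whole record can be nothing more than a count. A minimal sketch, assuming staff (not the bot) supply the theme labels:

```python
from collections import Counter

# Aggregated, non-identifying counts only: no names, no free text, nothing pupil-level.
theme_counts = Counter()

def log_theme(year_group: str, theme: str) -> None:
    """Add one to a count such as ('Year 11', 'exam stress'); no individual record is created."""
    theme_counts[(year_group, theme)] += 1
```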
For staff training, it can help to run a short INSET micro-routine: practise the escalation language, practise what to do when a pupil shows a transcript, and practise how to respond if a pupil says, “The bot told me…”. The structure in an INSET day AI workshop with safety protocols adapts well to this kind of rehearsal.
Data protection by design
A wellbeing copilot should be built around data minimisation. In plain terms: collect as little as possible, keep it for as short a time as possible, and avoid anything that would make a child identifiable.
Start with inputs. Do not ask for names, dates of birth, tutor groups, addresses, phone numbers, or any unique identifiers. Avoid free-text fields that encourage “tell me everything” disclosures. Use short, structured check-ins such as “Pick a word for how you feel” or “Choose a stress level from 1–5”, then offer generic guidance. If you offer an optional free-text box, add a prominent warning: “Do not include personal details.”
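If it helps to pin this down with your provider, the check-in can be specified as a small, fixed schema. The sketch below is a Python illustration under assumed field names, option lists and warning text, not a vendor specification.

```python
# Sketch of a data-minimised check-in: fixed choices, no identifiers, and a
# visible warning on the single optional free-text field. All names and values
# here are assumptions to adapt to your own setting.
from dataclasses import dataclass

MOOD_WORDS = ["calm", "okay", "tired", "worried", "stressed", "low"]
FREE_TEXT_WARNING = "Do not include personal details such as names, places or contact information."

@dataclass
class CheckIn:
    mood: str             # chosen from MOOD_WORDS, not typed freely
    stress_level: int     # chosen from a 1-5 scale
    note: str = ""        # optional, always shown alongside FREE_TEXT_WARNING

    def __post_init__(self) -> None:
        if self.mood not in MOOD_WORDS:
            raise ValueError("mood must be picked from the fixed list")
        if not 1 <= self.stress_level <= 5:
            raise ValueError("stress level must be between 1 and 5")
```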
Retention should be short and explicit. If you can, switch off conversation history. If you cannot, set a tight retention period and document it. Make sure staff know that screenshots and copy-and-pastes create new data risks. Vendor checks matter too: where is data processed, what is used for training, what controls exist for children’s data, and what audit logs are available? If you’re reviewing suppliers more broadly, you may find it useful to align this with a procurement lens such as the UK schools AI tutoring procurement and safeguarding rubric, even though the use case is different.
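It can also help to write the retention decisions down in a form you can hand to a vendor and reference in your data protection paperwork. A minimal sketch, with assumed keys and values you would replace with your own answers:

```python
# Illustrative retention and logging defaults to confirm with your vendor.
# Keys and values are assumptions, not a product's actual settings.
RETENTION_POLICY = {
    "conversation_history": "off",       # prefer no stored transcripts at all
    "max_retention_days": 7,             # if history cannot be disabled, keep it short
    "used_for_model_training": False,    # confirm in the vendor contract
    "data_processing_region": "UK/EEA",  # confirm where data is processed
    "audit_logs_available": True,        # needed to evidence the policy is applied
}
```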
Student use cases
Student-facing use should stay “low-stakes, high-value”. A good test is: if the copilot disappeared tomorrow, would anyone be at risk? If the answer is yes, the use case is too critical.
One strong use case is a two-minute daily or weekly check-in that ends with a concrete next step. For example, a pupil chooses “anxious” and the copilot offers three options: a breathing exercise, a study plan for tonight, or a prompt to speak to an adult. Another is “help me ask for help”: the copilot helps a pupil draft a short message to a tutor, such as, “I’m finding break times hard this week and I’d like a quick chat.” A third is signposting: the copilot can describe what pastoral roles do in your school and how to access them, using language pupils actually understand.
If you are introducing this to younger year groups, integrate it into existing routines and expectations around safe tool use. The approach used in a Year 7 induction safe AI charter can help you set norms early: what to share, what not to share, and when to go straight to an adult.
Staff wellbeing use cases
Staff-facing use is often where you get quick wins without exposing pupils to extra risk. The key is “workload relief without oversharing”. Staff should not paste identifiable pupil information, case details, or sensitive safeguarding notes into a general chatbot.
Instead, use the copilot to draft generic resources and communications. A head of year might ask for three versions of a calm, supportive reminder about revision routines, then adapt it to their voice. A pastoral lead might generate a one-page signposting sheet that explains who to contact for anxiety, bullying, bereavement, or financial hardship, then check it against local services. Tutors can use it to plan a five-minute check-in script for World Mental Health Day that avoids triggering language and ends with clear support routes.
These uses also create consistency. When staff have shared scripts, pupils hear the same message from multiple adults, which builds trust.
Prompt patterns and scripts
The safest prompts are short, structured, and repetitive about boundaries. You are aiming for predictable responses, not “deep” conversations. Here are patterns you can adapt into your platform or staff guidance.
A safe check-in pattern is: “Ask me one question at a time. Keep it brief. Do not ask for personal details. At the end, give me one small next step and one school support option.” In tutor time, a pupil might use: “I feel stressed about friendships. Give me three simple coping ideas and suggest who at school I could speak to.” The response should stay general and end with signposting.
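If your platform supports a fixed system prompt, the check-in pattern can be locked in rather than retyped by pupils each time. The sketch below assumes a generic chat-style interface; the build_messages helper and the exact wording are placeholders, and the prompt itself should be reviewed by your pastoral team before use.

```python
# Sketch of the check-in pattern as a fixed system prompt. The wording is an
# assumption to adapt, and build_messages is a placeholder for whatever chat
# interface your platform actually provides.

CHECK_IN_SYSTEM_PROMPT = (
    "You are a brief wellbeing check-in assistant for a school. "
    "Ask one question at a time and keep replies short. "
    "Never ask for names, locations or other personal details. "
    "You are not a counsellor: remind the pupil of this if the conversation deepens. "
    "End every exchange with one small next step and one school support option."
)

def build_messages(pupil_message: str) -> list[dict]:
    """Pair the fixed boundary prompt with the pupil's message for a generic chat API."""
    return [
        {"role": "system", "content": CHECK_IN_SYSTEM_PROMPT},
        {"role": "user", "content": pupil_message},
    ]
```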
A signposting pattern is: “Explain the difference between a tutor, pastoral lead, counsellor, and safeguarding lead in simple terms. Then suggest who to contact for [issue].” This helps pupils navigate support without the bot acting as a gatekeeper.
You also need refusal scripts that do not feel punitive. A strong “I can’t help with that” pattern is: “If you are thinking about self-harm, suicide, abuse, or you are in danger, I can’t support you here. Please speak to a trusted adult now. If you are in school, go to [named place/role]. If you are not in school, contact local emergency services or a crisis line in your country.” Keep the language calm, direct, and consistent.
Do-not-do list
Your red lines should be written plainly and repeated in pupil-facing guidance. Do not allow the copilot to act as therapy, to diagnose, or to “keep secrets”. Do not use it to assess risk, to decide whether a disclosure is credible, or to replace professional support. Do not invite pupils to share detailed histories, names, locations, or identifying information. Do not build it as a reporting channel for abuse or self-harm; instead, signpost to your established safeguarding routes immediately. Do not present the bot as “always available” support for crisis moments, and do not suggest that talking to the bot is equivalent to talking to a trained adult.
Week one plan
In week one, prioritise clarity over cleverness. Start with a short communications message to staff, pupils, and families explaining what the copilot is for, what it is not for, and exactly how to get human help. Then run a 20–30 minute staff briefing: demonstrate the tool, practise the refusal and escalation language, and remind everyone of the “no identifiers” rule.
Next, pilot with a small group and monitor closely. Look for predictable failure modes: pupils testing boundaries, staff accidentally oversharing, or the bot giving overconfident advice. Set a daily review for the first week to tighten prompts, adjust signposting, and confirm that safeguarding routes are working as intended. If you want a simple rollout rhythm that keeps privacy as the default, the structure in a minimum viable back-to-school AI toolkit with privacy defaults can be repurposed for this wellbeing launch.
Finally, end the week by deciding whether to scale, pause, or revise. A wellbeing copilot is only worth keeping if it reduces friction to human support and reduces staff load, without creating new risk.
May your World Mental Health Day plans lead to calmer corridors and clearer support routes.
The Automated Education Team