State of AI in UK Education: Sept 2024

A practical term-start briefing for UK school and college leaders

Where UK schools stand

September 2024 finds most UK schools somewhere between experimentation and hesitation with AI. A minority have clear strategies, piloted tools and staff training in place. Many more have pockets of enthusiastic use alongside unresolved questions about safeguarding, data protection, workload and academic integrity. Some are still operating informal “don’t ask, don’t tell” approaches to AI.

The good news is that you are not expected to have solved everything this term. The Department for Education’s guidance recognises that schools are at different stages and frames AI as an opportunity to be explored, not a requirement to implement at scale. At the same time, there is growing expectation that leaders can show they are thinking about AI in a structured, risk-aware way rather than leaving it to chance.

If you are still developing your approach, it may help to pair this briefing with two companion pieces: a more detailed look at policies and processes in creating your school’s AI acceptable use policy, and a practical September AI readiness checklist.

DfE guidance: what’s expected

The DfE’s position on AI in 2024 is advisory rather than statutory, but it still shapes what “reasonable” leadership looks like. It is crucial to distinguish between what you must do under existing law and what is currently recommended as good practice.

Legally, your obligations around AI are largely extensions of existing frameworks: data protection, safeguarding, equality, health and safety, and exam regulations. If an AI tool processes personal data, you must meet UK GDPR requirements, including lawful basis, data minimisation and appropriate contracts with providers. If AI is used in safeguarding workflows, you remain responsible for decisions and must ensure human oversight.

The DfE’s AI guidance does not mandate any specific tools or uses, but it does expect leaders to have considered AI within existing policies. This includes acceptable use, behaviour, assessment, remote learning and the staff code of conduct. You are not required to write an “AI policy” from scratch, but you should be able to show that AI has been integrated into your current framework in a thoughtful way.

In practice, this means documenting where AI is and is not permitted, clarifying that staff and students remain responsible for work produced with AI, and being explicit that AI outputs may be inaccurate or biased. The guidance also encourages schools to explore AI for workload reduction, but only where it is safe, ethical and aligned with your context.

Ofsted’s stance

Ofsted has been careful not to prescribe how schools should use AI, but inspectors are increasingly alert to its presence. They are not arriving with a checklist of AI tools you should be using. Instead, they are looking at whether your approach to AI is consistent with your broader leadership, safeguarding and curriculum decisions.

Inspectors are unlikely to ask, “What AI do you use?” in isolation. They are more likely to explore how you ensure pupils learn to think critically in a world where AI exists, how you protect them from harm when using technology, and how you support staff to use digital tools without undermining professional judgement. AI becomes part of wider questions about quality of education, personal development and leadership.

To evidence “responsible AI” in an inspection, you do not need glossy strategies or complex dashboards. You do need to show that you have assessed risks, set clear expectations and communicated them. Inspectors will expect staff to understand your stance on AI, know where it is allowed or restricted, and be able to describe how pupils are taught to use it appropriately.

Documentation helps, but lived practice matters more. A short, clear set of staff guidelines, consistent classroom routines around AI use, and a curriculum that explicitly addresses AI literacy are stronger evidence than a lengthy policy that few colleagues have read. For curriculum thinking, you may find it useful to connect this with broader work on AI literacy in schools.

Union positions and concerns

Unions have broadly welcomed the potential for AI to reduce workload, while warning against its misuse and the risk of increased surveillance or deprofessionalisation. They are particularly wary of AI being used to monitor staff performance, analyse lesson recordings or automate decisions about capability.

From a workload perspective, unions tend to support voluntary use of AI to help with planning, resources and admin, provided it is genuinely optional and does not become a new expectation. For example, a teacher might choose to use an AI tool to draft a scheme of work, but should not be judged negatively if they prefer not to.

Safeguarding and data protection remain key concerns. Unions emphasise the need for schools to risk-assess any AI tool before use, avoid uploading identifiable pupil data to external systems, and ensure staff understand the limits of AI-generated content. They also stress the importance of protecting professional autonomy: AI should support, not replace, teacher judgement.

For leaders, the implication is clear. Position AI as a set of optional tools within a human-led system. Avoid tying AI usage to performance management targets, and involve staff in decisions about which tools are trialled or adopted. Make it explicit that teachers are responsible for checking and adapting AI outputs, and that no one will be penalised for choosing not to use them.

Reconciling mixed messages

With DfE encouragement, Ofsted caution and union concern, it is easy to feel pulled in different directions. A simple decision framework can help you navigate this without paralysis. One way to think about AI decisions this term is through four questions: purpose, risk, control and evidence.

Start with purpose. What educational or organisational problem are you trying to solve? If the answer is vague, do not introduce AI yet. If it is specific, such as reducing admin time for reports or supporting EAL learners with vocabulary, you can evaluate tools more clearly.

Next, assess risk. Consider data sensitivity, safeguarding implications, potential bias and the impact of errors. A tool that drafts generic letters carries different risks to one that analyses behaviour data. For higher-risk uses, you will need stronger controls and clearer oversight.

Then consider control. Who remains accountable for decisions? Can staff override or ignore AI suggestions? Are pupils taught to question AI outputs? A responsible approach keeps humans firmly in charge, with AI as an assistant rather than an authority.

Finally, think about evidence. How will you know whether the AI use is effective and safe? Can you gather feedback from staff and pupils? Can you show inspectors and governors that you have monitored impact and adjusted accordingly?
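
To make the four questions concrete, the sketch below shows one way a leadership team might record a tool proposal as a structured checklist and derive a recommendation. It is purely illustrative: the field names, risk levels and decision rule are our own assumptions for the sake of example, not an official DfE or Ofsted rubric.

```python
from dataclasses import dataclass

# Illustrative only: the fields, risk levels and decision rule below
# are assumptions for this sketch, not an official assessment rubric.

@dataclass
class AIToolProposal:
    tool: str                  # hypothetical tool name
    purpose: str               # specific problem it is meant to solve
    data_sensitivity: str      # "low" | "medium" | "high"
    safeguarding_impact: str   # "low" | "medium" | "high"
    human_override: bool       # can staff ignore or override outputs?
    evidence_plan: str         # how effectiveness and safety will be monitored

def review(p: AIToolProposal) -> str:
    """Apply the purpose / risk / control / evidence questions in order."""
    if not p.purpose.strip():
        return "defer: no specific purpose identified yet"
    if "high" in (p.data_sensitivity, p.safeguarding_impact):
        return "escalate: needs a DPIA and SLT sign-off before any pilot"
    if not p.human_override:
        return "reject: humans must remain in charge of decisions"
    if not p.evidence_plan.strip():
        return "defer: agree how impact will be monitored first"
    return "pilot: proceed with a small, time-limited trial"

proposal = AIToolProposal(
    tool="LetterDrafter",  # hypothetical
    purpose="reduce admin time spent drafting routine parent letters",
    data_sensitivity="low",
    safeguarding_impact="low",
    human_override=True,
    evidence_plan="staff time-saved survey at the end of half term",
)
print(review(proposal))  # -> "pilot: proceed with a small, time-limited trial"
```

A checklist like this is deliberately boring: the value is not in the code but in forcing every proposal through the same four questions before anything is trialled.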

Priority actions for September

With limited time at the start of term, focus on a small set of high-impact actions rather than trying to do everything. First, ensure your existing policies explicitly reference AI where relevant. This may mean adding short sections to acceptable use, assessment and staff conduct policies. If you have not yet tackled this, a structured approach is outlined in creating your school’s AI acceptable use policy.

Second, issue clear, simple guidance to staff. A two-page briefing can clarify what tools are approved or banned, how to handle pupil work suspected of heavy AI use, and how to avoid uploading personal data to public AI systems. Include concrete examples, such as using AI to draft lesson outlines but not to mark extended writing without teacher review.
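
To illustrate the data-protection point, here is a minimal sketch of the kind of redaction step a school might require before any text is pasted into a public AI system. The patterns and placeholder format are assumptions for illustration only; a real deployment would need a properly risk-assessed scheme covering far more identifiers.

```python
import re

# Minimal illustrative sketch: these patterns are assumptions, not a
# complete or validated redaction scheme. Real identifiers (addresses,
# medical details, SEN records, etc.) need far more careful handling.

REDACTIONS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE OF BIRTH]"),  # dd/mm/yyyy
    (re.compile(r"\b[A-Z]\d{12}\b"), "[UPN]"),                  # 13-character pupil number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text: str, pupil_names: list[str]) -> str:
    """Replace known pupil names and common identifiers with placeholders."""
    for name in pupil_names:
        text = re.sub(re.escape(name), "[PUPIL]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Aisha Khan (DOB 04/09/2012, aisha.khan@example.com) needs a reading plan."
print(redact(note, pupil_names=["Aisha Khan"]))
# -> "[PUPIL] (DOB [DATE OF BIRTH], [EMAIL]) needs a reading plan."
```

Even a rough step like this makes the expectation tangible for staff: nothing identifiable leaves the school's systems, and the teacher remains responsible for checking what is sent.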

Third, plan at least one staff development session this term focused on AI literacy, ethics and practical classroom use. This should not be a sales pitch for particular products. Instead, help colleagues understand what AI can and cannot do, where it might save time, and how to maintain professional standards when using it.

Fourth, decide your stance on pupil use. Are older students allowed to use AI at home for research or drafting, and under what conditions? How will you teach them to reference AI assistance honestly and avoid plagiarism? This is also a good moment to revisit your approach to copyright, especially where AI-generated content is used in teaching materials. For a deeper dive, see the discussion in copyright and AI in schools.

Finally, communicate with parents and carers. A short note in your first newsletter explaining your approach to AI, including safeguards and educational aims, can pre-empt misunderstandings and build trust.

Looking ahead to 2025

While no one can predict the exact shape of future guidance, some trends are becoming clearer. It is likely that expectations around AI literacy in the curriculum will strengthen, particularly in secondary and post-16 education. Inspectors may increasingly ask how pupils are being prepared to live and work alongside AI, not just how they are protected from risks.

We can also expect more concrete examples and case studies from the DfE, as early adopters share what has worked. This may lead to clearer benchmarks for responsible AI use, even if full regulation remains some way off. At the same time, awarding bodies will continue to refine their rules around AI and assessment, especially for coursework and non-examined components.

For leaders, the most sustainable strategy is to build flexible, principle-based approaches rather than tool-specific rules. Focus on critical thinking, digital resilience, data protection and professional autonomy. If these foundations are strong, you will be able to adapt quickly as tools evolve and guidance tightens.

Above all, remember that you are not expected to have all the answers now. What matters this term is that you can show thoughtful leadership: clear boundaries, honest communication, and a willingness to learn alongside your staff and students. From that base, you can refine and extend your AI strategy as 2024–25 unfolds.

Happy term ahead!
The Automated Education Team
