
Introduction: a year with AI
By the end of 2024, AI is no longer the shiny new thing in many schools. The initial rush of experimentation has given way to something more revealing: the slow, honest verdict of everyday classroom life. Certain AI routines have become as normal as opening the register. Others, once loudly championed, have quietly disappeared from lesson plans and staff meetings.
This article looks back at the first full year of AI use through the eyes of practitioners. It does not re-explain tools or policies. Instead, it gathers what teachers say actually worked, what failed to survive contact with reality, and what they wish they had known at the start. If you are planning your AI strategy for 2025, these are the patterns worth paying attention to.
How we gathered perspectives
The reflections below are drawn from composite “mini case studies”, built from interviews, webinars, training sessions and written feedback from teachers in primary, secondary and further education across different systems. Names and details are blended to protect anonymity, but the situations will feel familiar.
These teachers were not selected because they are AI enthusiasts. Many describe themselves as “cautiously curious” or “pragmatic sceptics”. Most had at least a year of intermittent AI use, often alongside whole-school guidance such as an AI literacy focus or a September “AI readiness” push similar to those in our planning checklist.
Their stories cluster around a few clear themes: routines that stuck, workflows that faded, and lessons for the next cohort of schools starting their AI journey.
What actually stuck
Across phases and subjects, the most durable AI uses shared three features: they were quick, low-stakes and tightly connected to existing practice.
A Year 4 teacher in a large primary school described a simple routine that survived the whole year. Each Friday, she pasted her next week’s plans into an AI tool and asked for three differentiated exit questions per lesson. She did not accept them blindly. Instead, she skimmed, tweaked and occasionally discarded them. The key was that the AI reduced the “blank page” problem and saved her ten to fifteen minutes per day. The questions slotted neatly into lessons she was already confident with.
In a secondary history department, a head of department used AI to generate first-draft model essays at different grades. Teachers then rewrote them together in a department meeting, arguing over phrasing, structure and misconceptions. The AI drafts were never shown directly to pupils. Instead, they became a shared starting point for professional discussion and a bank of exemplars that felt realistic, not perfect.
Further education lecturers reported similar patterns. One vocational tutor used AI to convert specification language into plain-English glossaries and scenario-based questions tailored to different workplace contexts. Again, the routine was sustainable because it built on tasks she was already doing, just faster and with more variety.
What these cases share is a “human–AI co-pilot” approach, similar to the model described in our piece on co-pilot teaching: AI drafts, teachers decide, refine and own the final product.
Quiet failures that faded away
Not every promising idea survived past the first half term. Many of the quiet failures looked attractive in training sessions, but crumbled under real-world constraints.
A secondary science department tried to introduce AI-generated personalised worksheets for every pupil after each assessment. In theory, it was a perfect blend of data and differentiation. In practice, feeding in marks, checking outputs and formatting sheets for multiple classes became unmanageable. Teachers reported spending more time curating AI output than they had previously spent on targeted revision tasks. Within a term, they reverted to shared, teacher-designed revision booklets with a few AI-generated question banks in the background.
In a primary setting, one school attempted to use AI chatbots as a regular “learning buddy” for older pupils. Despite careful prompts and supervision, teachers found that pupils either over-relied on the chatbot for answers, or ignored it completely. Managing logins, device access and safeguarding checks added friction. By summer, the chatbot was used only occasionally, usually for specific research tasks with tight time limits and clear success criteria.
A common pattern in these failures was over-ambition: trying to automate entire workflows or learning experiences rather than using AI to streamline small, well-understood pieces of practice.
Patterns across phases
Phase differences mattered less than you might expect. What mattered more was the level of structure around AI use.
In primary schools, successful routines were almost always staff-facing. Teachers used AI for planning, resource adaptation and drafting parent communications, but kept AI away from pupils for most of the year. This allowed them to build confidence, test safeguards and align AI use with existing behaviour expectations before introducing pupil-facing tools in small, supervised pilots.
Secondary teachers were more likely to experiment with pupil use, especially in older year groups. However, the most sustainable approaches still involved clear boundaries: AI as a brainstorming partner for project ideas, a checker for grammar and clarity in extended writing, or a way to generate practice questions. Where AI use was loosely defined (“Use AI to help you with your homework”), it quickly drifted into shortcut territory.
In further education, lecturers highlighted the importance of linking AI use directly to workplace expectations. One business studies tutor framed AI as “the assistant you will probably have in your first job”. Students practised using AI to summarise reports, draft emails and prepare meeting notes, always with an emphasis on checking accuracy and tone. This alignment with employability made AI use feel purposeful rather than gimmicky.
Subject spotlights
Certain subjects reported particularly clear gains.
In languages, teachers used AI to generate short, level-appropriate dialogues on any topic the class suggested. Pupils then edited the dialogues to fix errors or adapt them to local contexts. The teacher retained control over key vocabulary and grammar, but gained a steady stream of fresh, relevant material.
In maths, the impact was more modest but still useful. Teachers used AI to create multiple variations of similar problems, allowing for quick low-stakes quizzes and extra practice. However, they noted that AI struggled with precise mathematical formatting and sometimes produced flawed solutions, reinforcing the need for careful checking.
Humanities teachers, especially in history and geography, benefited from AI’s ability to produce contrasting viewpoints or starter texts at different reading levels. One history teacher routinely asked AI to rewrite a complex source at three reading ages, then used these versions to scaffold whole-class analysis.
Across subjects, the pattern was consistent: AI worked best when it generated raw material that teachers then shaped, rather than finished resources ready to print.
Equity, access and safeguarding
Practitioners were candid about the challenges here. Equity issues surfaced in two main ways: uneven access to devices at home, and varying confidence in navigating AI safely.
Several secondary teachers noticed that pupils with reliable home access to AI tools developed a more fluent “prompting” style and sometimes produced more polished work. This raised questions about fairness in homework tasks. Some departments responded by shifting more extended writing into class time, where AI use could be guided and visible.
Safeguarding concerns were most acute where pupils interacted directly with AI chat tools. Schools that fared better tended to adopt a small set of approved tools, configure them cautiously, and provide explicit instruction on safe and ethical use as part of wider AI literacy work, echoing themes from our article on why AI literacy matters now.
Teachers also highlighted the risk of “hidden” AI use by staff. A few admitted pasting sensitive information into public tools in the rush of marking or report writing, then later realising the implications. The most effective response was not blame, but clear, practical guidance on what could and could not be shared, supported by safer, institution-approved alternatives.
If we were starting again
When asked what they would do differently, practitioners converged on a surprisingly small set of points.
Many wished they had started smaller. Instead of grand AI projects, they would begin with one or two staff-facing routines per department, such as generating practice questions or adapting texts for different reading levels. Once those were embedded, they would gradually expand.
Several emphasised the value of shared prompts and exemplars. Departments that created a simple “prompt bank” and agreed on how to use it saw more consistent, less frustrating experiences than those where everyone experimented alone.
Others stressed the importance of aligning AI use with existing teaching models. Where schools had already been exploring a co-teaching or co-planning approach, AI slotted in naturally as another planning partner, much like we explored in our piece on the human–AI co-pilot model.
Finally, many would invest earlier in staff training that focused on practice, not features: short, subject-specific sessions where teachers brought real planning or assessment tasks and worked through them with AI together.
A 2025 action plan
Looking ahead to 2025, the first year of widespread AI use offers a useful compass.
Schools can start by auditing which AI routines have genuinely stuck and why. From there, they can prune unsustainable workflows, consolidate what works and identify two or three new, carefully scoped pilots for the coming year.
It is also a good moment to revisit policies and readiness, perhaps using or adapting frameworks like our September AI readiness checklist or reflecting on the broader trajectory outlined in two years of AI in schools. The aim is not more AI, but better, more intentional AI use.
Crucially, any 2025 plan should build in time for staff to reflect, share and iterate. The most successful schools in year one treated AI as an ongoing professional conversation, not a one-off initiative.
Conclusion: towards sustainable practice
The first year of AI in classrooms has been messy, creative and, at times, exhausting. It has produced both genuine time-savers and quietly abandoned experiments. Yet clear patterns are emerging. AI works best when it is modest in scope, rooted in existing practice, and framed as a co-pilot rather than a replacement.
As you refine your approach for 2025, the most valuable data you have are the stories and instincts of your own staff. Listen carefully to what has actually stuck, what has quietly failed, and what teachers wish they had known at the start. That reflective loop, more than any single tool, will determine whether AI becomes a sustainable part of your educational practice.
Happy reflecting!
The Automated Education Team