Two Years of ChatGPT in Schools

From panic and bans to pragmatic, policy-aligned classroom use

[Image: a secondary teacher discussing AI use with students in class]

From shock to normal

When ChatGPT appeared in late 2022, it landed in schools like a surprise inspection. Overnight, teachers saw fluent essays written in seconds, homework that felt “too good”, and a wave of headlines about cheating. Two years on, the tone is noticeably different. ChatGPT has not destroyed education, nor has it magically fixed it. Instead, it has become another powerful, messy tool that schools are slowly learning to live with.

Across systems and countries, the trajectory has been remarkably similar: an initial phase of bans and moral panic, followed by quiet experimentation, and now a more pragmatic, policy-aligned integration. This “two-year report card” traces that shift across three domains: policy, practice and student outcomes. It also offers a simple maturity model so you can place your school on the adoption curve and plan what to do next.

For broader context on how AI is reshaping schooling, you may also find our overview of national trends useful.

Phase 1: panic and bans

November 2022 – summer 2023

In the first six to nine months, school responses were dominated by fear. The main concerns were plagiarism, loss of writing skills, and safeguarding. Many systems issued blanket bans on generative AI tools, often by blocking websites on school networks and updating behaviour policies to define AI-assisted work as cheating.

A typical scenario from this period: a Year 10 English teacher notices three near-identical homework essays, each with the same slightly odd phrasing and American spellings. Suspicion leads to a staff meeting, where leaders announce that “AI use in assessed work is strictly forbidden”. No guidance is offered on when, if ever, AI might be acceptable.

Yet even during this phase, quiet experimentation began. A science teacher might use ChatGPT to generate differentiated practice questions, but never mention it to colleagues. A languages teacher might test it for model answers or vocabulary lists. These early adopters often worked in isolation, with no policy cover and little training.

Policy documents, where they existed, tended to be reactive. Few schools had a dedicated AI or ChatGPT section; instead, references to “online tools” or “plagiarism” were stretched to cover emerging technologies. Leaders were understandably cautious, but this caution left many teachers unsure what was allowed.

Phase 2: pragmatic adoption

2023–24

By the second year, the conversation shifted. As the worst fears of “the end of homework” failed to materialise, and as teachers themselves began to see time-saving benefits, schools moved from prohibition to pragmatic adoption.

In many settings, the ban softened into “allowed with conditions”. Some schools piloted AI use in specific subjects or year groups. Others introduced explicit AI literacy lessons, helping students understand capabilities, limitations and ethical use. The tone of staff meetings changed from “How do we stop this?” to “How do we do this safely and fairly?”

A concrete vignette from this phase: a secondary school history department agrees that students may use ChatGPT for brainstorming and planning, but not for final drafts. Teachers demonstrate live how to critique AI outputs, spotting bias and factual errors. The school’s updated acceptable use policy includes a section on generative AI, co-written with staff and students, and aligned with assessment guidance. If you are writing or revising your own policy, our guide on creating your school’s AI acceptable use policy may be helpful.

Crucially, 2023–24 saw more alignment with assessment design. Rather than relying solely on detection tools, schools began to rethink tasks so that AI misuse would be less tempting and less effective, for example by emphasising process, oral explanation and in-class work. This shift towards “AI-resilient” assessment is explored further in our piece on designing AI-resilient assessments.

Policy shifts

Two years in, school rules around ChatGPT have generally moved through three stages:

  1. Implicit or blanket bans: AI use equated with cheating; little nuance about formative vs summative work.
  2. Conditional permission: Allowed for staff preparation; cautiously allowed for students in certain contexts; rules clarified but often inconsistent across departments.
  3. Integrated, living policies: Clear expectations for staff and students; AI use linked to curriculum, safeguarding and assessment policies; regular review cycles.

Leaders who have made the most progress tend to:

  • Involve students in policy discussions, especially around academic honesty.
  • Align AI rules with existing principles (integrity, fairness, inclusion) rather than treat AI as an entirely separate issue.
  • Provide explicit examples of acceptable and unacceptable use, not just abstract statements.

If your policy still lives mainly in corridor conversations and email threads, you are probably between stages one and two.

Classroom practice

What has actually changed in day-to-day teaching and learning?

In 2022–23, most AI use in classrooms was either covert (teachers using ChatGPT for planning) or crisis-driven (trying to catch AI-written essays). By 2023–24, we saw more intentional integration across at least four patterns:

  • Teacher productivity: Generating draft lesson plans, explanations at different reading levels, quiz questions and feedback comments. Teachers often edit heavily, but save time on first drafts.
  • Learning scaffolds: Students using AI to generate practice questions, vocabulary lists, or worked examples, especially in languages and STEM subjects.
  • Critical AI literacy: Lessons where the AI output itself becomes the object of study. For instance, students compare ChatGPT’s response to a historical question with textbook accounts, identifying omissions and bias.
  • Creative extension: Using AI as a brainstorming partner for projects, coding challenges or story writing, with clear rules about ownership and originality.

One Year 8 computing class, for example, used ChatGPT to suggest different algorithms for solving a simple problem, then tested and debugged them. The focus remained on conceptual understanding, not on the AI’s “cleverness”. This type of work aligns strongly with emerging frameworks for AI literacy in schools, which emphasise critique, control and creativity over passive consumption.
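To make that vignette concrete, here is the kind of task such a lesson might build on. This is a hypothetical sketch, not the school’s actual activity: two AI-suggested approaches to the same simple problem (checking whether a word is a palindrome), which students then run against shared test cases and debug if they disagree.

```python
# Two approaches a chatbot might suggest for checking whether a word
# is a palindrome. Students compare them, test them, and debug failures.

def is_palindrome_loop(word: str) -> bool:
    """Compare characters from both ends, moving inwards."""
    left, right = 0, len(word) - 1
    while left < right:
        if word[left] != word[right]:
            return False
        left += 1
        right -= 1
    return True

def is_palindrome_slice(word: str) -> bool:
    """Compare the word with its own reverse, using slicing."""
    return word == word[::-1]

# A shared set of test cases, including edge cases students often forget.
tests = ["level", "noon", "python", "", "a"]
for word in tests:
    assert is_palindrome_loop(word) == is_palindrome_slice(word)
    print(f"{word!r}: {is_palindrome_slice(word)}")
```

The learning sits in the comparison and the testing, not in the AI’s output: students have to explain why both approaches work, which they find clearer, and what the edge cases reveal.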


Student outcomes

Evidence on the impact of ChatGPT on student outcomes is still emerging and often mixed. Two years is a short time in educational research, and most studies so far are small-scale or context-specific. However, several patterns are becoming clearer.

On the positive side, there is early evidence that:

  • Students can benefit from “always-on” explanations and examples, especially where teacher time is stretched.
  • Struggling writers sometimes gain confidence when using AI for idea generation or structure, provided they are supported to maintain ownership of the final text.
  • When used deliberately, AI can help students practise metacognitive skills: planning, checking, and revising their own work against model responses.

On the risk side, concerns include:

  • Over-reliance on AI for low-effort answers, which can mask gaps in understanding.
  • Widening inequities if some students have access to powerful tools and devices at home while others do not.
  • Subtle erosion of original voice, particularly in extended writing, if students lean too heavily on AI phrasing.

At this stage, we can say that ChatGPT can support learning in specific conditions, but it does not automatically improve outcomes. The quality of task design, scaffolding and teacher oversight remains decisive.

Equity and unintended effects

Equity is the unresolved question of this two-year period. In many regions, the students who benefit most from ChatGPT are those who already have strong digital access and support at home. Meanwhile, some schools serving disadvantaged communities have adopted stricter bans, partly from legitimate safeguarding concerns and partly from lack of infrastructure.

Unintended consequences are also surfacing. For example, some students report anxiety about being falsely accused of using AI, particularly where detection tools are used bluntly. Others feel pressure to use AI because “everyone else is”, even when they would prefer to work independently.

Addressing these issues requires more than technical fixes. It demands open dialogue with students, transparent assessment practices, and deliberate efforts to ensure that AI-supported learning opportunities are not limited to the most privileged.

A two-year maturity model

To help you benchmark your school, consider this simple ChatGPT maturity model across policy, practice and outcomes. You may be at different stages in each area.

Stage 1 – Reactive

  • Policy: Implicit or blanket bans; AI mostly framed as a threat.
  • Practice: Little or no sanctioned use; isolated experimentation; focus on detection.
  • Outcomes: Limited understanding of impact; staff anxiety high.

Stage 2 – Exploratory

  • Policy: Basic guidelines; AI allowed in some contexts; rules vary by department.
  • Practice: Teachers use AI for planning; some structured student use; early AI literacy lessons.
  • Outcomes: Anecdotal benefits and concerns; some monitoring, but no systematic data.

Stage 3 – Embedded

  • Policy: Clear, regularly reviewed AI policy; aligned with assessment, safeguarding and digital strategy.
  • Practice: AI integrated into schemes of work where appropriate; staff trained; students taught to use AI critically.
  • Outcomes: Schools gather data on usage, attainment and equity; adjustments made based on evidence.

Quick self-audit

For each statement, rate your school from 1 (not at all true) to 5 (fully true):

  • We have a written, up-to-date policy on student and staff use of generative AI.
  • Teachers have received training on pedagogically sound uses of ChatGPT.
  • Students are explicitly taught when and how to use AI tools, and when not to.
  • Our assessment practices are designed with AI in mind, not against it.
  • We monitor AI’s impact on different student groups, including those with less digital access.

Your lowest-scoring items are your most urgent priorities for the next two years.
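If you gather these ratings from several staff members, even a very small script can surface the lowest-scoring statements. The sketch below is purely illustrative: the shortened statement labels and the scores are invented, and you could do the same tallying in a spreadsheet.

```python
# Hypothetical ratings (1-5) for each self-audit statement,
# collected from three staff members. All values are invented.
ratings = {
    "Written, up-to-date AI policy": [2, 3, 2],
    "Teacher training on sound uses": [4, 3, 4],
    "Students taught when (not) to use AI": [1, 2, 2],
    "Assessment designed with AI in mind": [3, 3, 2],
    "Impact monitored across student groups": [1, 1, 2],
}

# Sort statements by mean score, lowest first: these are the priorities.
priorities = sorted(ratings.items(), key=lambda kv: sum(kv[1]) / len(kv[1]))
for statement, scores in priorities:
    print(f"{sum(scores) / len(scores):.1f}  {statement}")
```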

Planning the next two years

Looking ahead to 2024–26, the question is no longer “Should we allow ChatGPT?” but “How do we use it wisely, fairly and sustainably?”

For leaders, priorities might include:

  • Embedding AI within whole-school digital strategy, not treating it as a bolt-on.
  • Providing ongoing professional development that goes beyond tool demonstrations to focus on pedagogy and ethics.
  • Establishing feedback loops: surveying staff and students, reviewing work samples, and updating policies annually.

For classroom teachers, practical next steps could be:

  • Choosing one or two specific, low-risk uses (for example, generating practice questions or model answers) and refining them.
  • Designing at least one unit where critical engagement with AI is explicit, not incidental.
  • Making your own AI use transparent to students, modelling ethical, reflective practice.

Key takeaways and reflection

Two years on, ChatGPT in schools has moved from shock to something closer to normal. The most successful schools are neither banning nor blindly embracing it; they are building thoughtful policies, redesigning tasks, and teaching students to think with and about AI.

To close, here is a simple reflection checklist you might discuss with your team:

  • Do we have shared language and expectations around AI use?
  • Are we designing learning and assessment that make productive, honest use of these tools possible?
  • How are we supporting students who have less access or confidence with technology?
  • What evidence are we gathering about impact, and how will we act on it?

The next two years will not be about whether AI exists in schools, but about the quality, equity and integrity of its use. Starting from where you are on the maturity model, small, deliberate steps can make a significant difference.

Happy integrating!
The Automated Education Team
