
From policing to designing
By 2024, most teachers have discovered that trying to “catch” AI use with detection tools is a losing game. Detection is inconsistent, easy to evade, and risks falsely accusing students. The energy spent on policing could be better invested in designing assessments that make unacknowledged AI use pointless or, at least, unattractive.
We are moving from a world where originality meant “no copying” to one where originality must mean “my thinking, in my context, with transparent support”. This shift is explored further in AI is not automatically cheating, but the practical question remains: how do we redesign what we already assess?
The answer is not to abandon essays, projects or written responses. It is to rework them so that process is visible, context is specific, and students’ voices are indispensable.
What counts as original now?
When AI tools can produce competent paragraphs in seconds, “original” cannot simply mean “never seen these words before”. Instead, originality in 2024 needs to foreground:
- The student’s reasoning and decision-making
- Their ability to connect ideas to local or lived contexts
- Their capacity to critique, adapt or improve AI-generated material
- Their reflection on how they used tools, including AI, to get there
An AI system can help draft a paragraph on climate change. It cannot easily recount how last year’s floods affected the student’s own town, or explain why they chose one solution over another after comparing sources. Originality becomes less about the surface of the text and more about the thinking and evidence beneath it.
For a deeper dive into the limits of detection tools, you might also read AI detection accuracy: the evidence.
Patterns that invite AI shortcuts
Some assessment patterns almost beg students to paste the prompt straight into an AI tool. Three are especially common.
The first is the generic, decontextualised essay. Prompts such as “Discuss the causes of the French Revolution” or “Explain the impact of social media on teenagers” are exactly the kind of task AI handles well, especially when the assessment only values a polished final product.
The second is the single-shot submission. When students hand in one final piece with no drafts, checkpoints or oral follow-up, it becomes very hard to distinguish between their own work and something generated elsewhere.
The third is the overemphasis on surface features. If marks are heavily weighted towards spelling, formal tone and length, students quickly learn that a well-formatted AI answer scores better than a rough but genuinely thoughtful attempt.
None of these patterns is wrong in itself, but each is vulnerable. The challenge is to reshape them so that genuine engagement becomes the easiest path to success.
Principles for originality by design
“Originality by design” means building assessments where authentic thinking is structurally required. Several design principles help:
First, make the process assessable, not just the product. Give credit for planning notes, annotated sources, draft changes and reflections on tool use. The more visible the steps, the harder the whole task is to outsource.
Second, anchor tasks in specific contexts. Use local data, school events, classroom experiments or case studies that generic AI training data is unlikely to mirror exactly. Context does not have to be geographical; it might be tied to a particular text studied, a class survey, or a practical investigation.
Third, define explicit boundaries for AI use. For example, you might allow AI to help with idea generation or editing, but not for writing full paragraphs. Students then document what they did within those boundaries.
Finally, align rubrics with thinking, not polish. Reward analysis, judgement, connection-making and reflection more than flawless phrasing.
These ideas build on broader strategies in designing AI-resilient assessments, but here we focus on practical redesign of existing tasks.
Redesigning written tasks
Consider a traditional literature essay: “How does the author present conflict in the novel?” It is easy to feed this into an AI tool and receive a reasonable response.
To redesign, you might keep the core focus but change the shape of the task. Students could first select two short extracts where conflict is especially visible. In class, they annotate these by hand, focusing on language choices. For homework, they write a commentary that weaves close analysis of their chosen extracts with a brief reflection on how their own perspectives on conflict shaped their reading.
The assessment then includes the annotated extracts, a short planning sheet, the commentary, and a one-paragraph statement explaining whether and how they used AI. The final mark draws on all four components. AI can still help, but it cannot replace the student’s selection of extracts, their in-class annotations or their personal reflection.
In science, instead of “Explain photosynthesis”, a redesigned task might ask students to analyse data from a class plant-growth experiment, link it to the theory of photosynthesis, and suggest improvements for next year’s experiment. The local data and reference to specific class procedures make generic AI answers less useful.
Making process visible
Visible process is one of your strongest safeguards against inauthentic work. It also improves learning by encouraging metacognition.
You might build in short, low-stakes checkpoints: a research plan submitted in week one, a partial draft in week two, and a peer feedback session in week three. Each stage receives brief comments and perhaps a small proportion of the final mark. Students can use AI at certain points, but they must show how their work evolves over time.
A simple routine is the “three-layer draft”: handwritten planning notes, a first digital draft, and a final version. During a brief viva-style conversation, you ask students to explain one significant change between each layer. This makes it much easier to see who understands their own work.
Reflections on AI use can be short but structured. For instance, you might ask students to answer three questions: What did you ask the AI to do? What did you keep, change or reject, and why? How did AI use affect your understanding of the topic?
Leveraging context and personalisation
Tasks that draw on students’ experiences, local environments or class-specific activities are harder to fake convincingly.
In history, students might compare how a global event is portrayed in international media and in a local newspaper, then interview a family member about their memory of it. Their final piece weaves together these perspectives, citing specific articles and quotes that AI is unlikely to reproduce accurately.
In mathematics, rather than a generic investigation on “statistics in sport”, students could analyse data from their own school teams or a survey conducted in class. They present their findings in a short report and a five-minute presentation, responding to questions from peers. AI can help them structure the report, but it cannot attend the match or run the survey.
Personalisation does not mean asking students to share sensitive information. It means giving them choice over topics, examples or data sets, and grounding tasks in authentic situations.
For a broader view on preparing learners for an AI-rich world, see future-proofing students’ skills AI can’t replace.
Rubrics that value thinking
If your rubric mainly rewards structure, correctness and formal style, students will naturally seek tools that optimise those features. To encourage originality, rubrics need to foreground thinking.
You might include criteria such as:
- Quality of reasoning: Are claims supported with relevant evidence or examples?
- Depth of analysis: Does the student move beyond description to explanation or evaluation?
- Use of context: Does the work meaningfully integrate local data, class activities or chosen case studies?
- Reflection on process: Does the student thoughtfully explain their approach and tool use, including AI?
Polished prose still matters, but it becomes one criterion among several, not the main route to high marks. Sharing the rubric early in the unit helps students understand what you value.
Talking to students about AI
Clear, open conversations about AI are essential. Students need to know that using AI is not automatically wrong, but that undisclosed or over-reliant use undermines both integrity and learning.
You might start a unit by co-constructing an “AI use agreement” with your class. Together, you decide what counts as acceptable support (for example, brainstorming, grammar checking, alternative explanations) and what crosses the line (for example, submitting AI-written work as your own). Refer back to this agreement when introducing each new task.
It also helps to model your own use of AI. Show how you might use a tool to generate quiz questions, then critique its output and improve it. This demystifies AI and frames it as something to think with, not copy from.
Department and whole-school routines
Sustainable change requires shared routines, not heroic individual effort.
Departments might schedule an annual “assessment refresh” meeting where each teacher brings one key task to review through an AI lens. Together, you identify where process could be made more visible, where local context could be added, and where rubrics could be tweaked to emphasise thinking.
At whole-school level, leadership can support by providing simple guidance on AI use, professional learning on assessment design, and time for collaborative planning. A common language around originality, integrity and tool use reduces mixed messages between classes.
Schools may also decide which high-stakes tasks must be completed under controlled conditions and which can explicitly incorporate AI support. Being transparent about this balance helps students navigate expectations.
Quick-start checklist
You do not need to redesign everything at once. Choose one unit this term and try the following:
First, identify one major assessment that currently invites generic responses. Rewrite the prompt so it draws on specific class activities, local data or student choice.
Next, add at least two visible process checkpoints. These might be a planning sheet, annotated sources, or a partial draft with feedback.
Then, define and share clear AI use boundaries for this task. Decide what is allowed, what must be acknowledged, and what is not permitted.
Finally, adjust the rubric so that at least half the marks relate to reasoning, contextualisation and reflection, rather than surface polish.
After the unit, gather student feedback on the new design. Ask what felt fair, what helped them learn, and where AI use was genuinely useful. Use this to refine the next round of assessments.
Redefining originality in 2024 is not about outsmarting technology. It is about designing learning experiences where students’ thinking, voices and contexts are central, and where AI becomes a visible, bounded partner rather than a hidden shortcut.
Happy redesigning!
The Automated Education Team