
Why source evaluation is changing
In most classrooms, source evaluation used to mean checking books, websites and perhaps a documentary or two. Now, many students’ “first source” is an AI tool that can summarise, rewrite and even fabricate information with confidence.
This does not make AI the enemy of learning. Used well, it can be a powerful research assistant and thinking partner. But it does mean our existing lessons on “reliable websites” are no longer enough. Students need to evaluate:
- Traditional sources (books, articles, websites, videos)
- AI-generated outputs (text, images, code, explanations)
- The way these sources are combined, paraphrased and reused
This sits at the heart of AI literacy in schools: helping students understand what AI is good at, where it fails, and how to use it responsibly.
The goal is not to frighten students away from AI, nor to pretend it is infallible. Instead, we want them to develop a calm, habitual response: “Can I trust this? How do I know?”
From CRAAP to “Can I trust this?”
Many teachers use frameworks like CRAAP (Currency, Relevance, Authority, Accuracy, Purpose) or similar models. These are still useful, but the questions need updating for AI-shaped information flows.
Rather than teaching a new acronym, you might frame evaluation around one core question: “Can I trust this for this purpose?” Then unpack it with students:
Who made this and how?
Is it a named author, an organisation, a classmate, or an AI tool? If it is AI, what model or platform, and what prompt?
What evidence does it provide?
Does it give sources, links, data, examples, or is it just confident prose?
How might it be wrong or biased?
Could it be outdated, one-sided, hallucinated, or missing key perspectives?
How can I cross-check it?
What other sources can I use to confirm or challenge this information?
You can still map these back to CRAAP or your preferred framework if that helps students connect old and new practice. The key shift is that the process matters more than the acronym. Students should be able to narrate their thinking: “I checked who created it, looked for evidence, and compared it with two other sources.”
Teaching students to treat AI as a source
Many pupils experience AI tools as magic answer machines. Our job is to reframe them as fallible sources with particular strengths and weaknesses.
A simple classroom script works well:
“AI is like a very fast, very confident student who has read a lot but sometimes makes things up. You can ask for help, but you must always check its work.”
You might compare AI to:
- A rough first draft generator, not a final essay
- A brainstorming partner, not a textbook
- A calculator that sometimes mis-keys its own numbers
In practical terms, this means teaching students to:
- Label AI outputs clearly in their notes: “AI draft”, “AI summary of X”, “AI explanation of Y”
- Ask AI to show its working: “What are your sources?” “Give me links or references I can check.”
- Assume any citation from AI might be wrong until verified
- Use AI to find questions and angles, then research answers using verifiable sources
This connects strongly with conversations about when AI helps learning and when it harms it. Evaluation is one of the key habits that keeps AI in the "helps" column.
Practical routines for mixed sources
Students now often blend a textbook paragraph, a website, a YouTube video and an AI-generated explanation in one piece of work. They need routines that work across all of these.
Consider building in short, repeatable habits:
1. The “Stop and Label” pause
Whenever students gather information, ask them to pause and label each item in their notes:
- T = textbook or print source
- W = website or online article
- V = video or podcast
- A = AI-generated
This takes seconds but makes source evaluation visible. You can then ask, “How many A sources do you have compared with W and T? Do you need more variety?”
2. The “Two-check minimum”
Before using any key fact or explanation in a final product, students must confirm it with at least two independent sources. One may be AI, but not both. For example:
- AI explanation + textbook
- Website + AI summary of a research paper
- Two different websites from reputable organisations
3. The “Mismatch hunt”
Give students a short AI-generated paragraph on a topic they are studying, plus two or three traditional sources. Their task is to highlight where the AI:
- Misses important information
- Contradicts other sources
- Uses vague or unsourced claims
This turns hallucinations and errors into teachable moments, and it is engaging: students enjoy “catching the robot out”.
Designing assignments that show evaluation
If we want students to take evaluation seriously, it needs to be visible and assessed. This does not always mean extra marks, but it does mean structured expectations.
You might:
Require a short “Source Evaluation Note” with each assignment
Two or three sentences explaining how they chose and checked their sources, including any AI use.
Ask students to submit a screenshot or transcript of key AI interactions
They can annotate these, explaining what they accepted, rejected or verified.
Include a “Source Mix” requirement
For example, “At least one book or PDF, one reputable website, and no more than one AI-generated explanation per key point.”
For older students, you can tie this to discussions of academic integrity and AI use policies, connecting with ideas from "AI is not cheating". The focus shifts from "Did you use AI?" to "How did you use it, and did you evaluate it?"
Checklists and organisers by age
Simple tools make evaluation more concrete, especially for younger learners or those who struggle with abstract criteria.
For primary or early secondary students, a traffic-light checklist works well:
- Green: I know who made this. I can say when it was made.
- Amber: I can find at least one other source that says something similar.
- Red: I cannot tell who made this, or it disagrees with most other sources.
You might adapt the language:
- “Who said this?”
- “When did they say it?”
- “Can I find someone else who agrees?”
For older students, a one-page organiser can guide them through mixed sources. Columns might include:
- Source type (book, website, AI, video)
- Creator / platform
- Evidence given (data, references, examples)
- Possible problems (bias, missing voices, outdated)
- How I checked it (cross-checks, alternative sources, expert input)
These can be used repeatedly across subjects, from history to science to media studies, building a shared vocabulary and expectation.
Whole-school norms on citation and transparency
Evaluation is easier when the whole school treats AI as something to be used openly and critically, rather than secretly. This links closely with developing a shared approach to AI literacy across the school.
Consider agreeing some simple norms:
Students must always state if they used AI, and how
For example, “I used AI to brainstorm questions, then researched answers using books and websites.”
AI is cited as a tool, not as an author
For instance, “AI tool (ChatGPT), prompt: ‘Explain photosynthesis for Year 8’, accessed 10 March 2026.”
Teachers model this transparency
If you use AI to draft a worksheet or generate examples, say so, and explain how you checked and adapted the output.
Over time, this normalises the idea that AI is part of the research landscape, but not beyond scrutiny.
Quick start: three lessons to run
You do not need a whole new scheme of work to begin. Here are three compact lessons you can adapt for most age groups.
Lesson 1: Human vs AI vs Web
Students research a simple factual question (for example, “What causes the seasons?” or “What started the First World War?”) using:
- A textbook or teacher-provided article
- A reputable website
- An AI tool
They then compare the three, noting similarities, differences and missing details. Finish with a class discussion: “Which would you trust most, and why?”
Lesson 2: Fix the flawed AI answer
Provide an AI-generated answer that you know contains errors, oversimplifications or missing perspectives. Students must:
- Identify problems
- Use other sources to correct or expand the answer
- Present a revised version, with a brief explanation of their changes
This reinforces the idea of AI as a draft, not a destination.
Lesson 3: Show your evaluation trail
In a regular research task, add one requirement: students must submit a short “evaluation trail” alongside their work. This could be:
- A completed organiser
- Annotated screenshots
- A short reflection paragraph
You can keep marking light by scanning for patterns rather than grading every detail, but the routine itself builds powerful habits.
For more ideas on structuring research in an AI world, you might also explore the student research playbook, which looks at combining search tools and AI assistants effectively.
Teaching source evaluation in the AI era is not about abandoning what you already do; it is about extending it. By treating AI as one source among many, building simple routines, and making evaluation visible in students’ work, you help them become thoughtful, resilient learners in a noisy information world.
Happy evaluating!
The Automated Education Team