English 110 - Smith-Tubiolo: Evaluating Sources

This guide provides students with recommended resources for research in English 110 with Nancy Smith-Tubiolo.

Strategies for Evaluating Sources

SIFT & PICK

What Makes an Information Source "Good"?

“Good” sources include those that provide complete, current, factual information, and/or credible arguments based on the information creator’s original research, expertise, and/or use of other reliable sources.

Whether a source is a good choice for you depends on your information needs and how you plan to use the source.

Evaluating Sources Using Lateral & Vertical Reading

The SIFT* & PICK approach to evaluating sources helps you select quality sources by practicing:

Lateral Reading (SIFT): fact-checking by examining other sources and internet fact-checking tools; and

Vertical Reading (PICK): examining the source itself to decide whether it is the best choice for your needs.

*The SIFT method was created by Mike Caulfield under a CC BY 4.0 International License.

SIFT

Stop

  • Check your emotions before engaging
  • Do you know and trust the author, publisher, publication, or website?
    • If not, use the following fact-checking strategies before reading, sharing, or using the source in your research

Investigate the source

  • Don’t focus on the source itself for now
  • Instead, read laterally
    • Learn about the source’s author, publisher, publication, website, etc. from other sources, such as Wikipedia

Find better coverage

  • Focus on the information rather than getting attached to a particular source
  • If you can’t determine whether a source is reliable, trade up for a higher quality source
  • Professional fact checkers build a list of sources they know they can trust

Trace claims to the original context

  • Identify whether the source is original or re-reporting
  • Consider what context might be missing in re-reporting
  • Go “upstream” to the original source
    • Was the version you saw accurate and complete?

PICK

Purpose / Genre / Type

  • Determine the type of source (book, article, website, social media post, etc.)
    • Why and how was it created? How was it reviewed before publication?
  • Determine the genre of the source (factual reporting, opinion, ad, satire, etc.)
  • Consider whether the type and genre are appropriate for your information needs

Information Relevance / Usefulness

  • Consider how well the content of the source addresses your specific information needs
    • Is it directly related to your topic?
    • How does it help you explore a research interest or develop an argument?

Creation Date

  • Determine when the source was first published or posted
    • Is the information in the source (including cited references) up-to-date?
  • Consider whether newer sources are available that would add important information

Knowledge-Building

  • Consider how this source relates to the body of knowledge on the topic
    • Does it echo other experts’ contributions? Does it challenge them in important ways?
    • Does this source contribute something new to the conversation?
  • Consider what voices or perspectives are missing or excluded from the conversation
    • Does this source represent an important missing voice or perspective on the topic?
    • Are other sources available that better include those voices or perspectives?
  • How does this source help you to build and share your own knowledge?

SIFT & PICK by Ellen Carey is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Last updated 4/11/23.

Evaluating Information Generated by AI

Can AI Lie?

Despite its name, AI (Artificial Intelligence) does not think, feel, or have intentions, so it cannot lie the way a human might. However, AI is capable of producing false, misleading, or inaccurate information.

What Are AI Hallucinations?

Generative AI tools such as Large Language Models frequently "hallucinate" false information that cannot be found in or explained by the information used to train the tool. Examples of AI hallucinations include fake people and data, and citations for sources that do not exist. Information that AI presents as fact is not necessarily accurate!

What About Google's AI Overview?

The "AI Overview" that might appear at the top of your Google search relies on generative AI. Google notes that "AI Overviews can and will make mistakes" and "may provide inaccurate or offensive information," and cautions users to "think critically about AI Overview responses" (see the information about AI Overview in Google's Help Center).

Can AI Produce Biased Information?

AI tools reflect the biases of the information used in their training. For example, if an AI image generator is trained on pictures of doctors that are mostly white men, it is likely to produce an image of a white male doctor when prompted for a picture of a doctor, even when the prompt does not specify race or gender.

How Do You Evaluate Information Produced by AI?

Always fact-check information that was produced by AI! You can use the same fact-checking and source selection strategies for AI that you use for human-created sources. For example:

  • Use the lateral reading strategies described in the "SIFT" portion of SIFT & PICK 
  • Verify information presented as fact by using other credible sources
  • Verify citations to make sure the sources actually exist
  • Ask a Librarian for help!

What is AI?

Artificial Intelligence

Artificial intelligence (AI) is technology that learns from data or experience and then applies that learning to a specific task or purpose.

Generative Artificial Intelligence

Generative artificial intelligence is technology that learns to recognize patterns in content used in its training (text, images, data, etc.) and then produces content that mimics those patterns. 

Large Language Models

Large Language Models (LLMs) are a type of generative AI that can understand and produce natural-sounding paragraphs of text. LLMs use probability to predict the next word in a sentence as they produce text, but they do not "understand" what they are saying the way a human would.
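
To make that last idea concrete, here is a deliberately tiny sketch in Python of "predicting the next word by probability." Everything in it, including the three-sentence practice corpus and the predict_next function, is invented for illustration; a real LLM is a neural network trained on enormous amounts of text, but the basic idea of choosing a likely next word is similar.

    import random
    from collections import defaultdict, Counter

    # Toy illustration only: the tiny "corpus" below is invented for this
    # example, and real LLMs do not work from a table of word-pair counts.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        """Pick a next word in proportion to how often it followed `word`."""
        counts = following[word]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate text one probable word at a time, starting from "the".
    text = ["the"]
    for _ in range(6):
        text.append(predict_next(text[-1]))
    print(" ".join(text))  # e.g. "the dog sat on the mat ."

Notice that the program never checks whether its output is true; it only produces word sequences that look statistically plausible. That is also why text generated by an LLM always needs fact-checking.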