“Good” sources include those that provide complete, current, factual information, and/or credible arguments based on the information creator’s original research, expertise, and/or use of other reliable sources.
Whether a source is a good choice for you depends on your information needs and how you plan to use the source.
The SIFT* & PICK approach to evaluating sources helps you select quality sources by practicing:
Lateral Reading (SIFT): fact-checking by examining other sources and internet fact-checking tools; and
Vertical Reading (PICK): examining the source itself to decide whether it is the best choice for your needs.
*The SIFT method was created by Mike Caulfield under a CC BY 4.0 International License.
SIFT & PICK by Ellen Carey is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Last updated 4/11/23.
Despite its name, AI (Artificial Intelligence) does not think, feel, or have intentions, so it cannot lie the way a human might. However, AI can still produce false, misleading, or inaccurate information.
Generative AI tools such as Large Language Models frequently "hallucinate" false information that cannot be found in or explained by the information used to train the AI tool. Examples of AI hallucinations include fake people and data, and citations for sources that do not exist. Information that AI presents as fact is not necessarily accurate!
The "AI Overview" that might appear at the top of your Google search relies on generative AI. Google notes that "AI Overviews can and will make mistakes" and "may provide inaccurate or offensive information," and cautions users to "think critically about AI Overview responses" (see the information about AI Overview in Google's Help Center).
AI tools reflect the bias of the information used in their training materials. For example, if an AI image generator is trained on pictures of doctors that are mostly white men, it will be likely to produce an image of a white male doctor when prompted for a picture of a doctor, even when the prompt does not specify race or gender.
Always fact-check information that was produced by AI! You can use the same fact-checking and source selection strategies for AI-generated content that you use for human-created sources.
Artificial intelligence (AI) is technology that learns how to learn, and then applies that knowledge to a specific task or purpose.
Generative artificial intelligence is technology that learns to recognize patterns in content used in its training (text, images, data, etc.) and then produces content that mimics those patterns.
Large Language Models (LLMs) are a type of generative AI that can process and produce natural-sounding paragraphs of text. LLMs use probability to predict the next word in a sentence as they produce text, but they do not "understand" what they are saying the way a human would.
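The next-word prediction described above can be illustrated with a toy sketch. This is not how a real LLM works internally (real models learn billions of statistical associations from training text, not a hand-written table); the words and probabilities below are invented purely for illustration:

```python
import random

# Hypothetical probability table: for each context word, the possible
# next words and how likely each one is. A real LLM learns these
# statistical patterns from its training text instead of a fixed table.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "data": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 0.6, "sat": 0.4},
}

def predict_next(word, rng):
    """Sample the next word according to the probability table."""
    options = next_word_probs.get(word)
    if options is None:
        return None  # no learned pattern for this word: stop generating
    words = list(options.keys())
    weights = list(options.values())
    return rng.choices(words, weights=weights, k=1)[0]

# Generate text one word at a time, always conditioning on the last word.
rng = random.Random(42)  # fixed seed so the run is repeatable
sentence = ["the"]
while (nxt := predict_next(sentence[-1], rng)) is not None:
    sentence.append(nxt)
print(" ".join(sentence))
```

Note that the program picks each word only because it is statistically likely to follow the previous one; nothing in the code checks whether the resulting sentence is true, which is one intuition for why LLMs can "hallucinate" fluent but false statements.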