Despite its name, AI (Artificial Intelligence) does not think, feel, or have intentions, so it cannot lie the way a human might. It can, however, produce false, misleading, or inaccurate information.
Generative AI tools such as Large Language Models (LLMs) frequently "hallucinate": they produce false information that cannot be found in or explained by the data used to train them. Examples of AI hallucinations include invented people, fabricated data, and citations for sources that do not exist. Information that AI presents as fact is not necessarily accurate!
AI tools also reflect the biases of the information used to train them. For example, if an AI image generator is trained mostly on pictures of doctors who are white men, it is likely to produce an image of a white male doctor when prompted for a picture of a doctor, even when the prompt does not specify race or gender.
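To see why this happens, consider a toy model: a "generator" that simply samples from the frequencies in its training data. Everything below (the labels and the 90/10 split) is an invented illustration, not data about any real system, but it shows how a skewed training set produces skewed outputs even when the prompt is neutral.

```python
import random
from collections import Counter

# Invented, deliberately skewed "training set": 90 of 100
# doctor images depict white men.
training_labels = (["white male doctor"] * 90
                   + ["doctor of another race or gender"] * 10)

def generate(prompt: str, data: list[str]) -> str:
    """A toy 'generator' that samples from the training
    distribution, ignoring everything the prompt leaves open."""
    return random.choice(data)

# The prompt never mentions race or gender, yet the outputs do.
outputs = [generate("a picture of a doctor", training_labels)
           for _ in range(1000)]
print(Counter(outputs))
# Expect roughly 900 of 1000 samples to be "white male doctor",
# mirroring the bias in the training data.
```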
You can use the same fact-checking and source-selection strategies for AI-generated content that you use for human-created sources. For example:
“Good” sources include those that provide complete, current, factual information and/or credible arguments based on the information creator’s original research, expertise, or use of other reliable sources.
Whether a source is a good choice for you depends on your information needs and how you plan to use the source.
The SIFT & PICK approach to evaluating sources helps you select quality sources by practicing:
Lateral Reading (SIFT): fact-checking a source by examining other sources and internet fact-checking tools (one such check is sketched below this list); and
Vertical Reading (PICK): examining the source itself to decide whether it is the best choice for your needs.
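One concrete lateral-reading check you can automate is verifying that a cited source exists at all. The sketch below is a minimal Python example, assuming the citation carries a DOI: it asks the public Crossref API (api.crossref.org) whether that DOI is registered. The DOI shown is only a placeholder, and a real check should also confirm that the returned title and authors match the citation.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered with Crossref.

    Hallucinated citations often carry DOIs that resolve
    nowhere; Crossref returns HTTP 404 for unregistered DOIs.
    """
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "citation-check/0.1"},  # polite API use
        timeout=10,
    )
    return resp.status_code == 200

# Placeholder DOI for illustration only.
doi = "10.1000/example.doi"
if doi_exists(doi):
    print("DOI is registered; now verify the title and authors match.")
else:
    print("DOI not found: the citation may be hallucinated.")
```

Note that a registered DOI attached to the wrong title is still a mismatch, so this check catches only the most obvious fabrications; the SIFT habit of reading laterally across several sources remains the main safeguard.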
SIFT & PICK by Ellen Carey is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Last updated 4/11/23.