Who is the author? Could their view be biased in any way?
Text or images generated by AI tools have no human author, but the tools are trained on materials created by humans, and those materials carry human biases. Unlike humans, AI tools cannot reliably distinguish biased material from unbiased material when drawing on information to construct their responses.
What was the intended audience?
Generative AI tools can be used to generate content for any audience based on the user’s prompt.
What is the intended purpose of the content? Was it created to inform, to make money, to entertain?
Generative AI tools can create convincing text and images that can be used to propagate many different ideas without any indication that the information or images may be false.
Where was it published? Was it in a scholarly publication, a website, or an organization page?
Generative AI has already been used to create content for websites and news outlets. Considering whether the source is scholarly, has a good reputation, and has a clear history of providing reliable information will help you determine whether the information you find is trustworthy or misleading.
Does it provide sources for the information?
Providing sources can be an indicator that an article, news outlet, or website is reliable. Following those links and citations to verify the information will help confirm that what you find is accurate.
Generative AI language models and chatbots such as ChatGPT have been shown to hallucinate, that is, to present completely unsubstantiated information. Because AI-generated text can also sound very confident, it can be difficult to determine which AI-generated information is trustworthy and which is not.
The lessons learned in distinguishing fake news from legitimate sources can also help when you interact with AI-generated content or decide whether any website should be trusted.