Hallucination (AI)
We say a generative AI model hallucinates when it produces false or invented information while presenting it with confidence — a fabricated citation, code that calls a non-existent API, a made-up historical fact.
The phenomenon is inherent to LLMs, which generate text by statistically predicting the next token rather than by reasoning over verified facts. Hallucinations worsen outside the training distribution and on niche topics.
The usual countermeasures are RAG (grounding generation in trustworthy sources), post-hoc verification (a second call that validates facts), explicit source citation, prompts that allow the model to say "I don't know", and continuous evaluation of outputs.
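
A minimal sketch of how two of these countermeasures can be combined: grounding the prompt in retrieved sources and asking a second call to verify the draft. The `call_llm` function is a hypothetical placeholder for whatever model client the application actually uses, and the keyword retrieval is deliberately naive (a real system would query a vector index).

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g. an HTTP client)."""
    raise NotImplementedError("wire this to your model provider")


def retrieve_sources(question: str, corpus: list[str], k: int = 3) -> list[str]:
    """Naive keyword overlap retrieval; a real RAG system would use embeddings."""
    terms = set(question.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(terms & set(doc.lower().split())))
    return scored[:k]


def grounded_answer(question: str, corpus: list[str]) -> str:
    """RAG-style prompt: answer only from the retrieved sources, cite them,
    and explicitly allow the model to say it does not know."""
    sources = retrieve_sources(question, corpus)
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(sources))
    prompt = (
        "Answer using ONLY the numbered sources below and cite the source "
        "number for each claim. If the sources do not contain the answer, "
        "reply exactly: I don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)


def verify_answer(question: str, answer: str, sources: list[str]) -> str:
    """Post-hoc verification: a second call checks the draft against the sources."""
    prompt = (
        "You are a fact checker. Given the sources, the question and a draft "
        "answer, reply SUPPORTED or UNSUPPORTED.\n\n"
        f"Sources:\n" + "\n".join(sources) + "\n\n"
        f"Question: {question}\nDraft answer: {answer}\nVerdict:"
    )
    return call_llm(prompt)
```

The exact prompt wording, retrieval method, and verification criteria vary by application; the point is the structure: retrieve, constrain the answer to the retrieved context, permit abstention, then check the output with a separate pass.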
