Hallucination (AI)

We say a generative AI model hallucinates when it produces false or invented information while presenting it with confidence — a fabricated citation, code that calls a non-existent API, a made-up historical fact.

The phenomenon is inherent to LLMs, which generate text by statistically predicting the most likely next token rather than by reasoning over verified facts. It worsens outside the training distribution and on niche topics, where the model has little data to draw on.
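
To make the mechanism concrete, here is a toy sketch in Python; the candidate tokens and scores are invented for illustration, showing how emitting the statistically likeliest continuation can produce a confident wrong answer:

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution over candidate tokens.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores a model might assign to continuations of
# "The capital of Australia is". The model emits the likeliest token,
# not the verified fact: if "Sydney" co-occurred with "Australia" more
# often in training data, it wins, and it wins confidently.
logits = {"Canberra": 2.1, "Sydney": 2.8, "Melbourne": 1.4}
probs = softmax(logits)
print(max(probs, key=probs.get))  # -> Sydney (a hallucination)
```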

The usual countermeasures are RAG (grounding generation in trusted sources), post-hoc verification (a second call that fact-checks the first answer), explicit source citation, prompts that allow the model to say "I don't know", and continuous evaluation of outputs.
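
As a sketch of how grounding works in practice, here is a minimal RAG-style prompt builder in Python. The corpus, the naive keyword retrieval, and the prompt wording are all invented for the example; a real system would use embedding search over a trusted knowledge base and pass the prompt to an actual model client.

```python
CORPUS = [
    "Canberra is the capital city of Australia.",
    "Sydney is Australia's most populous city.",
    "Melbourne was Australia's capital before Canberra.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Naive relevance score: number of words shared with the query.
    words = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda p: -len(words & set(p.lower().split())))
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    # Grounding plus an explicit way out: both discourage fabrication.
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using ONLY the sources below. If they do not contain "
        'the answer, reply "I don\'t know".\n\n'
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What is the capital of Australia?"))
```

Note how this combines two of the listed countermeasures in a single prompt: generation is grounded in retrieved sources, and the model is explicitly permitted to say "I don't know" rather than invent an answer.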
