In artificial intelligence, hallucination is when an AI model generates information that sounds plausible but is false, misleading, or entirely made up. This often happens with large language models (LLMs) when they produce text that isn’t grounded in real data or accurate sources.
In the context of SearchRovr, hallucination can occur if the AI responds to a user’s question with an answer not actually found in the indexed site content. To prevent this, SearchRovr uses Retrieval-Augmented Generation (RAG) and citation-backed answers to keep responses tightly aligned with your actual content.
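To make the RAG idea concrete, here is a minimal sketch of the pattern: retrieve the most relevant indexed passages, then constrain the model to answer only from those passages and cite them. The names and data here (`INDEX`, `retrieve`, `build_prompt`, the example URLs) are illustrative assumptions for the sketch, not SearchRovr's actual API or index format.

```python
# Minimal RAG sketch: retrieve indexed passages, then build a prompt that
# restricts the model to that retrieved content and requires citations.
# All names and sample data below are hypothetical, not SearchRovr's API.

from dataclasses import dataclass


@dataclass
class Passage:
    url: str   # page the passage was indexed from (used for citations)
    text: str  # the indexed content itself


# Toy "index": a real system would use a vector store built from crawled
# site content rather than an in-memory list.
INDEX = [
    Passage("https://example.com/pricing", "The Pro plan costs $29 per month."),
    Passage("https://example.com/faq", "Refunds are available within 30 days."),
]


def retrieve(question: str, k: int = 2) -> list[Passage]:
    """Rank indexed passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        INDEX,
        key=lambda p: len(q_words & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, passages: list[Passage]) -> str:
    """Constrain the model to the retrieved content and require citations."""
    context = "\n".join(
        f"[{i + 1}] ({p.url}) {p.text}" for i, p in enumerate(passages)
    )
    return (
        "Answer using ONLY the sources below. If the answer is not in the "
        "sources, say you don't know. Cite sources by number.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    question = "How much does the Pro plan cost?"
    passages = retrieve(question)
    # This prompt would be sent to the LLM; the grounding instruction plus
    # the numbered sources are what keep the answer citation-backed.
    print(build_prompt(question, passages))
```

The key design point is that the model never sees the question alone: it always receives the retrieved site content alongside an instruction to refuse when the answer isn't there, which is what keeps responses anchored to what was actually indexed.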