Hearing voices, or perceiving something as completely real when it does not exist: that is a typical hallucination.
In general, “hallucinations” refer to experiences where someone perceives something that isn’t actually present—like seeing, hearing, or sensing something that isn’t real. For example, a person might hear a voice speaking when no one is around or see shapes or patterns that don’t exist.
In the context of artificial intelligence, “hallucination” is the term used when an AI system generates incorrect or nonsensical outputs. For instance, if you ask an AI about a historical event and it fabricates a fact or cites a source that doesn’t exist, that’s considered a “hallucination.” It’s not that the AI is seeing things, but rather that it’s producing information that doesn’t correspond to reality.
AI systems like GPT or other large language models are trained on vast amounts of text data. During training, they learn patterns, correlations, and relationships between words and phrases. However, they don’t have direct access to factual information or a reliable way to verify their responses. Instead, they generate text based on the patterns they’ve observed in their training data.
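To make the pattern-following idea concrete, here is a deliberately tiny sketch: a bigram model, vastly simpler than a real LLM but driven by the same principle of continuing text from observed word statistics. The three-sentence corpus and every name in it are invented for this illustration.

```python
import random
from collections import defaultdict

# Toy illustration (not a real LLM): a bigram model "learns"
# word-to-word patterns from a tiny corpus and generates text by
# sampling those patterns. Nothing here checks whether the output
# is factually true; the corpus itself is invented for this sketch.
corpus = (
    "socrates was a greek philosopher . "
    "plato was a greek philosopher . "
    "socrates was born in athens ."
).split()

# Count which words follow which in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 5) -> str:
    """Sample a continuation purely from observed patterns."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("plato"))
# One possible output: "plato was born in athens ." -- fluent and
# pattern-consistent, yet unverified: the model can splice facts
# about Socrates onto Plato because it only follows word statistics.
```

That possible output is, in miniature, a hallucination: every transition was seen in the training data, but the combined statement was never checked against anything.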
AI models, especially large language models like GPT, generally do not have a built-in fact-checking mechanism by default. They don’t inherently cross-reference a trusted database or verify their outputs against authoritative sources. Instead, they rely on the patterns and associations learned from their training data to generate responses, and they have no independent way to confirm whether the information they produce is correct.
This means that while an AI might produce text that sounds plausible, it can still fabricate details, mix up facts, or present outdated or incorrect information. In some cases, developers can integrate additional fact-checking systems—like linking the AI to verified data sets or using external APIs for confirmation—but this isn’t always part of the core model.
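As a rough sketch of what such an add-on might look like, the snippet below checks each claim in a model’s output against a small trusted store and flags anything unsupported or contradicted. The `trusted_facts` store, the claim format, and the example output are all invented for this illustration; production systems would instead retrieve from curated databases or query external APIs.

```python
# Hedged sketch of one way developers bolt verification onto model
# output: extract claims as (subject, relation, value) triples, look
# each one up in a trusted reference, and flag anything unsupported.
trusted_facts = {
    ("socrates", "born_in"): "athens",
    ("socrates", "occupation"): "philosopher",
}

model_output = [
    ("socrates", "occupation", "philosopher"),
    ("socrates", "born_in", "sparta"),  # a fabricated claim
]

for subject, relation, value in model_output:
    reference = trusted_facts.get((subject, relation))
    if reference is None:
        print(f"UNVERIFIABLE: {subject} {relation} {value}")
    elif reference == value:
        print(f"SUPPORTED:    {subject} {relation} {value}")
    else:
        print(f"CONTRADICTED: {subject} {relation} {value} "
              f"(reference says {reference})")
```

The important design point is that the verification step lives outside the model: the model still generates freely, and a separate component decides what to trust.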
Reasoning and factual verification are related but distinct concepts. Let’s analyse each in turn.
Reasoning:
Reasoning is the process of drawing conclusions based on available information, logical relationships, or underlying principles. It involves interpreting data, identifying patterns, and applying logical steps to arrive at a conclusion. Reasoning doesn’t necessarily involve checking whether the information used is correct—it’s more about understanding relationships and making inferences.
For example, if you know that “all humans are mortal” and “Socrates is a human,” reasoning helps you conclude that “Socrates is mortal.” The inference is valid even if you never verify that Socrates was actually human.
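The mechanical nature of this step can be shown in a few lines of code. The premises below are just the two statements above encoded as data; the names and structure are chosen purely for illustration.

```python
# Minimal sketch of the syllogism as rule application. The point is
# that the inference step is purely mechanical: it never checks
# whether the premises themselves are true.
premises = {
    "rule": ("human", "mortal"),    # all humans are mortal
    "fact": ("socrates", "human"),  # Socrates is a human
}

entity, category = premises["fact"]
antecedent, consequent = premises["rule"]

if category == antecedent:
    # A valid inference from the premises, verified or not.
    print(f"{entity} is {consequent}")  # -> socrates is mortal
```

Swap the fact for ("socrates", "robot") and no conclusion is drawn, but nothing in the code ever asks whether either premise matches reality.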
Factual Verification:
Factual verification, on the other hand, is about determining whether a statement is true or false by comparing it against trusted sources or evidence. This process involves cross-referencing information with reliable data, records, or authorities to confirm accuracy. It’s not just about making logical connections—it’s about ensuring that each piece of information is grounded in truth.
For instance, if someone claims that “Socrates was born in 470 BCE,” factual verification would require you to check historical records or credible sources to confirm whether that birthdate is correct.
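A minimal sketch of that check, assuming a hypothetical `historical_records` lookup standing in for credible sources:

```python
# Sketch of factual verification as comparison against a trusted
# record. `historical_records` is a stand-in invented for this
# example; in practice it would be an encyclopedia, database, or
# other authoritative source.
historical_records = {"socrates": {"born": "470 BCE"}}

claim = ("socrates", "born", "470 BCE")

subject, field, value = claim
recorded = historical_records.get(subject, {}).get(field)
if recorded == value:
    print(f"claim checks out against the record: {value}")
else:
    print(f"claim not supported; record says {recorded!r}")
```

Unlike the reasoning sketch, no inference happens here at all; the entire job is comparing a statement against evidence.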
Key Difference:
Reasoning involves logical inference and deriving conclusions from premises.
Factual verification involves checking the accuracy of the premises themselves against evidence or trusted references.