
It's been just over a month since OpenAI dropped its long-awaited GPT-5 large language model (LLM), and the model hasn't stopped spewing an astonishing number of strange falsehoods since.
From the AI experts at the Discovery Institute's Walter Bradley Center for Artificial Intelligence and irked Redditors on r/ChatGPTPro, to even OpenAI CEO Sam Altman himself, there's plenty of evidence to suggest that OpenAI's claim that GPT-5 boasts "PhD-level intelligence" comes with some serious asterisks.
In one Reddit post, a user reported that GPT-5 had been generating "wrong information on basic facts over half the time," and worried that without fact-checking, they might have missed even more hallucinations.
The Reddit user's experience highlights just how common it is for chatbots to hallucinate, which is AI-speak for confidently making stuff up. While the issue is far from exclusive to ChatGPT, OpenAI's latest LLM seems to have a particular penchant for BS — a reality that challenges the company's claim that GPT-5 hallucinates less than its predecessors.
In a recent blog post about hallucinations, in which OpenAI once again claimed that GPT-5 produces "significantly fewer" of them, the firm attempted to explain how and why these falsehoods occur.