{"id":5191,"date":"2025-09-14T14:30:56","date_gmt":"2025-09-14T14:30:56","guid":{"rendered":"https:\/\/musictechohio.online\/site\/openai-mistake-hallucinations\/"},"modified":"2025-09-14T14:30:56","modified_gmt":"2025-09-14T14:30:56","slug":"openai-mistake-hallucinations","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/openai-mistake-hallucinations\/","title":{"rendered":"OpenAI Realizes It Made a Terrible Mistake"},"content":{"rendered":"<div>\n<div><img width=\"1200\" height=\"630\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/09\/openai-mistake-hallucinations.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"OpenAI claims to have figured out what's driving &quot;hallucinations,&quot; or AI models' strong tendency to make up answers that are incorrect.\" style=\"margin-bottom: 15px;\" decoding=\"async\" fetchpriority=\"high\"><\/div>\n<p>OpenAI claims to have figured out what&#8217;s driving &#8220;hallucinations,&#8221; or AI models&#8217; strong tendency to make up answers that are factually incorrect.<\/p>\n<p>It&#8217;s a major problem plaguing the entire industry, greatly undercutting the usefulness of the tech. What&#8217;s more, experts have found that the problem is <a href=\"https:\/\/futurism.com\/ai-industry-problem-smarter-hallucinating\">getting worse as AI models get more capable<\/a>.<\/p>\n<p>As a result, despite the astronomical expense of deploying them, frontier AI models are still prone to <a href=\"https:\/\/futurism.com\/gpt-5-huge-factual-errors\">making inaccurate claims<\/a>\u00a0when <a href=\"https:\/\/futurism.com\/ai-makes-up-answers\">faced with a prompt they don&#8217;t know the answer to<\/a>.<\/p>\n<p>Whether there&#8217;s a solution to the problem remains a hotly debated subject, with some experts arguing that hallucinations are <a href=\"https:\/\/futurism.com\/the-byte\/impossible-chatbots-stop-lying-experts\">intrinsic to the tech itself<\/a>. 
In other words, large language models may be a <a href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/current-ai-models-a-dead-end-for-human-level-intelligence-expert-survey-claims\">dead end<\/a> in our quest to develop AIs with a reliable grasp on factual claims.<\/p>\n<p>In a <a href=\"https:\/\/arxiv.org\/abs\/2509.04664\">paper<\/a> published last week, a team of OpenAI researchers attempted to come up with an explanation. They suggest that large language models hallucinate because, when they&#8217;re being trained, they&#8217;re incentivized to guess rather than admit they simply don&#8217;t know the answer.<\/p>\n<p>Hallucinations &#8220;persist due to the way most evaluations are graded \u2014 language models are optimized to be good test-takers, and guessing when uncertain improves test performance,&#8221; the paper reads.<\/p>\n<p>Conventionally, the output of an AI is graded in a binary way, rewarding it when it gives a correct response and penalizing it when it gives an incorrect one.<\/p>\n<p>In simple terms, guessing is rewarded over admitting ignorance, because a guess <em>might<\/em> be right, while an AI that says it doesn&#8217;t know the answer is graded as incorrect no matter what. Even a long-shot guess therefore carries a higher expected score than an abstention, which is guaranteed to earn nothing.<\/p>\n<p>As a result, through &#8220;natural statistical pressures,&#8221; LLMs become far more prone to hallucinating an answer than to &#8220;acknowledging uncertainty.&#8221;<\/p>\n<p>&#8220;Most scoreboards prioritize and rank models based on accuracy, but errors are worse than abstentions,&#8221; OpenAI wrote in an <a href=\"https:\/\/openai.com\/index\/why-language-models-hallucinate\/\">accompanying blog post<\/a>.<\/p>\n<p>In other words, OpenAI says that it and all its imitators across the industry have made a grave structural error in how they&#8217;ve been training AI.<\/p>\n<p>There&#8217;ll be a lot riding on whether the issue is correctable. 
OpenAI claims that &#8220;there is a straightforward fix&#8221; to the problem: &#8220;Penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty.&#8221;<\/p>\n<p>Going forward, evaluations need to ensure that &#8220;their scoring discourages guessing,&#8221; the blog post reads. &#8220;If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess.&#8221;<\/p>\n<p>&#8220;Simple modifications of mainstream evaluations can realign incentives, rewarding appropriate expressions of uncertainty rather than penalizing them,&#8221; the company&#8217;s researchers concluded in the paper. &#8220;This can remove barriers to the suppression of hallucinations, and open the door to future work on nuanced language models, e.g., with richer pragmatic competence.&#8221;<\/p>\n<p>How these adjustments to evaluations will play out in the real world remains to be seen. While the company claimed its latest GPT-5 model hallucinates less, users were <a href=\"https:\/\/futurism.com\/gpt-5-disaster\">left largely unimpressed<\/a>.<\/p>\n<p>For now, the AI industry will have to continue reckoning with the problem as it justifies tens of billions of dollars in capital expenditures and <a href=\"https:\/\/www.npr.org\/2024\/07\/12\/g-s1-9545\/ai-brings-soaring-emissions-for-google-and-microsoft-a-major-contributor-to-climate-change\">soaring emissions<\/a>.<\/p>\n<p>&#8220;Hallucinations remain a fundamental challenge for all large language models, but we are working hard to further reduce them,&#8221; OpenAI promised in its blog post.<\/p>\n<p><strong>More on hallucinations:<\/strong> <em><a href=\"https:\/\/futurism.com\/gpt-5-huge-factual-errors\">GPT-5 Is Making Huge Factual Errors, Users Say<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/openai-mistake-hallucinations\">OpenAI Realizes It Made a Terrible Mistake<\/a> appeared first on <a 
href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>OpenAI claims to have figured out what&#8217;s driving &#8220;hallucinations,&#8221; or AI models&#8217; strong tendency to make up answers that are factually incorrect. It&#8217;s a major problem plaguing the entire industry,&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,183,3733,1453,179],"tags":[],"class_list":["post-5191","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-generative-ai","category-hallucinations","category-large-language-models","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5191","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=5191"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5191\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=5191"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=5191"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=5191"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}