{"id":5385,"date":"2025-09-22T21:00:19","date_gmt":"2025-09-22T21:00:19","guid":{"rendered":"https:\/\/musictechohio.online\/site\/chatgpt-meltdown-specific-question\/"},"modified":"2025-09-22T21:00:19","modified_gmt":"2025-09-22T21:00:19","slug":"chatgpt-meltdown-specific-question","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/chatgpt-meltdown-specific-question\/","title":{"rendered":"ChatGPT Has a Stroke When You Ask It This Specific Question"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">Nearly two months since the release of GPT-5, an update to ChatGPT that was supposed to give it \u201c<a href=\"https:\/\/futurism.com\/ceo-deepmind-openai-phd-ai\">PhD level<\/a>\u201d intelligence and bring it one step closer to the industry\u2019s vaunted goal of artificial general intelligence (AGI), the OpenAI chatbot is still going bizarrely haywire over simple and completely innocuous inquiries.<\/p>\n<p class=\"article-paragraph skip\">On the ChatGPT subreddit, <a href=\"https:\/\/www.reddit.com\/r\/ChatGPT\/comments\/1nna3jm\/i_think_my_chat_gpt_is_having_a_stroke_lol\/\">fans of the AI bot noticed<\/a> that asking it if there\u2019s an \u201cNFL team whose name doesn\u2019t end with the letter \u2018s&#8217;\u201d sends it into a long-winded meltdown, allowing you to witness its superpowered \u201creasoning\u201d abilities turn to mush in real time.<\/p>\n<p class=\"article-paragraph skip\">\u201cYes \u2014 there are two NFL teams whose names don\u2019t end with an \u2018s,&#8217;\u201d ChatGPT says, before proceeding to list two teams that do. \u201cMiami Dolphins? <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" style=\"height: 1em;max-height: 1em\"> end with s. Green Bay Packers? 
<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" style=\"height: 1em;max-height: 1em\"> ends with s.\u201d<\/p>\n<p class=\"article-paragraph skip\">It\u2019s an odd way of addressing the question. ChatGPT correctly identifies that those names do end with s, but why bring them up? <\/p>\n<p class=\"article-paragraph skip\">Then it looks like it\u2019s cutting to the chase \u2014 at least, that is, until it goes off the rails yet again.<\/p>\n<p class=\"article-paragraph skip\">\u201cThe only two teams that don\u2019t end with \u2018s\u2019 are: Miami Dolphins <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" style=\"height: 1em;max-height: 1em\"> no (wait, ends with s),\u201d it says, catching its mistake. \u201cHold up, let\u2019s do this carefully. Actually, the correct answer is: Washington Commanders <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" style=\"height: 1em;max-height: 1em\"> ends with s. Chicago Bears <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" style=\"height: 1em;max-height: 1em\"> ends with s.\u201d<\/p>\n<p class=\"article-paragraph skip\">In the original example uploaded to Reddit, ChatGPT goes on like this for several more paragraphs. And it never arrives at the correct answer \u2014 that there <em>aren\u2019t<\/em> any teams that don\u2019t end in an \u201cs.\u201d<\/p>\n<p class=\"article-paragraph skip\">Like a high schooler hitting a word count, it peddles irrelevant details while teasing a conclusion. It also peppers in phrases to make it sound like it\u2019s actually doing some deep thinking. \u201cHold up, let\u2019s do this carefully,\u201d it says. 
Or \u201clet me do this systematically.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cThe <em>actual correct answer<\/em> is,\u201d ChatGPT says at one point, not realizing the shtick is getting old.<\/p>\n<p class=\"article-paragraph skip\">Eventually, it promises \u201cthe <em>correct answer<\/em> (for real this time).\u201d It says it will list \u201ctwo teams\u201d that don\u2019t end with \u201cs\u201d \u2014 before listing an additional three teams that do.<\/p>\n<p class=\"article-paragraph skip\">Other users posted examples where ChatGPT eventually gives the correct answer, but only after stringing the user along through a similarly delirious spiel. In our testing, it produced similarly bizarre results.<\/p>\n<p class=\"article-paragraph skip\">This is far from the first time the chatbot has been <a href=\"https:\/\/www.theguardian.com\/australia-news\/2025\/aug\/08\/openai-chatgpt-5-struggled-with-spelling-and-geography\" rel=\"nofollow\">foiled by<\/a> a <a href=\"https:\/\/www.inc.com\/kit-eaton\/how-many-rs-in-strawberry-this-ai-cant-tell-you.html\">simple question<\/a> \u2014 or even melted down in such an incredibly circuitous manner.<\/p>\n<p class=\"article-paragraph skip\">Earlier this month, for instance, fans noticed that <a href=\"https:\/\/futurism.com\/chatgpt-haywire-seahorse-emoji\">asking it if a mythical seahorse emoji existed<\/a> sent it spiraling into a crisis of logic. Despite the aquatic creature never being part of the official emoji dictionary, ChatGPT insisted it was real, exemplifying the absurd lengths AI is willing to go <a href=\"https:\/\/futurism.com\/openai-gpt5-more-sycophantic\">to please the user<\/a>. What\u2019s bending a few facts if the AI gets to come off as personable and human-like, convincing users that they should come back for more?<\/p>\n<p class=\"article-paragraph skip\">Sycophancy probably isn\u2019t the only factor to blame. 
GPT-5 is actually a tag team of a lightweight model for basic prompts and a heavy-duty \u201creasoning\u201d model for tougher questions. What\u2019s probably going on here is that the lightweight model is getting stuck with a question it can\u2019t really handle, instead of handing it off to its smarter cousin. This often <a href=\"https:\/\/futurism.com\/theory-why-gpt-5-sucks\">malfunctioning dynamic<\/a> is part of the reason why fans were left feeling disappointed \u2014 and in many cases, <a href=\"https:\/\/futurism.com\/gpt-5-disaster\">furious<\/a> \u2014 with GPT-5\u2019s launch (which was only exacerbated by OpenAI cutting off access to the old models that its customers had grown attached to, a decision it <a href=\"https:\/\/futurism.com\/openai-brings-back-4o-gpt-5\">soon reversed<\/a>).<\/p>\n<p class=\"article-paragraph skip\">In any case, it\u2019s a pretty thin excuse. If the AI needs to bust out its biggest guns to answer such a simple question, then maybe it\u2019s not on the fast track to surpassing human intelligence.<\/p>\n<p class=\"article-paragraph skip\"><strong>More on AI:<\/strong> <a href=\"https:\/\/futurism.com\/chatgpt-marriages-divorces\"><em>ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/artificial-intelligence\/chatgpt-meltdown-specific-question\">ChatGPT Has a Stroke When You Ask It This Specific Question<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Nearly two months since the release of GPT-5, an update to ChatGPT that was supposed to give it \u201cPhD level\u201d intelligence and bring it one step closer to the 
industry\u2019s&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,179],"tags":[],"class_list":["post-5385","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5385","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=5385"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5385\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=5385"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=5385"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=5385"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}