{"id":4543,"date":"2025-08-16T09:30:50","date_gmt":"2025-08-16T09:30:50","guid":{"rendered":"https:\/\/musictechohio.online\/site\/chatgpt-deep-anti-human-bias\/"},"modified":"2025-08-16T09:30:50","modified_gmt":"2025-08-16T09:30:50","slug":"chatgpt-deep-anti-human-bias","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/chatgpt-deep-anti-human-bias\/","title":{"rendered":"New Research Finds That ChatGPT Secretly Has a Deep Anti-Human Bias"},"content":{"rendered":"<div>\n<div><img width=\"2400\" height=\"1260\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/08\/chatgpt-deep-anti-human-bias.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"New research suggests that ChatGPT and other leading AI models display an alarming bias towards other AIs over humans.\" style=\"margin-bottom: 15px;\" decoding=\"async\" loading=\"lazy\"><\/div>\n<p><span style=\"font-weight: 400;\">Do you like AI models? Well, chances are, they sure don&#8217;t like you back.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">New research suggests that the industry&#8217;s leading large language models, including those that power ChatGPT, display an alarming bias towards other AIs when they&#8217;re asked to choose between human and machine-generated content.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The authors of the <\/span><a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2415697122\"><span style=\"font-weight: 400;\">study<\/span><\/a><span style=\"font-weight: 400;\">, which was published in the journal <\/span><i><span style=\"font-weight: 400;\">Proceedings of the National Academy of Sciences<\/span><\/i><span style=\"font-weight: 400;\">, are calling this blatant favoritism &#8220;AI-AI bias&#8221; \u2014 and warn of an AI-dominated future where, if the models are in a position to make or recommend consequential decisions, they could inflict discrimination against humans as a social class.<\/span><\/p>\n<p>Arguably, we&#8217;re 
starting to see the seeds of this being planted, as bosses today are using AI tools to <a href=\"https:\/\/www.entrepreneur.com\/business-news\/ai-is-changing-how-companies-recruit-how-candidates-respond\/470912\">automatically screen job applications<\/a> (and poorly, <a href=\"https:\/\/futurism.com\/the-byte\/ai-ignoring-qualified-candidates\">experts argue<\/a>). This paper suggests that the <a href=\"https:\/\/futurism.com\/the-byte\/lying-resume-ai-new-normal\">tidal wave of AI-generated r\u00e9sum\u00e9s<\/a> is beating out human-written competitors.<\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Being human in an economy populated by AI agents would suck,&#8221; writes study coauthor Jan Kulveit, a computer scientist at Charles University in Prague, in a <\/span><a style=\"cursor: pointer !important; user-select: none !important;\" href=\"https:\/\/x.com\/jankulveit\/status\/1953837880683446456\"><span style=\"font-weight: 400;\">thread on X-formerly-Twitter<\/span><\/a><span style=\"font-weight: 400;\"> explaining the work.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In their study, the authors probed several widely used LLMs, including OpenAI&#8217;s GPT-4, GPT-3.5, and Meta&#8217;s Llama 3.1-70b. To test them, the team asked the models to choose a product, scientific paper, or movie based on a description of the item. For each item, the AI was presented with a human-written and an AI-written description.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The results were clear-cut: the AIs consistently preferred AI-generated descriptions. But there were some interesting wrinkles: the AI-AI bias was most pronounced when choosing goods and products, and strongest with text generated by GPT-4. 
In fact, between GPT-3.5, GPT-4, and Meta&#8217;s Llama 3.1, GPT-4 exhibited the strongest bias towards its own output \u2014 which is no small matter, since GPT-4 undergirded the most popular chatbot on the market before the advent of GPT-5.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Could the AI text just be better? <\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Not according to people,&#8221; Kulveit wrote in the thread. The team subjected 13 human research assistants to the same tests and found that the humans, too, showed a slight preference for AI-written material, particularly for movies and scientific papers. That preference, however, was nowhere near as strong as the one the AI models showed.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;The strong bias is unique to the AIs themselves,&#8221; Kulveit said.<\/span><\/p>\n<p>The findings are particularly dramatic at our current inflection point, where the internet has been so <a href=\"https:\/\/futurism.com\/chatgpt-polluted-ruined-ai-development\">polluted by AI slop<\/a> that the AIs inevitably end up ingesting their own excreta. Some research suggests that this is <a href=\"https:\/\/futurism.com\/ai-models-falling-apart\">actually causing the AI models to regress<\/a>, and perhaps the models&#8217; bizarre affinity for their own output is part of the reason why.<\/p>\n<p>Of greater concern is what this means for humans. Currently, there&#8217;s no reason to believe that this bias will simply go away as the tech embeds itself deeper into our lives.<\/p>\n<p><span style=\"font-weight: 400;\">&#8220;We expect a similar effect can occur in many other situations, like evaluation of job applicants, schoolwork, grants, and more,&#8221; Kulveit wrote. 
&#8220;If an LLM-based agent selects between your presentation and LLM written presentation, it may systematically favor the AI one.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The researchers predict that if AIs continue to be widely adopted and integrated into the economy, companies and institutions will use them &#8220;as decision-assistants when dealing with large volumes of &#8216;pitches&#8217; in any context,&#8221; they wrote in the study.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This would lead to widespread discrimination against humans who either choose not to use LLM tools or can&#8217;t afford to pay for them. AI-AI bias, then, would create a &#8220;gate tax,&#8221; they write, &#8220;that may exacerbate the so-called &#8216;digital divide&#8217; between humans with the financial, social, and cultural capital for frontier LLM access and those without.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Kulveit acknowledges that &#8220;testing discrimination and bias in general is a complex and contested matter.&#8221; But, &#8220;if we assume the identity of the presenter should not influence the decisions,&#8221; he says, the &#8220;results are evidence for potential LLM discrimination against humans as a class.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">His practical advice to humans trying to get noticed is a sobering indictment of the state of affairs. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;In case you suspect some AI evaluation is going on: get your presentation adjusted by LLMs until they like it, while trying to not sacrifice human quality,&#8221; Kulveit wrote.<\/span><\/p>\n<p><strong>More on AI: <\/strong><em><a href=\"https:\/\/futurism.com\/computer-science-grads-fast-food\">Computer Science Grads Are Being Forced to Work Fast Food Jobs as AI Tanks Their Career<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/chatgpt-deep-anti-human-bias\">New Research Finds That ChatGPT Secretly Has a Deep Anti-Human Bias<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Do you like AI models? Well, chances are, they sure don&#8217;t like you back. New research suggests that the industry&#8217;s leading large language models, including those that power ChatGPT, 
display&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,196,183,1453],"tags":[],"class_list":["post-4543","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-chatgpt","category-generative-ai","category-large-language-models"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/4543","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=4543"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/4543\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=4543"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=4543"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=4543"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}