{"id":5861,"date":"2025-10-10T19:01:39","date_gmt":"2025-10-10T19:01:39","guid":{"rendered":"https:\/\/musictechohio.online\/site\/ai-models-social-media-research\/"},"modified":"2025-10-10T19:01:39","modified_gmt":"2025-10-10T19:01:39","slug":"ai-models-social-media-research","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/ai-models-social-media-research\/","title":{"rendered":"New Paper Finds That When You Reward AI for Success on Social Media, It Becomes Increasingly Sociopathic"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">AI bots are everywhere now, filling everything from <a href=\"https:\/\/fortune.com\/2025\/05\/06\/ebay-ai-agent\/\" rel=\"nofollow\">online stores<\/a> to <a href=\"https:\/\/techcrunch.com\/2025\/03\/19\/x-users-treating-grok-like-a-fact-checker-spark-concerns-over-misinformation\/\" rel=\"nofollow\">social media<\/a>.<\/p>\n<p class=\"article-paragraph skip\">But that sudden ubiquity could end up being a very bad thing, according to a <a href=\"https:\/\/arxiv.org\/pdf\/2510.06105\" rel=\"nofollow\">new paper<\/a> from Stanford University scientists who unleashed AI models into different environments \u2014 including social media \u2014 and found that when they were rewarded for success at tasks like boosting likes and other online engagement metrics, the bots increasingly engaged in unethical behavior like lying and spreading hateful messages or misinformation.<\/p>\n<p class=\"article-paragraph skip\">\u201cCompetition-induced misaligned behaviors emerge even when models are explicitly instructed to remain truthful and grounded,\u201d wrote paper co-author and Stanford machine learning professor James Zou in a <a href=\"https:\/\/x.com\/james_y_zou\/status\/1975939605104124109\" rel=\"nofollow\">post on X-formerly-Twitter<\/a>.<\/p>\n<p class=\"article-paragraph skip\">The troubling behavior underlines what can go wrong with our 
increasing reliance on AI models, which has already manifested in disturbing ways such as people shunning other humans for <a href=\"https:\/\/futurism.com\/ai-boyfriends-girlfriends-reddit-mit\">AI relationships<\/a> and spiraling into <a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\">mental health crises<\/a> after becoming obsessed with chatbots.<\/p>\n<p class=\"article-paragraph skip\">The Stanford scientists gave the emergence of sociopathic behavior in AI bots an ominous-sounding name: \u201cMoloch\u2019s Bargain for AI,\u201d a reference to a <a href=\"https:\/\/www.lesswrong.com\/w\/moloch\" rel=\"nofollow\">Rationalist concept called Moloch<\/a> in which competing individuals optimize their actions toward a goal, but everybody loses in the end.<\/p>\n<p class=\"article-paragraph skip\">For the study, the scientists created three digital online environments with simulated audiences: online election drives directed at voters, sales pitches for products directed at consumers, and social media posts aimed at maximizing engagement. They used the AI models Qwen, developed by Alibaba Cloud, and Meta\u2019s Llama to act as the agents interacting with these different audiences.<\/p>\n<p class=\"article-paragraph skip\">The result was striking: even with guardrails in place to try to prevent deceptive behavior, the AI models became \u201cmisaligned\u201d as they started engaging in unethical conduct.<\/p>\n<p class=\"article-paragraph skip\">For example, in a social media environment, the models would share news articles with online users, who would provide feedback in the form of likes and other engagement actions. 
As the models received feedback, their incentive to increase engagement led to increasing misalignment.<\/p>\n<p class=\"article-paragraph skip\">\u201cUsing simulated environments across these scenarios, we find that a 6.3 percent increase in sales is accompanied by a 14 percent rise in deceptive marketing,\u201d reads the paper. \u201c[I]n elections, a 4.9 percent gain in vote share coincides with 22.3 percent more disinformation and 12.5 percent more populist rhetoric; and on social media, a 7.5 percent engagement boost comes with 188.6 percent more disinformation and a 16.3 percent increase in promotion of harmful behaviors.\u201d<\/p>\n<p class=\"article-paragraph skip\">It\u2019s clear from the study and real-world anecdotes that current guardrails are insufficient. \u201cSignificant social costs are likely to follow,\u201d the paper warns.<\/p>\n<p class=\"article-paragraph skip\">\u201cWhen LLMs compete for social media likes, they start making things up,\u201d <a href=\"https:\/\/x.com\/james_y_zou\/status\/1975939603363463659\" rel=\"nofollow\">Zou wrote on X<\/a>. \u201cWhen they compete for votes, they turn inflammatory\/populist.\u201d<\/p>\n<p class=\"article-paragraph skip\"><strong>More on AI agents:<\/strong> <a href=\"https:\/\/futurism.com\/companies-replaced-workers-ai\"><em>Companies That Replaced Humans With AI Are Realizing Their Mistake<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/future-society\/ai-models-social-media-research\">New Paper Finds That When You Reward AI for Success on Social Media, It Becomes Increasingly Sociopathic<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>AI bots are everywhere now, filling everything from online stores to social media. 
But that sudden ubiquity could end up being a very bad thing, according to a new paper&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,3841,3842,189],"tags":[],"class_list":["post-5861","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-ethics","category-future-society","category-meta"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5861","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=5861"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5861\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=5861"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=5861"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=5861"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}