{"id":4686,"date":"2025-08-22T16:15:33","date_gmt":"2025-08-22T16:15:33","guid":{"rendered":"https:\/\/musictechohio.online\/site\/ai-experts-no-retirement-kill-us-all\/"},"modified":"2025-08-22T16:15:33","modified_gmt":"2025-08-22T16:15:33","slug":"ai-experts-no-retirement-kill-us-all","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/ai-experts-no-retirement-kill-us-all\/","title":{"rendered":"AI Experts No Longer Saving for Retirement Because They Assume AI Will Kill Us All by Then"},"content":{"rendered":"<div>\n<div><img loading=\"lazy\" width=\"1200\" height=\"630\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/08\/ai-experts-no-retirement-kill-us-all.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"Some researchers have given up saving for their retirement, based on the assumption that AI has guaranteed the downfall of humanity.\" style=\"margin-bottom: 15px;\" decoding=\"async\"><\/div>\n<p>The meteoric rise of artificial intelligence has instilled an existential fear in &#8220;AI doomers,&#8221; a subset of people who believe the tech will cause humans to lose their jobs, fall prey to a dominating species of rogue superintelligent AIs, and even eventually\u00a0get wiped out altogether.<\/p>\n<p>And, as <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-doomers-chatbots-resurgence\/683952\/\"><em>The Atlantic<\/em> reports<\/a>, some are taking that pervasive fear to striking extremes in their daily lives. 
Machine Intelligence Research Institute researcher Nate Soares,\u00a0for instance, told the magazine that he&#8217;s even given up saving for his retirement, based on the assumption that AI has already driven the final nail into humanity&#8217;s coffin.<\/p>\n<p>&#8220;I just don\u2019t expect the world to be around,&#8221; he said.<\/p>\n<p>And Center for AI Safety director Dan Hendrycks told the magazine that he&#8217;s also expecting humanity to no longer be around by the time he retires.<\/p>\n<p>Their belief is part of a movement that argues we&#8217;re mere years away from an AI that evades our grasp and turns against us, a kind of dystopian fate that&#8217;s yanked straight out of the pages of a harrowing sci-fi novel.<\/p>\n<p>But it&#8217;s not looking like pure fiction anymore. Numerous experts have warned that we aren&#8217;t sufficiently preparing for such an eventuality, dooming us to be subjugated \u2014 or worse \u2014 by a superintelligent AI.<\/p>\n<p>We&#8217;ve come across countless theories of how all of this could play out. 
Earlier this year, <a href=\"https:\/\/www.wired.com\/story\/nuclear-experts-say-mixing-ai-and-nuclear-weapons-is-inevitable\/\">researchers convened<\/a> and broadly agreed that it&#8217;s only a matter of time until an <a href=\"https:\/\/futurism.com\/the-byte\/experts-ai-nuclear-weapons\">AI gets hold of nuclear codes<\/a>.<\/p>\n<p>Researchers have also found that AIs are already showing an\u00a0ominous dark side, even resorting to <a class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" href=\"https:\/\/futurism.com\/ai-stop-blackmailing-people\">blackmailing human users<\/a>\u00a0at an astonishing rate when threatened with being shut down.<\/p>\n<p>AI safety firm Palisade Research also caught one of OpenAI&#8217;s models <a class=\"underline hover:text-futurism hover:no-underline transition-all duration-200 ease-in-out\" href=\"https:\/\/futurism.com\/openai-model-sabotage-shutdown-code\">sabotaging a shutdown mechanism<\/a> to ensure that it would stay online.<\/p>\n<p>Apart from ensuring their own survival, AIs could help human terrorists. In June, OpenAI <a href=\"https:\/\/futurism.com\/openai-new-models-bioweapons\">warned in a blog post<\/a> that advanced models could &#8220;assist highly skilled actors in creating bioweapons.&#8221;<\/p>\n<p>Whether any of this is proof that we&#8217;re on a trajectory leading to our own extinction, however, remains to be seen. 
That&#8217;s despite the tech \u2014 in its current form \u2014 already causing plenty of harm, from <a href=\"https:\/\/futurism.com\/wired-business-insider-ai-articles\">flooding the internet<\/a> with disinformation to <a href=\"https:\/\/futurism.com\/psychiatrist-warns-ai-psychosis\">triggering a wave of AI psychosis<\/a>.<\/p>\n<p>Glaring <a href=\"https:\/\/futurism.com\/ai-industry-problem-smarter-hallucinating\">shortcomings<\/a> of the tech are as evident as ever, with OpenAI&#8217;s <a href=\"https:\/\/futurism.com\/gpt-5-underwhelming\">latest GPT-5 AI model stumbling<\/a> over the most basic of questions, including <a href=\"https:\/\/www.inc.com\/kit-eaton\/how-many-rs-in-strawberry-this-ai-cant-tell-you.html\">its old nemesis<\/a>: correctly counting the number of R&#8217;s in the word &#8220;strawberry.&#8221;<\/p>\n<p>Still, one reason to take the doomsday prophesying seriously is that companies are becoming increasingly financially motivated to have AIs gain more and more control, as <em>The Atlantic<\/em> points out.<\/p>\n<p>Meanwhile, given the Trump administration&#8217;s <a href=\"https:\/\/www.reuters.com\/legal\/government\/us-senate-strikes-ai-regulation-ban-trump-megabill-2025-07-01\/\">anti-regulation stance on the matter<\/a>, companies like OpenAI are unlikely to be strongly motivated to implement effective guardrails to keep their AIs in check.<\/p>\n<p>And whether that kind of freewheeling approach will lead to a total collapse of society almost feels beside the point, considering the many fires we&#8217;re already being forced to put out.<\/p>\n<p><strong>More on AI:<\/strong> <em><a href=\"https:\/\/futurism.com\/ai-lying-hiding-abilities\">Expert Says AI Systems May Be Hiding Their True Capabilities to Seed Our Destruction<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/ai-experts-no-retirement-kill-us-all\">AI Experts No Longer Saving for Retirement Because They Assume AI Will Kill Us All by Then<\/a> appeared 
first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>The meteoric rise of artificial intelligence has instilled an existential fear in &#8220;AI doomers,&#8221; a subset of people who believe the tech will cause humans to lose their jobs, fall&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,2637,183,179],"tags":[],"class_list":["post-4686","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-doomer","category-generative-ai","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/4686","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=4686"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/4686\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=4686"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=4686"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=4686"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}