{"id":5087,"date":"2025-09-09T17:22:06","date_gmt":"2025-09-09T17:22:06","guid":{"rendered":"https:\/\/musictechohio.online\/site\/simulation-theory-ai-warning\/"},"modified":"2025-09-09T17:22:06","modified_gmt":"2025-09-09T17:22:06","slug":"simulation-theory-ai-warning","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/simulation-theory-ai-warning\/","title":{"rendered":"The Man Who Proposed Simulation Theory Has a Dire Warning"},"content":{"rendered":"<div>\n<div><img loading=\"lazy\" width=\"2400\" height=\"1260\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/09\/simulation-theory-ai-warning.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"Philosopher Nick Bostrom brought simulation theory and AI superintelligence into the public psyche \u2014 and now he has a new warning for us all.\" style=\"margin-bottom: 15px;\" decoding=\"async\"><\/div>\n<p>More than 20 years ago, futurist intellectual Nick Bostrom <a href=\"https:\/\/www.library.rochester.edu\/about\/news\/simulation-theory-ultimate-existential-crisis\">upended the psyches of tech bros<\/a>\u00a0the world around when he proposed in a 2003 <a href=\"https:\/\/simulation-argument.com\/simulation.pdf\"><em>Philosophical Quarterly<\/em> paper<\/a>\u00a0that we all may be living in a computer simulation.<\/p>\n<p>Beloved by such strange bedfellows as <a href=\"https:\/\/www.ox.ac.uk\/news\/arts-blog\/elon-musk-funds-oxford-research-machine-intelligence\">Elon Musk<\/a>, <a href=\"https:\/\/qz.com\/698334\/bill-gates-says-these-are-the-two-books-we-should-all-read-to-understand-ai\">Bill Gates<\/a>, and <a href=\"https:\/\/blog.samaltman.com\/machine-intelligence-part-1\">Sam Altman<\/a>, Bostrom has released two other influential missives \u2014 2014&#8217;s &#8220;<a href=\"https:\/\/lithub.com\/what-if-instead-of-making-paperclips-we-asked-an-ai-super-intelligence-to-make-us-all-happy\/\">Superintelligence: Paths, Dangers, Strategies<\/a>,&#8221; which 
detailed the ways AI could become smarter than humans, and 2024&#8217;s &#8220;<a href=\"https:\/\/www.wired.com\/story\/nick-bostrom-fear-ai-fix-everything\/\">Deep Utopia: Life and Meaning in a Solved World<\/a>,&#8221; which ponders what will happen if AI fixes everything \u2014 in the interim.<\/p>\n<p>He was also embroiled in a minor controversy after a <a href=\"https:\/\/www.vice.com\/en\/article\/prominent-ai-philosopher-and-father-of-longtermism-sent-very-racist-email-to-a-90s-philosophy-listserv\/\">very racist email he sent<\/a> in the 1990s was uncovered in 2023, and the following year, his Future of Humanity Institute at Oxford was shut down in what the philosopher lamented as &#8220;<a href=\"https:\/\/www.theguardian.com\/technology\/2024\/apr\/19\/oxford-future-of-humanity-institute-closes\">death by bureaucracy<\/a>.&#8221;<\/p>\n<p>Now, speaking from the other side of the <a href=\"https:\/\/theweek.com\/tech\/2023-ai-boom\">AI boom<\/a> of the <a href=\"https:\/\/www.reuters.com\/business\/nvidia-ceo-says-ai-boom-far-over-after-tepid-sales-forecast-2025-08-28\/\">past few years<\/a>, Bostrom has begun to see, <a href=\"https:\/\/www.standard.co.uk\/lifestyle\/nick-bostrom-interview-ai-super-intelligence-b1244915.html\">as he told <em>The London Standard<\/em><\/a>, some of his predictions about AI come to fruition in real time.<\/p>\n<p>&#8220;It\u2019s all happening now,&#8221; the philosopher said of AI advancement. &#8220;I\u2019m quite impressed by the speed of developments that we\u2019ve seen in the past several years.&#8221;<\/p>\n<p>Bostrom added that the world appears to be &#8220;on the track towards&#8221; artificial general intelligence, or the point at which AI systems become as intelligent as humans. 
When he wrote &#8220;Superintelligence&#8221; in the early 2010s, he was, as he told the <em>Standard<\/em>, mostly spitballing \u2014 and now, as we approach AGI, some of his ideas about it are changing too.<\/p>\n<p>Back in 2019, when even today&#8217;s nascent AI technology seemed like science fiction, <a href=\"https:\/\/www.businessinsider.com\/nick-bostrom-ai-greater-threat-to-human-existence-than-climate-change-2019-4\">Bostrom told\u00a0<em>Business Insider<\/em><\/a> that &#8220;AI is a bigger threat to human existence than climate change,&#8221; and that it would be the &#8220;biggest change we see this century.&#8221;<\/p>\n<p>When asked recently if that was still his belief, the philosopher demurred.<\/p>\n<p>&#8220;There remains always the possibility that human civilization might destroy itself in some other way,&#8221; Bostrom told the <em>Standard<\/em>, &#8220;such that we don\u2019t even get the chance to try our luck with superintelligence.&#8221;<\/p>\n<p>Even more interestingly, it appears that the former Oxford professor has also changed his tune somewhat about advanced AI, telling the London newspaper that AGI is an inevitability that he&#8217;s not necessarily against.<\/p>\n<p>&#8220;Completely reorganizing society could be a positive thing,&#8221; Bostrom mused.<\/p>\n<p>As that level of advanced AI approaches, the futurist thinker has four key quandaries to grapple with: how to align AI with human values and safety, how to govern AGI so that humans won&#8217;t use it for evil, how to &#8220;respect the moral status of digital minds,&#8221; and how to stop superintelligences from going after each other.<\/p>\n<p>The first two points \u2014 alignment and governance \u2014 can be found in the mission statement of any AI lab. 
The latter two, however, are pretty unconventional, even for the guy who popularized simulation theory.<\/p>\n<p>&#8220;This might sound strange, but as we are building increasingly sophisticated and complex AIs, it\u2019s possible that some of those will have various degrees and forms of moral status,&#8221; Bostrom explained of the &#8220;digital minds&#8221; risk area.<\/p>\n<p>His description of a battle between superintelligences is, somehow, even weirder. Should alien races come to Earth with their own mega-advanced AI, Bostrom posited, humans could theoretically end up playing peacekeeper between them.<\/p>\n<p>&#8220;If any of those types of [alien] beings exist,&#8221; he elucidated, an &#8220;important desideratum when we create our own superintelligence is to make sure that it will get along with these other super beings.&#8221;<\/p>\n<p>Bizarre as all those theoretical futures sound, Bostrom&#8217;s thinking on the AI we have right now is much more down to earth \u2014 albeit a bit utopian, too.<\/p>\n<p>&#8220;The goal is full unemployment,&#8221; the philosopher declared. &#8220;We\u2019d need to find different bases for our self-worth and dignity and different ways of filling our lives and days, aside from having to work for a living.&#8221;<\/p>\n<p>As difficult as it is to imagine a world where &#8220;full unemployment&#8221; is a positive, Bostrom, ever the &#8220;fretful optimist,&#8221; thinks that a post-work world could be splendid.<\/p>\n<p>&#8220;Ultimately, much greater space for human flourishing could be unlocked through advanced AI,&#8221; he concluded. 
&#8220;If things go well, people will look back on 2025 and shudder in horror at the lives we lived.&#8221;<\/p>\n<p>More on AI futures: <a href=\"https:\/\/futurism.com\/godfather-ai-bizarre-plan-save-humanity\"><em>The \u201cGodfather of AI\u201d Has a Bizarre Plan to Save Humanity From Evil AI<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/simulation-theory-ai-warning\">The Man Who Proposed Simulation Theory Has a Dire Warning<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>More than 20 years ago, futurist intellectual Nick Bostrom upended the psyches of tech bros\u00a0the world around when he proposed in a 2003 Philosophical Quarterly paper\u00a0that we all may be&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[829,177,3701,3702,2258],"tags":[],"class_list":["post-5087","post","type-post","status-publish","format-standard","hentry","category-agi","category-artificial-intelligence","category-nick-bostrom","category-simulation-theory","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5087","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=5087"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5087\/revisions"}],"wp:attachment":[{
"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=5087"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=5087"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=5087"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}