
More than 20 years ago, futurist intellectual Nick Bostrom upended the psyches of tech bros around the world when he proposed, in a 2003 Philosophical Quarterly paper, that we may all be living in a computer simulation.
Beloved by such strange bedfellows as Elon Musk, Bill Gates, and Sam Altman, Bostrom has since released two other influential missives: 2014’s “Superintelligence: Paths, Dangers, Strategies,” which detailed the ways AI could become smarter than humans, and 2024’s “Deep Utopia: Life and Meaning in a Solved World,” which ponders what will happen if AI fixes everything.
He also was embroiled in a minor controversy after a very racist email he sent in the 1990s was uncovered in 2023, and the following year, his Future of Humanity Institute at Oxford was shut down in what the philosopher lamented as “death by bureaucracy.”
Now, speaking from the other side of the AI boom of the past few years, Bostrom told The London Standard that he has begun to see some of his predictions about AI come to fruition in real time.
“It’s all happening now,” the philosopher said of AI advancement. “I’m quite impressed by the speed of developments that we’ve seen in the past several years.”
Bostrom added that the world appears to be “on the track towards” artificial general intelligence, or the point at which AI systems become as intelligent as humans. When he wrote “Superintelligence” in the early 2010s, he was, as the philosopher told the Standard, mostly spitballing — and now, as we approach it, some of his ideas about it are changing too.
Back in 2019, when even today’s nascent AI technology seemed like science fiction, Bostrom told Business Insider that “AI is a bigger threat to human existence than climate change,” and that it will be the “biggest change we see this century.”
Asked recently whether that was still his belief, the philosopher demurred.
“There remains always the possibility that human civilization might destroy itself in some other way,” Bostrom told the Standard, “such that we don’t even get the chance to try our luck with superintelligence.”
Even more interestingly, it appears that the former Oxford professor has also changed his tune somewhat about advanced AI, telling the London newspaper that AGI is an inevitability that he’s not necessarily against.
“Completely reorganizing society could be a positive thing,” Bostrom mused.
As that level of advanced AI approaches, the futurist thinker has four key quandaries to grapple with: how to align AI with human values and safety, how to govern AGI so that humans won’t use it for evil, how to “respect the moral status of digital minds,” and how to stop superintelligences from going after each other.
Those first two points — alignment and governance — can be found in the mission statement of any AI lab. The latter two, however, are pretty unconventional, even for the guy who popularized simulation theory.
“This might sound strange, but as we are building increasingly sophisticated and complex AIs, it’s possible that some of those will have various degrees and forms of moral status,” Bostrom explained of the “digital minds” risk area.
His description of the battle of superintelligences is, somehow, even weirder. Should alien races come to Earth with their own mega-advanced AI, Bostrom posited, humans could theoretically end up playing peacekeeper between them.
“If any of those types of [alien] beings exist,” he elucidated, an “important desideratum when we create our own superintelligence is to make sure that it will get along with these other super beings.”
Bizarre as all those theoretical futures sound, Bostrom’s thinking on the AI we have right now is much more down to earth, albeit a bit utopian, too.
“The goal is full unemployment,” the philosopher declared. “We’d need to find different bases for our self-worth and dignity and different ways of filling our lives and days, aside from having to work for a living.”
As difficult as it is to imagine a world where “full unemployment” is a positive, Bostrom, ever the “fretful optimist,” thinks that a post-work world could be splendid.
“Ultimately, much greater space for human flourishing could be unlocked through advanced AI,” he concluded. “If things go well, people will look back on 2025 and shudder in horror at the lives we lived.”
The post The Man Who Proposed Simulation Theory Has a Dire Warning appeared first on Futurism.