OpenAI’s Top Scientist Wanted to “Build a Bunker Before We Release AGI”

OpenAI's former chief scientist Ilya Sutskever has long been preparing for AGI, and he discussed doomsday prep plans with coworkers.

Feel The AGI

OpenAI’s former chief scientist, Ilya Sutskever, has long been preparing for artificial general intelligence (AGI), an ill-defined industry term for the point at which human intellect is outpaced by algorithms — and he’s got some wild plans for when that day may come.

In interviews with The Atlantic‘s Karen Hao, who is writing a book about the unsuccessful November 2023 ouster of CEO Sam Altman, people close to Sutskever said that he seemed mighty preoccupied with AGI.

According to a researcher who heard the since-departed cofounder hold forth on the subject during a summer 2023 meeting, an apocalyptic scenario seemed to be a foregone conclusion to Sutskever.

“Once we all get into the bunker…” the chief scientist began.

“I’m sorry,” the researcher interrupted, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever said, matter-of-factly. “Of course, it’s going to be optional whether you want to get into the bunker.”

The exchange highlights just how confident OpenAI’s leadership was, and remains, in the technology that it believes it’s building — even though others argue that we are nowhere near AGI and may never get there.

Rapturous

As theatrical as that sounds, two other people who were present confirmed that OpenAI's resident AGI soothsayer (who, notably, claimed months before ChatGPT's 2022 release that some AI models are "slightly conscious") did indeed mention a bunker.

“There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture,” the first researcher told Hao. “Literally, a rapture.”

As others who spoke to the author for her forthcoming book “Empire of AI” noted, Sutskever’s AGI obsession had taken on a novel tenor by summer 2023. Aside from his interest in building AGI, he had also become concerned about the way OpenAI was handling the technology it was gestating.

That concern led the mad scientist, alongside several other members of the company's board, to oust CEO Sam Altman a few months later, a move that ultimately precipitated his own departure.

Though Sutskever led the coup, his resolve, according to sources who spoke to The Atlantic, began to crack once he realized OpenAI's rank and file were falling in line behind Altman. He eventually walked back his position that the CEO was unfit to lead, in what seems to have been an effort to save his own skin. In the end, that effort proved fruitless.

Interestingly, Hao also learned that people inside OpenAI had a nickname for the failed coup d’etat: “The Blip.”

More on AGI: Sam Altman Says OpenAI Has Figured Out How to Build AGI


Codex, OpenAI’s New Coding Agent, Wants to Be a World-Killer

OpenAI is peddling what it calls a "cloud-based software engineering agent," but fails to explain where it's getting the data to train it.

Though artificial intelligence is taking the world by storm, it's still pretty bad at tasks demanding a high degree of flexibility, like writing computer code.

Earlier this year, ChatGPT maker OpenAI published a white paper taking AI to task for its lackluster performance in a coding scrum. Among other things, it found that even the most advanced AI models are “still unable to solve the majority” of coding tasks.

Later, in an interview, OpenAI CEO Sam Altman said that these models are "on the precipice of being incredible at software engineering," adding that "software engineering by the end of 2025 looks very different than software engineering at the beginning of 2025."

It was a bold prediction without much substance to back it up. If anything, generative AI like the kind Altman peddles has only gotten worse at coding, with hallucination rates increasing with each new iteration.

Now we know what he was playing at.

Early on Friday, OpenAI revealed a preview of Codex, the company’s stab at a specialty coding “agent” — a fluffy industry term that seems to change definitions depending on which company is trying to sell one to you.

“Codex is a cloud-based software engineering agent that can work on many tasks in parallel,” the company’s research preview reads.

The new tool will seemingly help software engineers by writing new features, debugging existing code, and answering questions about source code, among other tasks.

Unlike ChatGPT's everything-in-a-box model, which is geared toward the mass market, Codex has been trained to "generate code that closely mirrors human style and PR preferences." That's a charitable way to say "steal other people's code," an AI training tactic OpenAI has been sued over in the not-too-distant past, when it helped Microsoft's Copilot go to town on open-source and copyrighted code shared on GitHub.

Thanks in large part to a technicality, OpenAI, GitHub, and Microsoft came out of that legal scuffle pretty much unscathed, giving OpenAI some convenient legal armor should it choose to go it alone with its own in-house model trained on GitHub code.

In the Codex release, OpenAI claims its coding agent operates entirely in the cloud, cut off from the internet, meaning it can’t scour the web for data like ChatGPT. Instead, OpenAI “limits the agent’s interaction solely to the code explicitly provided via GitHub repositories and pre-installed dependencies configured by the user via a setup script.”
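
In practice, that means everything the agent might need has to be installed before the sandbox is sealed off from the network. As a rough sketch of what such a user-supplied setup script could look like (the package names, versions, and structure here are hypothetical, not drawn from OpenAI's documentation):

```python
# Hypothetical setup script: pre-install pinned dependencies while the
# network is still reachable; once the sandbox is sealed, the agent can
# only use what's already on disk.
import subprocess
import sys

DEPENDENCIES = [
    "requests==2.31.0",  # example runtime dependency
    "pytest==8.2.0",     # example test runner the agent can invoke offline
]

def main() -> None:
    # Pin exact versions so the agent sees a reproducible environment.
    subprocess.check_call([sys.executable, "-m", "pip", "install", *DEPENDENCIES])

if __name__ == "__main__":
    main()
```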

Still, the data used to train Codex had to come from somewhere, and judging by the rash of copyright lawsuits plaguing the AI industry, it's only a matter of time before we find out where.

More on OpenAI: ChatGPT Users Are Developing Bizarre Delusions


Company Regrets Replacing All Those Pesky Human Workers With AI, Just Wants Its Humans Back

Years after outsourcing marketing and customer service gigs to AI, the Swedish company Klarna is looking to hire its humans back.

Two years after partnering with OpenAI to automate marketing and customer service jobs, financial tech startup Klarna says it’s longing for human connection again.

Once gunning to be OpenAI CEO Sam Altman’s “favorite guinea pig,” Klarna is now plotting a big recruitment drive after its AI customer service agents couldn’t quite hack it.

The buy-now-pay-later company shredded its marketing contracts in 2023 and its customer service team in 2024, proudly replacing both with AI agents. Now, the company says it imagines an "Uber-type of setup" to refill those ranks, with gig workers logging in remotely to argue with customers from the comfort of their own homes.

“From a brand perspective, a company perspective, I just think it’s so critical that you are clear to your customer that there will be always a human if you want,” admitted Sebastian Siemiatkowski, the Swedish fintech’s CEO.

That’s a pretty big shift from his comments in December of 2024, when he told Bloomberg he was “of the opinion that AI can already do all of the jobs that we, as humans, do.” A year before that, Klarna had stopped hiring humans altogether, reducing its workforce by 22 percent.

A few months after freezing new hires, Klarna bragged that it saved $10 million on marketing costs by outsourcing tasks like translation, art production, and data analysis to generative AI. It likewise claimed that its automated customer service agents could do the work of “700 full-time agents.”

So why the sudden about-face? As it turns out, leaving your already-frustrated customers to deal with a slop-spinning algorithm isn’t exactly best practice.

As Siemiatkowski told Bloomberg, “cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality.”

Klarna isn’t alone. Though executives in every industry, from news media to fast food, seem to think AI is ready for the hot seat — an attitude that’s more grounded in investor relations than an honest assessment of the tech — there are growing signs that robot chickens are coming home to roost.

In January of last year, a survey of over 1,400 business executives found that 66 percent were "ambivalent or outright dissatisfied with their organization's progress on AI and GenAI so far." The top issue those bosses cited was a "lack of talent and skills."

It’s a problem that evidently hasn’t improved over the year. Another survey recently found that over 55 percent of UK business leaders who rushed to replace jobs with AI now regret their decision.

It’s not hard to see why. An experiment carried out by researchers at Carnegie Mellon University stuffed a fake software company full of AI employees, and their performance was laughably bad — the best AI worker finished just 24 percent of the tasks assigned to it.

When it comes to the question of whether AI will take jobs, there seem to be as many answers as there are CEOs excited to save a buck.

There are gray areas, to be sure — AI is certainly helping corporations speed up low-wage outsourcing, and the tech is having a verifiable effect on labor market volatility — just don’t count on CEOs to have much patience as AI starts to chomp at their bottom line.

More on AI: Dystopia Intensifies as Startup Lets You Take Out a Micro-Loan to Get Fast Food


The FDA Will Use AI to Accelerate Drug Approvals

The FDA announced that it will start using AI across all of its centers to shorten the drug review process.

The Food and Drug Administration just announced that it will immediately start using AI across all of its centers, after completing a new generative AI pilot for scientific reviewers.

Supposedly, the AI tool will speed up the FDA’s drug review process by reducing the time its scientists have to spend doing tedious, repetitive tasks — though, given AI’s track record of constantly hallucinating, these claims warrant plenty of scrutiny.

“This is a game-changer technology that has enabled me to perform scientific review tasks in minutes that used to take three days,” said Jinzhong Liu, a deputy director in the FDA’s Center for Drug Evaluation and Research (CDER), in a statement.

FDA commissioner Martin Makary has directed that all FDA centers should achieve full AI integration by June 30, a questionably aggressive timeline.

“By that date, all centers will be operating on a common, secure generative AI system integrated with FDA’s internal data platforms,” the agency said in its announcement.

The announcement comes just a day after Wired reported that the FDA and OpenAI were holding talks to discuss the agency’s use of AI. Notably, the FDA’s new statement makes no mention of OpenAI or its potential involvement.

Behind the scenes, however, Wired's sources say that a team from the ChatGPT maker met with the FDA and two associates from Elon Musk's so-called Department of Government Efficiency multiple times in recent weeks to discuss a project called "cderGPT." The name is almost certainly a reference to the FDA's aforementioned CDER, which regulates drugs sold in the US.

This may have been a long time coming. Wired notes that the FDA sponsored a fellowship in 2023 to develop large language models for internal use. And according to Robert Califf, who served as FDA commissioner from 2016 to 2017 and again from 2022 to early 2025, the agency's review teams have already been experimenting with AI for several years.

“It will be interesting to hear the details of which parts of the review were ‘AI assisted’ and what that means,” Califf told Wired. “There has always been a quest to shorten review times and a broad consensus that AI could help.”

The agency is reportedly considering using AI in other aspects of its operations, too.

“Final reviews for approval are only one part of a much larger opportunity,” Califf added.

Makary, who was appointed commissioner by President Donald Trump, has frequently expressed his enthusiasm for the technology.

“Why does it take over ten years for a new drug to come to market?” he tweeted on Wednesday. “Why are we not modernized with AI and other things?”

The FDA news parallels a broader trend of AI adoption in federal agencies during the Trump administration. In January, OpenAI announced a version of its chatbot called ChatGPT Gov designed to be secure enough to process sensitive government information. Musk has pushed to fast-track the development of another AI chatbot for the US General Services Administration, while using the technology to try to rewrite the Social Security computer system.

Yet the risks of using the technology in a medical context are concerning, to say the least. Speaking to Wired, an ex-FDA staffer who has tested ChatGPT as a clinical tool pointed out the chatbot's proclivity for making up convincing-sounding lies, a problem that won't go away anytime soon.

“Who knows how robust the platform will be for these reviewers’ tasks,” the former FDA employee told the magazine.

More on medical AI: Nonverbal Neuralink Patient Is Using Brain Implant and Grok to Generate Replies


OpenAI Forced to Abandon Plans to Become For-Profit

Thanks in part to erstwhile cofounder Elon Musk's lawsuit, OpenAI won't be going entirely for-profit anytime soon.

Money Matters

OpenAI may be raking in the investor dough, but thanks in part to erstwhile cofounder Elon Musk, the company won’t be going entirely for-profit anytime soon.

In a blog post this week, the Sam Altman-run company announced that it will remain under the control of its original nonprofit governing board even as it reworks the planned restructuring of its for-profit arm.

"Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)," the post reads, describing that structure as a "purpose-driven company structure that has to consider the interests of both shareholders and the mission."

Though Musk was not named, the allusion to "the mission" (building artificial general intelligence, or AGI, that "benefits all of humanity") hearkens back to the billionaire's lawsuit alleging that OpenAI strayed from said purpose when it launched its for-profit arm in 2019, following his exit.

OpenAI claims in its post that it came to the decision to remain under the control of the nonprofit board (the same one that fired Altman in November 2023, only to reinstate him a few days later) "after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California."

Mission Impossible

Late last December, amid Musk's ongoing suit, initially filed in March 2024, the company announced its plans to restructure into a PBC that would help it "raise more capital than we'd imagined" while staying on-mission.

That plan, as CNN reported, set off alarm bells about how OpenAI would balance raising gobs of money with its beneficial AGI mission. This latest move appears to be its response, though according to Musk's attorney Marc Toberoff, the PBC announcement "changes nothing."

“OpenAI’s announcement is a transparent dodge that fails to address the core issues: charitable assets have been and still will be transferred for the benefit of private persons,” Toberoff said in a statement provided to Bloomberg. “The founding mission remains betrayed.”

In a rebuttal to the same outlet, an OpenAI insider hit back at Musk and his “baseless lawsuit,” which “only proves that it was always a bad-faith attempt to slow us down.”

Accusations aside, this is still a pretty far cry from turning OpenAI into a bona fide for-profit venture — and regardless of what the company claims, Musk’s almost certainly jealousy-based lawsuit has played a role in making sure that doesn’t happen.

More on OpenAI moves: OpenAI Trying to Buy Chrome So It Can Ingest Your Entire Online Life to Train AI
