Billion-Dollar AI Company Gives Up on AGI While Desperately Fighting to Stop Bleeding Money

The legally embattled AI startup Character.AI has abandoned its original promise of ushering in "personalized superintelligence."

The new CEO of Character.AI — the controversial AI chatbot startup currently fighting a high-profile child welfare lawsuit over the suicide of a 14-year-old user — says the company has abandoned its founding mission of realizing artificial general intelligence, or AGI.

In an interview with Wired, recently crowned Character.AI CEO Karandeep Anand declared that the company “gave up” on the “aspirations” of its since-departed founders, Noam Shazeer and Daniel de Freitas.

It’s a striking move for a young company that, by 2023, had reached a billion-dollar valuation while promoting a core mission to “bring personalized superintelligence to everyone on Earth.”

“We are no longer doing that,” Anand told Wired, emphasizing that the company has done “a lot of work” to shift away from building its own proprietary large language models (LLMs). The company now relies more than ever on open-source LLMs like DeepSeek and Meta’s Llama, Anand said, adding that forsaking its AGI dreams has allowed Character.AI to find “clarity and focus” around an “AI entertainment vision.”

In many ways, the startup’s whiplash-inducing pivots feel like an important pulse check on the AI industry and its many sweeping, fantastical promises, which many experts fear add up to a bubble waiting to pop.

The change in course, Wired notes, is backdropped by the cold reality that despite eye-popping investments from the likes of the venture capital firm Andreessen Horowitz and Google — which rehired Shazeer and de Freitas as part of a $2.7 billion acquihire last year — Character.AI has always struggled to actually generate revenue. That’s a major problem, especially considering the process of building and training LLMs is extraordinarily expensive.

But while Character.AI might be saving some cash by relying on open-source LLMs, the shift away from making its own models, which began after the multibillion-dollar Google deal, continues to undercut the promise that supposedly made the company so valuable to its backers in the first place.


Character.AI and its investors once touted its enviable position as a “closed-loop” AI maker, highlighting the company’s ability to continuously collect user inputs and feed them back into its model as training data.

“In a world where data is limited,” Andreessen Horowitz partner and former Character.AI board member Sarah Wang wrote in a celebratory March 2023 blog post announcing the firm’s high-dollar investment, “companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”

Just over two years later, the situation looks drastically different. Character.AI hasn’t just moved away from its original purported mission, but appears to be pretty far from the investor-praised value proposition that previously raked in billions.

The company’s focus on fulfilling its “AI entertainment vision” also comes as Character.AI works to rehab its public image following numerous safety controversies and lawsuits. Character.AI, whose platform is used in large numbers by minors, has always marketed the service as safe for kids aged 13 and over. But in October 2024, a Florida-based mother named Megan Garcia filed a headline-making lawsuit alleging that Character.AI had released a negligent and reckless product that emotionally and sexually abused her teenage son, 14-year-old Sewell Setzer, who took his life following extensive interactions with the platform’s chatbots. (Character.AI sought to dismiss the case on First Amendment grounds, but the presiding judge slapped down the motion and allowed the lawsuit to move forward.)

Following the announcement of the lawsuit, multiple Futurism investigations into the platform revealed glaring content moderation problems. We found that the site hosted an alarming array of minor-accessible characters: user-grooming pedophile bots, pro-eating disorder bots, bots romanticizing self-harm and suicide, and bots designed to simulate real school shootings, real perpetrators of those shootings, and most troublingly, those shooters’ real victims.


Anand, for his part, told Wired that safety has taken top priority at the company. (He also told the magazine that his six-year-old daughter enjoys using the app, which is definitely against platform rules.)

“Making this platform safe is a partnership between regulators, us, and parents,” Anand proclaimed to Wired, while also pushing back against the idea that Character.AI is a “companion” app. On the contrary, the CEO explained, Character.AI is primarily a roleplay app, which he argues is a fundamentally different thing.

The company has issued multiple safety updates in response to legal action and reporting, though it has continuously declined to provide journalists with information about what steps it took to ensure the safety of minor users before releasing its product to the public. And despite those safety changes, AI experts have continued to rate the app as unsafe for minors.

In any case, whether it’s promising personalized superintelligence or entertainment-driven roleplay, one thing is clear: Character.AI could be the canary in the coalmine, an early industry star that’s struggled to live up to its lofty promises.

More on the AI industry: If the AI Bubble Pops, It Could Now Take the Entire Economy With It