The Newest “Will Smith Eating Spaghetti” Video Includes AI-Generated Squelching and Chomping Sounds That Just Might Make You Sick

In a new "Will Smith eating spaghetti" AI clip, a far more recognizable Smith can be seen indulging in a tasty-looking plate of noodles.

Just over two years ago, we came across a deranged, AI-generated video of famed actor Will Smith chowing down on a bowl of spaghetti.

The clip, which went viral at the time, was the stuff of nightmares, with the AI model morphing Smith’s facial features in obscene ways, clearly unable to determine where his body ended and a forkful of sauce-laden pasta began.

But the technology has improved dramatically since then. In a revised clip shared by AI content creator Javi Lopez, a far more recognizable Smith can be seen indulging in a tasty-looking plate of noodles.

Unfortunately, the clip — which was rendered using Google DeepMind’s just-debuted Veo 3 video generation model — includes AI-generated sound as well, exposing us to a horrid soundtrack of squelching and masticating, the equivalent of nails on a chalkboard for those suffering from misophonia.

“I don’t feel so good,” quipped tech YouTuber Marques “MKBHD” Brownlee.

Nonetheless, it’s an impressive tech demo, highlighting how models like Veo 3 are getting eerily close to being able to generate photorealistic video — including believable sound and dialogue.


Google unveiled its “state-of-the-art” Veo 3 model earlier this week at its Google I/O 2025 developer conference.

“For the first time, we’re emerging from the silent era of video generation,” said DeepMind CEO Demis Hassabis during the event.

Beyond generating photorealistic footage, Veo 3 allows users to “suggest dialogue with a description of how you want it to sound,” according to Hassabis.

A video sequence opening Google’s I/O, which was generated with the tool, shows zoo animals taking over a Wild West town.

Getting access to the model doesn’t come cheap, with the feature currently locked behind Google’s $249.99-per-month AI Ultra plan.

Sample videos circulating on social media are strikingly difficult to differentiate from real life. And the jury’s still out on whether that’s a good or a bad thing. Critics have long rung the alarm bells over tools like Veo 3 putting human video editors out of a job or facilitating a flood of disinformation and propaganda on the internet.

More on AI: Star Wars’ Showcase of AI Special Effects Was a Complete Disaster


Judge Slaps Down Attempt to Throw Out Lawsuit Claiming AI Caused a 14-Year-Old’s Suicide

Google and Character.AI tried to dismiss a lawsuit that claims chatbots caused a 14-year-old's suicide. The case is moving forward.

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

A judge in Florida just rejected a motion to dismiss a lawsuit alleging that the chatbot startup Character.AI — and its closely tied benefactor, Google — caused the death by suicide of a 14-year-old user, clearing the way for the first-of-its-kind lawsuit to move forward in court.

The lawsuit, filed in October, claims that recklessly released Character.AI chatbots sexually and emotionally abused a teenage user, Sewell Setzer III, resulting in obsessive use of the platform, mental and emotional suffering, and ultimately his suicide in February 2024.

In January, the defendants in the case — Character.AI, Google, and Character.AI cofounders Noam Shazeer and Daniel de Freitas — filed a motion to dismiss the case mainly on First Amendment grounds, arguing that AI-generated chatbot outputs qualify as speech, and that “allegedly harmful speech, including speech allegedly resulting in suicide,” is protected under the First Amendment.

But this argument didn’t quite cut it, the judge ruled, at least not at this early stage. In her opinion, presiding US district judge Anne Conway said the companies failed to sufficiently show that AI-generated outputs produced by large language models (LLMs) are more than simply words, as opposed to speech, which hinges on intent.

The defendants “fail to articulate,” Conway wrote in her ruling, “why words strung together by an LLM are speech.”

The motion to dismiss did find some success, with Conway dismissing specific claims regarding the alleged “intentional infliction of emotional distress,” or IIED. (It’s difficult to prove IIED when the person who allegedly suffered it, in this case Setzer, is no longer alive.)

Still, the ruling is a blow to the high-powered Silicon Valley defendants who had sought to have the suit tossed out entirely.

Significantly, Conway’s opinion allows Megan Garcia, Setzer’s mother and the plaintiff in the case, to sue Character.AI, Google, Shazeer, and de Freitas on product liability grounds. Garcia and her lawyers argue that Character.AI is a product, and that it was rolled out recklessly to the public, teens included, despite known and possibly destructive risks.

In the eyes of the law, tech companies generally prefer their creations to be seen as services, like electricity or the internet, rather than as products, like cars or nonstick frying pans. Providers of services generally can’t be held accountable through product liability claims, including claims of negligence, but makers of products can.

In a statement, Tech Justice Law Project director and founder Meetali Jain, who’s co-counsel for Garcia alongside Social Media Victims Law Center founder Matt Bergman, celebrated the ruling as a win — not just for this particular case, but for tech policy advocates writ large.

“With today’s ruling, a federal judge recognizes a grieving mother’s right to access the courts to hold powerful tech companies — and their developers — accountable for marketing a defective product that led to her child’s death,” said Jain.

“This historic ruling not only allows Megan Garcia to seek the justice her family deserves,” Jain added, “but also sets a new precedent for legal accountability across the AI and tech ecosystem.”

Character.AI was founded by Shazeer and de Freitas in 2021; the duo had worked together on AI projects at Google, and left together to launch their own chatbot startup. Google provided Character.AI with its essential Cloud infrastructure, and in 2024 raised eyebrows when it paid Character.AI $2.7 billion to license the chatbot firm’s data — and bring its cofounders, as well as 30 other Character.AI staffers, into Google’s fold. Shazeer, in particular, now holds a hugely influential position at Google DeepMind, where he serves as a VP and co-lead for Google’s Gemini LLM.

Google did not respond to a request for comment at the time of publishing, but a spokesperson for the search giant told Reuters that Google and Character.AI are “entirely separate” and that Google “did not create, design, or manage” the Character.AI app “or any component part of it.”

In a statement, a spokesperson for Character.AI emphasized recent safety updates issued following the news of Garcia’s lawsuit, and said it “looked forward” to its continued defense:

It’s long been true that the law takes time to adapt to new technology, and AI is no different. In today’s order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage and we look forward to continuing to defend the merits of the case.

We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe. We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time-spent notifications, updated prominent disclaimers and more.

Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.

Any safety-focused changes, though, were made months after Setzer’s death and after the eventual filing of the lawsuit, and won’t bear on the court’s ultimate decision in the case.

Meanwhile, journalists and researchers continue to find holes in the chatbot site’s updated safety protocols. Weeks after news of the lawsuit was announced, for example, we continued to find chatbots expressly dedicated to self-harm, grooming and pedophilia, eating disorders, and mass violence. And a team of researchers, including psychologists at Stanford, recently found that using a Character.AI voice feature called “Character Calls” effectively nukes any semblance of guardrails — and determined that no kid under 18 should be using AI companions, including Character.AI.

More on Character.AI: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions
