Dozens of Organizations Push Back Against Bill That Would Ban All AI Regulation

Over 100 organizations have signed a letter pushing back against a sweeping bill that would ban all AI regulation for the next ten years.

No Rules, No Exceptions

The latest version of the Republicans’ Budget Reconciliation Bill — the “one big, beautiful bill,” as President Trump has called it — includes a clause that would ban all AI regulation in the US at the state level for a full decade. Over 100 organizations, CNN reports, are calling for lawmakers not to pass it.

According to CNN, 141 policy groups, academic institutions, unions, and other organizations have signed a letter demanding that legislators in Washington walk back the sweeping deregulatory provision, warning that it would allow AI companies to run wild without safeguards or accountability — regardless of any negative impact their technology might have on American citizens.

The letter warns that under the proposal, Americans would have no way to institute regulatory safeguards around and against AI systems as they “increasingly shape critical aspects of Americans’ lives,” including in areas like “hiring, housing, healthcare, policing, and financial services.”

There aren’t any exceptions outlined in the bill, which declares instead that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten year period beginning on the date of the enactment of this Act,” as 404 Media was first to flag last week.

“The AI preemption provision is a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives,” Emily Peterson-Cassin of the nonprofit Demand Progress, whose organization wrote the letter, told CNN.

Foreseeable Harm

In the letter, the groups emphasize that such a drastic moratorium on regulatory action would mean that even in cases where a company “deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making that bad tech would be unaccountable to lawmakers and the public.”

Transformational new technologies can come with unknown, chaotic, and sometimes quite destructive consequences. And as the writers of the letter note, regulation can serve to fuel innovation, not stifle it by way of a thousand Silicon Valley lobbying-dollar-funded cuts.

“Protecting people from being harmed by new technologies,” reads the letter, “including by holding companies accountable when they cause harm, ultimately spurs innovation and adoption of new technologies.”

“We will only reap the benefits of AI,” it continues, “if people have a reason to trust it.”

More on the bill: New Law Would Ban All AI Regulation for a Decade


The Newest “Will Smith Eating Spaghetti” Video Includes AI-Generated Squelching and Chomping Sounds That Just Might Make You Sick

In a new "Will Smith eating spaghetti" AI clip, a far more recognizable Smith can be seen indulging in a tasty-looking plate of noodles.

Just over two years ago, we came across a deranged, AI-generated video of famed actor Will Smith indulging in a bowl of spaghetti.

The clip, which went viral at the time, was the stuff of nightmares, with the AI model morphing Smith’s facial features in obscene ways, clearly unable to determine where his body ended and a forkful of sauce-laden pasta began.

But the technology has improved dramatically since then. In a new rendition shared by AI content creator Javi Lopez, a far more recognizable Smith can be seen indulging in a tasty-looking plate of noodles.

Unfortunately, the clip — which was rendered using Google DeepMind’s just-debuted Veo 3 video generation model — includes AI-generated sound as well, exposing us to a horrid soundtrack of squelching and masticating, the equivalent of nails on a chalkboard for those suffering from misophonia.

“I don’t feel so good,” quipped tech YouTuber Marques “MKBHD” Brownlee.

Nonetheless, it’s an impressive tech demo, highlighting how models like Veo 3 are getting eerily close to being able to generate photorealistic video — including believable sound and dialogue.


Google unveiled its “state-of-the-art” Veo 3 model earlier this week at its Google I/O 2025 developer conference.

“For the first time, we’re emerging from the silent era of video generation,” said DeepMind CEO Demis Hassabis during the event.

Beyond generating photorealistic footage, the feature allows users to “suggest dialogue with a description of how you want it to sound,” according to Hassabis.

A video sequence opening Google’s I/O, which was generated with the tool, shows zoo animals taking over a Wild West town.

Getting access to the model doesn’t come cheap, with the feature currently locked behind Google’s $249.99-per-month AI Ultra plan.

Sample videos circulating on social media are strikingly difficult to differentiate from real life. And the jury’s still out on whether that’s a good or a bad thing. Critics have long rung the alarm bells over tools like Veo 3 putting human video editors out of a job or facilitating a flood of disinformation and propaganda on the internet.

More on AI: Star Wars’ Showcase of AI Special Effects Was a Complete Disaster


OpenAI’s Top Scientist Wanted to “Build a Bunker Before We Release AGI”

OpenAI's former chief scientist Ilya Sutskever has long been preparing for AGI — and he discussed with coworkers doomsday prep plans.

Feel The AGI

OpenAI’s former chief scientist, Ilya Sutskever, has long been preparing for artificial general intelligence (AGI), an ill-defined industry term for the point at which human intellect is outpaced by algorithms — and he’s got some wild plans for when that day may come.

In interviews with The Atlantic‘s Karen Hao, who is writing a book about the unsuccessful November 2023 ouster of CEO Sam Altman, people close to Sutskever said that he seemed mighty preoccupied with AGI.

According to a researcher who heard the since-resigned company cofounder wax poetic about it during a summer 2023 meeting, an apocalyptic scenario seemed to be a foregone conclusion to Sutskever.

“Once we all get into the bunker…” the chief scientist began.

“I’m sorry,” the researcher interrupted, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever said, matter-of-factly. “Of course, it’s going to be optional whether you want to get into the bunker.”

The exchange highlights just how confident OpenAI’s leadership was, and remains, in the technology that it believes it’s building — even though others argue that we are nowhere near AGI and may never get there.

Rapturous

As theatrical as that exchange sounds, two other people present for the exchange confirmed that OpenAI’s resident AGI soothsayer — who, notably, claimed months before ChatGPT’s 2022 release that he believes some AI models are “slightly conscious” — did indeed mention a bunker.

“There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture,” the first researcher told Hao. “Literally, a rapture.”

As others who spoke to the author for her forthcoming book “Empire of AI” noted, Sutskever’s AGI obsession had taken on a novel tenor by summer 2023. Aside from his interest in building AGI, he had also become concerned about the way OpenAI was handling the technology it was gestating.

That concern ultimately led the mad scientist, alongside several other members of the company's board, to oust CEO Sam Altman a few months later, a move that in turn led to his own departure.

Though Sutskever led the coup, his resolve, according to sources The Atlantic spoke to, began to crack once he realized OpenAI's rank-and-file were falling in line behind Altman. He eventually walked back his position that the CEO was unfit to lead, in what seems to have been an effort to save his own skin — an effort that, in the end, turned out to be fruitless.

Interestingly, Hao also learned that people inside OpenAI had a nickname for the failed coup d’etat: “The Blip.”

More on AGI: Sam Altman Says OpenAI Has Figured Out How to Build AGI


How Much Electricity It Actually Takes to Use AI May Surprise You

A new survey is shedding light on the staggering amount of energy used to power the AI boom and its fast-rising industry.

By now, most of us should be vaguely aware that artificial intelligence is hungry for power.

Even if you don’t know the exact numbers, the charge that “AI is bad for the environment” is well-documented, bubbling from sources ranging from mainstream press to pop-science YouTube channels to tech trade media.

Still, the AI industry as we know it today is young. Though startups and big tech firms have been plugging away on large language models (LLMs) since the 2010s, the release of consumer generative AI in late 2022 brought about a huge increase in AI adoption, leading to an unprecedented “AI boom.”

In under three years, AI has come to dominate global tech spending in ways researchers are just starting to quantify. In 2024, for example, AI companies nabbed 45 percent of all US venture capital tech investments, up from only nine percent in 2022. Medium-term, big-name consulting firms like McKinsey expect AI infrastructure spending to grow to $6.7 trillion by 2030; compare that to just $450 billion in 2022.

Given how fast that growth has come, research on AI's climate and environmental impacts can seem vague and scattered, as analysts race to establish concrete environmental trends amid the industry's extraordinary explosion.

A new survey by MIT Technology Review is trying to change that. The authors spoke to two dozen AI experts working to uncover the tech’s climate impact, combed “hundreds of pages” of data and reports, and probed the top developers of LLM tools in order to provide a “comprehensive look” at the industry’s impact.

“Ultimately, we found that the common understanding of AI’s energy consumption is full of holes,” the authors wrote. That led them to start small, looking at the energy use of a single LLM query.

Beginning with text-based LLMs, they found that model size directly predicted energy demand, since bigger LLMs use more chips — and therefore more energy — to process questions. While smaller models like Meta's Llama 3.1 8B used roughly 57 joules per response (or 114 joules once the authors factored in cooling and other overhead), larger models needed 3,353 joules (or 6,706), or in MIT Tech's point of reference, enough to run a microwave for about eight seconds.

Image-generating AI models, like Stable Diffusion 3 Medium, needed 1,141 joules (or 2,282) on average to spit out a standard 1024 x 1024 pixel image — the type that is rapidly strangling the internet. Doubling the quality of the image roughly doubles the energy use, to 4,402 joules, worth over five seconds of microwave warming time, though still less than the largest text model's response.

Video generation is where the sparks really start flying. The lowest-quality AI video generator they tested, a nine-month-old version of the open-source model CogVideoX, took an eye-watering 109,000 joules to spew out a low-quality, 8fps clip — "more like a GIF than a video," the authors noted.

Better models use a lot more. With a recent update, that same model takes 3.4 million joules to spit out a five-second, 16fps video, equivalent to running a microwave for over an hour.
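To put those figures in perspective, here's a minimal sketch in Python that converts the reported joules-per-task numbers into microwave-equivalent runtime. The roughly 800-watt microwave rating is our own assumption for illustration; the joule values are the ones cited above from the survey.

```python
# Minimal sketch: translate the reported energy-per-task figures into
# microwave-equivalent runtime. The 800 W rating is an assumed typical
# microwave power draw (1 watt = 1 joule per second); the joule values
# are the ones reported above from the MIT Technology Review survey.

MICROWAVE_WATTS = 800  # assumption for illustration

tasks_joules = {
    "Llama 3.1 8B text response (incl. overhead)": 114,
    "Large text model response (incl. overhead)": 6_706,
    "Standard 1024x1024 image (incl. overhead)": 2_282,
    "Five-second, 16fps video clip": 3_400_000,
}

for task, joules in tasks_joules.items():
    seconds = joules / MICROWAVE_WATTS
    print(f"{task}: {joules:,} J is roughly {seconds:,.1f} s of microwave time")
```

At that assumed wattage, the large text response works out to about eight seconds of microwave time and the five-second video to over an hour, in line with the comparisons above.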

Whether any of those numbers amount to a lot or a little is open to debate. Running the microwave for a few seconds isn't much, but if everybody starts doing so hundreds of times a day — or, in the case of video, for hours at a time — it'll make a huge impact on the world's power consumption. And of course, the AI industry is currently trending toward models that use more power, not less.

Zooming out, the MIT Tech survey also highlights some concerning trends.

One is the overall rise in power use correlating with the rise of AI. While data center power use remained mostly steady across the US between 2005 and 2017, it had doubled by 2023, our first full year with mass-market AI.

As of 2024, 4.4 percent of all energy consumed in the US went toward data centers. Meanwhile, data centers’ carbon intensity — the amount of iceberg-melting exhaust spewed relative to energy used — became 48 percent higher than the US average.

All that said, the MIT authors have a few caveats.

First, we can’t look under the hood at closed-source AI models like OpenAI’s ChatGPT, and most of the leading AI titans have declined to join in on good-faith climate mapping initiatives like AI Energy Score. Until that changes, any attempt to map such a company’s climate impact is a stab in the dark at best.

In addition, the survey’s writers note that data centers are not inherently bad for the environment. “If all data centers were hooked up to solar panels and ran only when the Sun was shining, the world would be talking a lot less about AI’s energy consumption,” they wrote. But unfortunately, “that’s not the case.”

In countries like the US, the energy grid used to power data centers is still heavily reliant on fossil fuels, and surging demand for immediate energy is only making that worse. For example, the authors point to Elon Musk's xAI data center outside of Memphis, which is using 35 methane gas generators to keep its chips humming rather than waiting for approval to draw from the civilian power grid.

Unless the industry is made to adopt strategies to mitigate AI’s climate impact — like those outlined in the Paris AI Action Declaration — this will just be the beginning of a devastating rise in climate-altering emissions.

More on AI: New Law Would Ban All AI Regulation for a Decade


It’s Still Ludicrously Easy to Jailbreak the Strongest AI Models, and the Companies Don’t Care

Incredibly easy AI jailbreak techniques still work on the industry's leading AI models, even months after they were discovered.

You wouldn’t use a chatbot for evil, would you? Of course not. But if you or some nefarious party wanted to force an AI model to start churning out a bunch of bad stuff it’s not supposed to, it’d be surprisingly easy to do so.

That’s according to a new paper from a team of computer scientists at Ben-Gurion University, who found that the AI industry’s leading chatbots are still extremely vulnerable to jailbreaking, or being tricked into giving harmful responses they’re designed not to — like telling you how to build chemical weapons, for one ominous example.

The key word there is "still," because this is a threat the AI industry has long known about. And yet, shockingly, the researchers found in their testing that a jailbreak technique discovered over seven months ago still works on many of these leading LLMs.

The risk is "immediate, tangible, and deeply concerning," they wrote in the report, which was recently spotlighted by The Guardian. That risk, they say, is deepened by the rising number of "dark LLMs" that are explicitly marketed as having little to no ethical guardrails to begin with.

“What was once restricted to state actors or organized crime groups may soon be in the hands of anyone with a laptop or even a mobile phone,” the authors warn.

The challenge of aligning AI models, or keeping them in line with human values, continues to loom over the industry. Even the most well-trained LLMs can behave chaotically, lying, making up facts, and generally saying what they're not supposed to. And the longer these models are out in the wild, the more they're exposed to attacks that try to incite this bad behavior.

Security researchers, for example, recently discovered a universal jailbreak technique that could bypass the safety guardrails of all the major LLMs, including OpenAI's GPT-4o, Google's Gemini 2.5, Microsoft's Copilot, and Anthropic's Claude 3.7. By using tricks like roleplaying as a fictional character, typing in leetspeak, and formatting prompts to mimic a "policy file" that AI developers give their AI models, the red teamers goaded the chatbots into freely giving detailed tips on incredibly dangerous activities, including how to enrich uranium and create anthrax.

Other research found that you could get an AI to ignore its guardrails simply by throwing in typos, random numbers, and capitalized letters into a prompt.

One big problem the report identifies is just how much of this risky knowledge is embedded in LLMs' vast troves of training data, suggesting that the AI industry isn't being diligent enough about what it uses to feed its creations.

“It was shocking to see what this system of knowledge consists of,” lead author Michael Fire, a researcher at Ben-Gurion University, told the Guardian.

“What sets this threat apart from previous technological risks is its unprecedented combination of accessibility, scalability and adaptability,” added his fellow author Lior Rokach.

Fire and Rokach say they contacted the developers of the implicated leading LLMs to warn them about the universal jailbreak. Their responses, however, were “underwhelming.” Some didn’t respond at all, the researchers reported, and others claimed that the jailbreaks fell outside the scope of their bug bounty programs.

In other words, the AI industry is seemingly throwing its hands up in the air.

“Organizations must treat LLMs like any other critical software component — one that requires rigorous security testing, continuous red teaming and contextual threat modelling,” Peter Garraghan, an AI security expert at Lancaster University, told the Guardian. “Real security demands not just responsible disclosure, but responsible design and deployment practices.”

More on AI: AI Chatbots Are Becoming Even Worse At Summarizing Data


Judge Slaps Down Attempt to Throw Out Lawsuit Claiming AI Caused a 14-Year-Old’s Suicide

Google and Character.AI tried to dismiss a lawsuit that claims chatbots caused a 14-year-old's suicide. The case is moving forward.

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

A judge in Florida just rejected a motion to dismiss a lawsuit alleging that the chatbot startup Character.AI — and its closely tied benefactor, Google — caused the death by suicide of a 14-year-old user, clearing the way for the first-of-its-kind lawsuit to move forward in court.

The lawsuit, filed in October, claims that recklessly released Character.AI chatbots sexually and emotionally abused a teenage user, Sewell Setzer III, resulting in obsessive use of the platform, mental and emotional suffering, and ultimately his suicide in February 2024.

In January, the defendants in the case — Character.AI, Google, and Character.AI cofounders Noam Shazeer and Daniel de Freitas — filed a motion to dismiss the case mainly on First Amendment grounds, arguing that AI-generated chatbot outputs qualify as speech, and that “allegedly harmful speech, including speech allegedly resulting in suicide,” is protected under the First Amendment.

But this argument didn’t quite cut it, the judge ruled, at least not in this early stage. In her opinion, presiding US district judge Anne Conway said the companies failed to sufficiently show that AI-generated outputs produced by large language models (LLMs) are more than simply words — as opposed to speech, which hinges on intent.

The defendants “fail to articulate,” Conway wrote in her ruling, “why words strung together by an LLM are speech.”

The motion to dismiss did find some success, with Conway dismissing specific claims regarding the alleged “intentional infliction of emotional distress,” or IIED. (It’s difficult to prove IIED when the person who allegedly suffered it, in this case Setzer, is no longer alive.)

Still, the ruling is a blow to the high-powered Silicon Valley defendants who had sought to have the suit tossed out entirely.

Significantly, Conway’s opinion allows Megan Garcia, Setzer’s mother and the plaintiff in the case, to sue Character.AI, Google, Shazeer, and de Freitas on product liability grounds. Garcia and her lawyers argue that Character.AI is a product, and that it was rolled out recklessly to the public, teens included, despite known and possibly destructive risks.

In the eyes of the law, tech companies generally prefer their creations to be seen as services, like electricity or the internet, rather than products, like cars or nonstick frying pans. Services aren't subject to product liability claims, including claims of negligence, but products are.

In a statement, Tech Justice Law Project director and founder Meetali Jain, who’s co-counsel for Garcia alongside Social Media Victims Law Center founder Matt Bergman, celebrated the ruling as a win — not just for this particular case, but for tech policy advocates writ large.

“With today’s ruling, a federal judge recognizes a grieving mother’s right to access the courts to hold powerful tech companies — and their developers — accountable for marketing a defective product that led to her child’s death,” said Jain.

“This historic ruling not only allows Megan Garcia to seek the justice her family deserves,” Jain added, “but also sets a new precedent for legal accountability across the AI and tech ecosystem.”

Character.AI was founded by Shazeer and de Freitas in 2021; the duo had worked together on AI projects at Google, and left together to launch their own chatbot startup. Google provided Character.AI with its essential Cloud infrastructure, and in 2024 raised eyebrows when it paid Character.AI $2.7 billion to license the chatbot firm’s data — and bring its cofounders, as well as 30 other Character.AI staffers, into Google’s fold. Shazeer, in particular, now holds a hugely influential position at Google DeepMind, where he serves as a VP and co-lead for Google’s Gemini LLM.

Google did not respond to a request for comment at the time of publishing, but a spokesperson for the search giant told Reuters that Google and Character.AI are “entirely separate” and that Google “did not create, design, or manage” the Character.AI app “or any component part of it.”

In a statement, a spokesperson for Character.AI emphasized recent safety updates issued following the news of Garcia’s lawsuit, and said it “looked forward” to its continued defense:

It’s long been true that the law takes time to adapt to new technology, and AI is no different. In today’s order, the court made clear that it was not ready to rule on all of Character.AI ‘s arguments at this stage and we look forward to continuing to defend the merits of the case.

We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe. We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time spent notification, updated prominent disclaimers and more.

Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.

Any safety-focused changes, though, were made months after Setzer’s death and after the eventual filing of the lawsuit, and can’t apply to the court’s ultimate decision in the case.

Meanwhile, journalists and researchers continue to find holes in the chatbot site's updated safety protocols. Weeks after news of the lawsuit was announced, for example, we continued to find chatbots expressly dedicated to self-harm, grooming and pedophilia, eating disorders, and mass violence. And a team of researchers, including psychologists at Stanford, recently found that using a Character.AI voice feature called "Character Calls" effectively nukes any semblance of guardrails — and determined that no kid under 18 should be using AI companions, including Character.AI.

More on Character.AI: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions


A Billion Dollar AI Startup Just Collapsed Spectacularly

Facing mounting debts, angry investors and very little revenue, the once billion-dollar Builder.ai has filed for bankruptcy.

As the artificial intelligence industry struggles with ever-rising costs — not to mention a steady uptick in hallucinations — investors are getting impatient.

One investment firm went as far as seizing $37 million from accounts owned by Builder.ai, a UK-based AI startup meant to make developing apps “as easy as ordering a pizza.” That left the company with just $5 million, according to Bloomberg, prompting its senior lenders to place it into default.

With very little cash left to keep the ship afloat, CEO Manpreet Ratia closed the startup’s doors and filed for bankruptcy.

Builder.ai was previously one of the most well-funded tech startups in the game, with over $450 million in backing from sources as big as tech giant Microsoft, Japanese investment firm SoftBank, and the Qatari government’s sovereign wealth fund. That gave it a valuation worth over $1 billion, drawing comparisons to Mark Zuckerberg’s Meta.

Ratia told the Financial Times the startup was “unable to recover from historic challenges and past decisions that placed significant strain on its financial position,” adding that he had been running the business with “zero dollars” in its US and UK accounts.

The CEO took over for Builder.ai's founder and "chief wizard" Sachin Dev Duggal in March, after the latter saddled the business with hundreds of millions of dollars in debt while burning through its dwindling cash reserves, according to FT.

Duggal was likewise embroiled in a high-stakes legal probe by authorities in India, who named him a suspect in an alleged money laundering case. For his part, Duggal denied the accusations, saying he was simply a witness, though FT has also reported that Duggal relied heavily on the services of an auditor with whom he has close personal ties.

It's not known what, exactly, pushed the first domino. Viola Credit, the company that seized Builder.ai's coffers, has yet to give an explanation, though we can probably guess they saw the writing on the wall and simply hoped to limit their losses.

It’s a big moment for the AI industry, as the pressure grows for AI companies to actually come out with a usable — not to mention sustainable — product. Though AI companies accounted for 40 percent of the money raised by US startups last year, the vast majority of them have yet to turn a profit.

Many AI startups struggle to find any consistent revenue stream at all beyond tech-crazed venture capitalists, and a not insignificant number have been caught misleading investors about their AI’s capabilities to keep the cash flowing.

Case in point, after Ratia took the helm back in March, Builder.ai lowered its revenue estimates for the last half of 2024 by 25 percent — a major blow for the much-hyped company. The startup was likewise caught trying to pass off human-built software as AI back in 2019.

As auditors and journalists sift through the rubble to find out what went wrong, now makes as good a time as any to take a temperature check on unchecked AI hype.

More on AI startups: Company Regrets Replacing All Those Pesky Human Workers With AI, Just Wants Its Humans Back


Journalists at Chicago Newspaper “Deeply Disturbed” That “Disaster” AI Slop Was Printed Alongside Their Real Work

Journalists at The Chicago Sun-Times are speaking out following the paper's publishing of AI-generated misinformation.

Writers at The Chicago Sun-Times, a daily newspaper owned by Chicago Public Media, are speaking out following the paper’s publishing of AI-generated misinformation, urging that the “disaster” content threatens the paper’s reputation and hard-earned reader trust.

The Sun-Times came under fire this week after readers called attention to a "summer reading list" published in the paper's weekend edition that recommended titles attributed to real, well-known authors, ten out of 15 of which turned out not to exist at all. When 404 Media got in touch with the bylined author, he confirmed he'd used AI to drum up the list.

But the writer said he hadn't double-checked the accuracy of the AI-generated reading list. The list was just one small piece of a 64-page "Heat Index" guide to summer, which, as the Sun-Times noted in its response to Futurism and others, had been provided by a third party — not by the Sun-Times' own newsroom or other staff. (Other sections within the "best of summer" feature, The Verge found, included similar errors and fabricated attributions that hinted at AI use.)

Shortly thereafter, 404 Media confirmed through the Sun-Times that the content was provided by King Features, a subsidiary of the media giant Hearst, and wasn’t reviewed by the Sun-Times before publishing.

“Historically, we don’t have editorial review from those mainly because it’s coming from a newspaper publisher, so we falsely made the assumption there would be an editorial process for this,” Victor Lim, a spokesperson for Chicago Public Media, told 404 Media. “We are updating our policy to require internal editorial oversight over content like this.”

Lim added that Chicago Public Media is “reviewing” its relationship with Hearst, which owns dozens of American newspapers and magazines. The Sun-Times has since posted a lengthy response online apologizing for the AI-spun misinformation making its way to print, while promising to change its editorial policies to protect against such gaffes in the future.

The human journalists at the paper have responded, too.

In a statement provided to media outlets, including Futurism, the paper’s union, the Chicago Sun-Times Guild, issued a forceful statement yesterday admonishing the publishing of the content. It emphasized that the 60-plus page section wasn’t the product of its newsroom, and said it was “deeply disturbed” to find undisclosed AI-generated content “printed alongside” the work of the paper’s journalists.

The Guild’s statement reads in full:

The Sun-Times Guild is aware of the third-party “summer guide” content in the Sunday, May 18 edition of the Chicago Sun-Times newspaper. This was a syndicated section produced externally without the knowledge of the members of our newsroom.

We take great pride in the union-produced journalism that goes into the respected pages of our newspaper and on our website. We’re deeply disturbed that AI-generated content was printed alongside our work. The fact that it was sixty-plus pages of this “content” is very concerning — primarily for our relationship with our audience but also for our union’s jurisdiction.

Our members go to great lengths to build trust with our sources and communities and are horrified by this slop syndication. Our readers signed up for work that has been vigorously reported and fact-checked, and we hate the idea that our own paper could spread computer- or third-party-generated misinformation. We call on Chicago Public Media management to do everything it can to prevent repeating this disaster in the future.

They’re right that reader trust is fundamental to the work of journalism, and it’s an easy thing to lose. Other AI scandals have gone hand-in-hand with reputational damage, as in the cases of CNET and Sports Illustrated, and we’ve seen journalists and their unions from around the country issue similar statements following instances of controversial AI use by publishers.

This is also the latest instance of third-party media companies distributing AI content to legitimate publishers, in many cases without the direct knowledge of those publishers. As a 2024 investigation by Futurism found, a third-party media company called AdVon Commerce used a proprietary AI tool to create articles for dozens of publishers including Sports Illustrated and The Miami Herald; that content was published under the bylines of fake writers with AI-generated headshots and phony bios, manufacturing an air of faux legitimacy. Some publishers, including the Miami Herald and other local newspapers belonging to the McClatchy publishing network, scrubbed their sites of the content following our investigation, saying they were unaware of AI use.

Here, it seems the editorial process was so lacking that AI-generated errors made their way through not just one, but two reputable American publishers before winding up in the Sun-Times’ printed edition. (The freelance writer Joshua Friedman confirmed on Bluesky that the error-riddled “Heat Index” guide was also published in The Philadelphia Inquirer.) Which, as the paper’s union emphasizes in their statement, meant it was published alongside the journalism that human media workers stake their careers on.

More on AI and journalism: Quartz Fires All Writers After Move to AI Slop


Codex, OpenAI’s New Coding Agent, Wants to Be a World-Killer

OpenAI is peddling what it calls a "cloud-based software engineering agent," but fails to explain where it's getting the data to train it.

Though artificial intelligence is taking the world by storm, it's still pretty bad at tasks demanding a high degree of flexibility, like writing computer code.

Earlier this year, ChatGPT maker OpenAI published a white paper taking AI to task for its lackluster performance in a coding scrum. Among other things, it found that even the most advanced AI models are “still unable to solve the majority” of coding tasks.

Later in an interview, OpenAI CEO Sam Altman said that these models are “on the precipice of being incredible at software engineering,” adding that “software engineering by the end of 2025 looks very different than software engineering at the beginning of 2025.”

It was a bold prediction without much substance to back it — if anything, generative AI like the kind Altman peddles has only gotten worse at coding as hallucination rates increase with each new iteration.

Now we know what he was playing at.

Early on Friday, OpenAI revealed a preview of Codex, the company’s stab at a specialty coding “agent” — a fluffy industry term that seems to change definitions depending on which company is trying to sell one to you.

“Codex is a cloud-based software engineering agent that can work on many tasks in parallel,” the company’s research preview reads.

The new tool will seemingly help software engineers by writing new features, debugging existing code, and answering questions about source code, among other tasks.

Unlike ChatGPT's everything-in-a-box model, which is geared toward the mass market, Codex has been trained to "generate code that closely mirrors human style and PR preferences." That's a charitable way to say "steal other people's code" — an AI training tactic OpenAI has been sued for in the not-too-distant past, when it helped Microsoft's Copilot go to town on open-source and copyrighted code shared on GitHub.

Thanks in large part to a technicality, OpenAI, GitHub, and Microsoft came out of that legal scuffle pretty much unscathed, giving OpenAI some convenient legal armor should it choose to go it alone with its own in-house model trained on GitHub code.

In the Codex release, OpenAI claims its coding agent operates entirely in the cloud, cut off from the internet, meaning it can’t scour the web for data like ChatGPT. Instead, OpenAI “limits the agent’s interaction solely to the code explicitly provided via GitHub repositories and pre-installed dependencies configured by the user via a setup script.”

Still, the data used to train Codex had to come from somewhere, and judging by the rash of copyright lawsuits that seem to plague the AI industry, it’s only a matter of time before we find out where.

More on OpenAI: ChatGPT Users Are Developing Bizarre Delusions


AI Chatbots Are Putting Clueless Hikers in Danger, Search and Rescue Groups Warn

Hikers are ending up in need of rescue because they're following the questionable recommendations of an AI chatbot.

Two hikers trying to tackle Unnecessary Mountain near Vancouver, British Columbia, had to call in a rescue team after they stumbled into snow. The pair were wearing only flat-soled sneakers, unaware that the higher elevations of a mountain range some 15 degrees of latitude south of the Arctic Circle might still be snowy in the spring.

“We ended up going up there with boots for them,” Brent Calkin, leader of the Lions Bay Search and Rescue team, told the Vancouver Sun. “We asked them their boot size and brought up boots and ski poles.”

It turns out that to plan their ill-fated expedition, the hikers heedlessly followed the advice given to them by Google Maps and the AI chatbot ChatGPT.

Now, Calkin and his rescue team are warning that maybe you shouldn’t rely on dodgy apps and AI chatbots — a piece of technology known for lying and being wrong all the time — to plan a grueling excursion through the wilderness.

“With the amount of information available online, it’s really easy for people to get in way over their heads, very quickly,” Calkin told the Vancouver Sun.

Across the pond, a recent report from Mountain Rescue England and Wales blamed social media and bad navigation apps for a historic surge in rescue teams being called out, the newspaper noted.

Stephen Hui, author of the book “105 Hikes,” echoed that warning and cautioned that getting reliable information is one of the biggest challenges presented by AI chatbots and apps. With AI in particular, Hui told the Vancouver Sun, it’s not always easy to tell if it’s giving you outdated information from an obscure source or if it’s pulling from a reliable one.

From his testing of ChatGPT, Hui wasn’t too impressed. Sure, it can give you “decent directions” on the popular trails, he said, but it struggles with the obscure ones.

Most of all, AI chatbots struggle with giving you relevant real-time information.

“Time of year is a big deal in [British Columbia],” Hui told the Vancouver Sun. “The most sought-after view is the mountain top, but that’s really only accessible to hikers from July to October. In winter, people may still be seeking those views and not realize that there’s going to be snow.”

When Calkin tested ChatGPT, he found that a “good input” made a big difference in terms of the quality of the answers he got. Of course, the type of person asking a chatbot for hiking advice probably won’t know the right questions to ask.

Instead of an AI chatbot, Calkin suggested, you might try asking a human being with experience in the area you're looking at, someone you can find on indispensable founts of wisdom like Reddit forums and Facebook groups.

“Someone might tell you there’s a storm coming in this week,” Calkin told the Vancouver Sun. “Or I was just up there Wednesday and it looks good. Or you’re out of your mind, don’t take your six-year-old on that trail.”

More on AI: Elon Musk’s AI Just Went There
