Dozens of Organizations Push Back Against Bill That Would Ban All AI Regulation

Over 100 organizations have signed a letter pushing back against a sweeping bill that would ban all AI regulation for the next ten years.

No Rules, No Exceptions

The latest version of the Republicans’ Budget Reconciliation Bill — the “one big, beautiful bill,” as President Trump has called it — includes a clause that would ban all AI regulation in the US at the state level for a full decade. Over 100 organizations, CNN reports, are calling for lawmakers not to pass it.

According to CNN, 141 policy groups, academic institutions, unions, and other organizations have signed a letter demanding that legislators in Washington walk back the sweeping deregulatory provision, warning that the bill would let AI companies run wild without safeguards or accountability, regardless of any negative impact their technology might have on American citizens.

The letter warns that under the proposal, Americans would have no way to institute regulatory safeguards around and against AI systems as they “increasingly shape critical aspects of Americans’ lives,” including in areas like “hiring, housing, healthcare, policing, and financial services.”

There aren’t any exceptions outlined in the bill, which declares instead that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten year period beginning on the date of the enactment of this Act,” as 404 Media was first to flag last week.

“The AI preemption provision is a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives,” Emily Peterson-Cassin of the nonprofit Demand Progress, whose organization wrote the letter, told CNN.

Foreseeable Harm

In the letter, the groups emphasize that such a drastic moratorium on regulatory action would mean that even in cases where a company “deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making that bad tech would be unaccountable to lawmakers and the public.”

Transformational new technologies can come with unknown, chaotic, and sometimes quite destructive outcomes. And as the letter’s writers note, regulation can serve to fuel innovation rather than stifle it, no matter what Silicon Valley’s lobbying dollars insist.

“Protecting people from being harmed by new technologies,” reads the letter, “including by holding companies accountable when they cause harm, ultimately spurs innovation and adoption of new technologies.”

“We will only reap the benefits of AI,” it continues, “if people have a reason to trust it.”

More on the bill: New Law Would Ban All AI Regulation for a Decade

OpenAI’s Top Scientist Wanted to “Build a Bunker Before We Release AGI”

OpenAI’s former chief scientist Ilya Sutskever has long been preparing for AGI — and he discussed doomsday prep plans with coworkers.

Feel The AGI

OpenAI’s former chief scientist, Ilya Sutskever, has long been preparing for artificial general intelligence (AGI), an ill-defined industry term for the point at which human intellect is outpaced by algorithms — and he’s got some wild plans for when that day may come.

In interviews with The Atlantic‘s Karen Hao, whose forthcoming book chronicles the unsuccessful November 2023 ouster of CEO Sam Altman, people close to Sutskever said that he seemed mighty preoccupied with AGI.

According to a researcher who heard the since-resigned company cofounder hold forth about it during a summer 2023 meeting, an apocalyptic scenario seemed to be a foregone conclusion to Sutskever.

“Once we all get into the bunker…” the chief scientist began.

“I’m sorry,” the researcher interrupted, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever said, matter-of-factly. “Of course, it’s going to be optional whether you want to get into the bunker.”

The exchange highlights just how confident OpenAI’s leadership was, and remains, in the technology that it believes it’s building — even though others argue that we are nowhere near AGI and may never get there.

Rapturous

As theatrical as that exchange sounds, two other people present for the exchange confirmed that OpenAI’s resident AGI soothsayer — who, notably, claimed months before ChatGPT’s 2022 release that he believes some AI models are “slightly conscious” — did indeed mention a bunker.

“There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture,” the first researcher told Hao. “Literally, a rapture.”

As others who spoke to the author for her forthcoming book “Empire of AI” noted, Sutskever’s AGI obsession had taken on a novel tenor by summer 2023. Aside from his interest in building AGI, he had also become concerned about the way OpenAI was handling the technology it was gestating.

That concern ultimately led the mad scientist, alongside several other members of the company’s board, to oust CEO Sam Altman a few months later, a move that also precipitated Sutskever’s own departure.

Though Sutskever led the coup, his resolve, according to sources who spoke to The Atlantic, began to crack once he realized OpenAI’s rank-and-file were falling in line behind Altman. He eventually walked back his position that the CEO was unfit to lead in what seems to have been an effort to save his own skin; in the end, that effort proved fruitless.

Interestingly, Hao also learned that people inside OpenAI had a nickname for the failed coup d’etat: “The Blip.”

More on AGI: Sam Altman Says OpenAI Has Figured Out How to Build AGI

MIT Disavowed a Viral Paper Claiming That AI Leads to More Scientific Discoveries

The paper on AI and scientific discovery has now become a black eye on MIT's reputation.

No Provenance

The Massachusetts Institute of Technology (MIT) is distancing itself from a headline-making paper about AI’s purported ability to accelerate the speed of science.

The paper in question, “Artificial Intelligence, Scientific Discovery, and Product Innovation,” was published in December as a preprint by Aidan Toner-Rodgers, an MIT graduate student in economics. It quickly generated buzz, and outlets including The Wall Street Journal, Nature, and The Atlantic covered the paper’s (alleged) findings, which purported to demonstrate that the embrace of AI at a materials science lab led to a significant increase in workforce productivity and scientific discovery, albeit at the cost of workforce happiness.

Toner-Rodgers’ work even earned praise from top MIT economists David Autor and 2024 Nobel laureate Daron Acemoglu, the latter of whom called the paper “fantastic.”

But it seems that praise was premature, to put it mildly. In a press release on Friday, MIT conceded that following an internal investigation, it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.” MIT didn’t give a reason for its backpedaling, citing “student privacy laws and MIT policy,” but it’s a black eye on MIT nonetheless.

The university has also requested that the paper be removed from the preprint archive arXiv and withdrawn from consideration by the Quarterly Journal of Economics, where it’s currently under review.

The ordeal is “more than just embarrassing,” Autor told the WSJ in a new report, “it’s heartbreaking.”

David vs. MIT

According to the WSJ’s latest story, the course reversal kicked off in January, when an unnamed computer scientist “with experience in materials science” approached Autor and Acemoglu with questions about how the AI tech centered in the study actually worked, and “how a lab he wasn’t aware of had experienced gains in innovation.”

When Autor and Acemoglu were unable to get to the bottom of those questions on their own, they took their concerns to MIT’s higher-ups. Enter, months later: Friday’s press release, in which Autor and Acemoglu, in a joint statement, said they wanted to “set the record straight.”

That a paper evidently so flawed passed under so many well-educated eyes with little apparent pushback is, on the one hand, pretty shocking. Then again, as materials scientist Ben Shindel wrote in a blog post, its conclusion — that AI means more scientific productivity, but less joy — feels somewhat intuitive. And yet, according to the WSJ’s reporting, it wasn’t until closer inspection by someone with domain expertise, who could see through the paper’s optimistic veneer, that those seemingly intuitive threads unwound.

More on AI and the workforce: AI Is Helping Job Seekers Lie, Flood the Market, and Steal Jobs

World Leaders Shown AI Baby Versions of Themselves at European Summit

World leaders were shown AI-generated baby versions of themselves at a global summit.

Baby Erdoğan’s Mustache

It’s called diplomacy, guys.

This year’s European Political Community, an annual forum for European leaders founded in 2022 following the Russian invasion of Ukraine, kicked off on Friday in Tirana, Albania. Europe’s leaders were greeted with a ten-ish minute presentation that celebrated Europe’s commitment to sovereignty and shared triumphs over evil. There were flashing lights and dance performances, and a few different video sequences. And to close out the show, as Politico reports, the Albanian government landed on the obvious editorial choice: a montage of the summit’s leaders pictured as AI-generated babies, who each said “Welcome to Albania” in their country’s language.

It was perfect. Did baby-fied Recep Tayyip Erdoğan, Turkey’s authoritarian strongman, rock a tiny AI-generated mustache? He did indeed! Did French President Emmanuel Macron smack his gum in pleasant bemusement as he watched his AI baby self smile onscreen? You bet!

Our hats are off to Edi Rama, Albania’s recently re-elected prime minister. So far, between MAGAworld and its monarch embracing AI slop as its defining aesthetic, AI-generated misinformation causing chaos, and attempted AI mayors and political parties, this is easily the most compelling use of generative AI in politics we’ve seen.

Politicking

The camera televising the event repeatedly panned to the crowd, where the response from Europe’s most powerful was mixed. Some laughed, while others bristled; some mostly looked confused. Which makes sense, given that this is a serious conference where, per Politico, the majority of leaders are looking to push for harsher sanctions on Russia as its war on Ukraine rages on and tense talks between Moscow and Kyiv continue without a ceasefire.

It’s unclear how the AI baby bit fit into Albania’s message of a peaceful, unified Europe. Though the presentation did start with childlike drawings, the sounds of kids laughing, and a youthful voiceover, so maybe it was an attempt to bring the show full circle? Or maybe, considering the heavy subject matter and fast-heating global tension and uncertainty, Rama just wanted to break the ice.

Anyway. We’re sure nothing will humble you, a leader of a nation, like sitting in an auditorium and oscillating between unsure grimaces and giggling whilst staring down your AI-generated baby face.

More on AI and guys in Europe: The New Pope Is Deeply Skeptical of AI

AI Is Helping Job Seekers Lie, Flood the Market, and Steal Jobs

According to a recent survey, more than half of recent job applicants said they had used AI tools to write their resumes.

Oodles of Experience

The advent of generative AI has fundamentally altered the job application process. Recruiters and applicants alike are making heavy use of the tech, which has made an already soul-sucking and tedious process even worse.

And as TechRadar reports, applicants are going to extreme lengths to nail down a job and stand out in an extremely competitive, crowded market. According to a recent survey by insurer Hiscox, more than half of recent job applicants said they had used AI tools to write their resumes.

A whopping 37 percent admitted they didn’t bother correcting embellishments the AI chatbot made, like exaggerated experience and fabricated interests, and another 38 percent admitted to outright lying on their CVs.

The news highlights a worrying new normal, with applicants using AI to facilitate fabricating a “perfect candidate” to score a job interview.

“AI can help many candidates put their best foot forward… but it needs to be used carefully,” Hiscox chief underwriting officer Pete Treloar told TechRadar.

Perfect Candidate

Meanwhile, it’s not just job applicants using generative AI to automate the process: recruiters have been outsourcing job interviews to often-flawed AI avatars.

Earlier this week, Fortune reported how a former software engineer went from earning $150,000 in upstate New York to living out of a trailer after being replaced by AI. Of the ten interviews he scored after sending out 800 job applications, a handful were with AI bots.

In short, it’s a dynamic that’s unlikely to make applying for jobs any less grueling. Hiscox found that 41 percent of applicants said AI gives some candidates an unfair advantage, while 42 percent of respondents said the tech is misleading employers.

But now that the cat is out of the bag, it remains to be seen how the future of job applications will adapt to a world teeming with accessible generative AI tools.

It’s never been easier to lie on your resume, but anybody willing to do so will have to live with the consequences. Being caught could not only lead to immediate disqualification; it could also damage your professional reputation and, in a worst-case scenario, result in a lawsuit. Remember: just because everyone’s doing it doesn’t mean you won’t get busted for it — or worse.

More on lying AIs: Law Firms Caught and Punished for Passing Around “Bogus” AI Slop in Court

OnlyFans Model Shocked After Finding Her Pictures With AI-Swapped Faces on Reddit

An OnlyFans model was shocked to find that a scammer had stolen her content — and used it to flood Reddit with AI deepfakes.

Face Ripoff

An OnlyFans creator is speaking out after discovering that her photos were stolen by someone who used deepfake tech to give her a completely new face — and posted the deepfaked images all over Reddit.

As 25-year-old, UK-based OnlyFans creator Bunni told Mashable, image theft is a common occurrence in her field. Usually, though, catfishers would steal and share Bunni’s image without alterations.

In this case, the grift was sneakier. With the help of deepfake tools, a scammer crafted an entirely new persona named “Sofía,” an alleged 19-year-old in Spain who had Bunni’s body — but an AI-generated face.

It was “a completely different way of doing it that I’ve not had happen to me before,” Bunni, who posted a video about the theft on Instagram back in February, told Mashable. “It was just, like, really weird.”

It’s only the latest instance of a baffling trend, with “virtual influencers” pasting fake faces onto the bodies of real models and sex workers to sell bogus subscriptions and swindle netizens.

Head Swap

Using the fake Sofía persona, the scammer flooded forums across Reddit with fake images and color commentary. Sometimes, the posts were mundane; “Sofía” asked for outfit advice and, per Mashable, even shared photos of pets. But Sofía also posted images to r/PunkGirls, a pornographic subreddit.

Sofía never shared a link to another OnlyFans page, though Bunni suspects that the scammer was looking to chat with targets via direct messages, where they might have been passing around an OnlyFans link or requesting cash. And though Bunni was able to get the imposter kicked off of Reddit after reaching out directly to moderators, her story emphasizes how easy it is for catfishers to combine AI with stolen content to make and distribute convincing fakes.

“I can’t imagine I’m the first, and I’m definitely not the last, because this whole AI thing is kind of blowing out of proportion,” Bunni told Mashable. “So I can’t imagine it’s going to slow down.”

As Mashable notes, Bunni was something of a perfect target: she has fans, but she’s not famous enough to trigger immediate or widespread recognition. And for a creator like Bunni, pursuing legal action might not be a feasible or even worthwhile option. It’s expensive, and right now, the law itself is still catching up.

“I don’t feel like it’s really worth it,” Bunni told Mashable. “The amount you pay for legal action is just ridiculous, and you probably wouldn’t really get anywhere anyway, to be honest.”

Reddit, for its part, didn’t respond to Mashable’s request for comment.

More on deepfakes: Gross AI Apps Create Videos of People Kissing Without Their Consent

New Law Would Ban All AI Regulation for a Decade

Fresh Hell

Republican lawmakers slipped language into the Budget Reconciliation Bill this week that would ban state-level AI regulation for a decade, as 404 Media reports.

An updated version of the bill introduced last night by Congressman Brett Guthrie (R-KY), who chairs the House Committee on Energy and Commerce, includes a new and sweeping clause about AI advancement declaring that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten year period beginning on the date of the enactment of this Act.”

It’s a remarkably expansive provision that, as 404 notes, likely reflects how deeply ingrained Silicon Valley figures and influence have become in Washington and the White House. Tech CEOs have vied for President Donald Trump’s attention since he was inaugurated, and the American tech industry writ large has become a fierce and powerful lobbying force. The Trump administration is also stacked with AI-invested tech moguls like David Sacks, Marc Andreessen, and Elon Musk.

Meanwhile, the impacts of a regulation-free AI landscape are already being felt. Emotive, addictive AI companions have been rolled out explicitly to teenagers without evidence of safety, AI companies are missing their climate targets and spewing unchecked emissions into American neighborhoods, and nonconsensual deepfakes of women and girls are flooding social media.

No regulation will likely mean a lot more fresh hell where that came from — and little chance of stemming the tide.

Blank Checks

The updated bill also seeks to appropriate a staggering $500 million over ten years to fund efforts to infuse the federal government’s IT systems with “commercial” AI tech and unnamed “automation technologies.”

In other words, not only does the government want to completely stifle efforts to regulate a fast-developing technology, it also wants to integrate those unregulated technologies into the beating digital heart of the federal government.

The bill also comes after states including New York and California have worked to pass some limited AI regulations, as 404 notes. Were the bill to be signed into law, it would seemingly render those laws — which, for instance, ensure that employers review AI hiring tools for bias — unenforceable.

As it stands, the bill is in limbo. The proposal is massive, and includes drastic spending cuts to services like Medicaid and climate funds, slashes that Democrats largely oppose; Republican budget hawks, meanwhile, have raised concerns over the bill’s hefty price tag.

Whether it survives in its current form — its controversial AI provisions included — remains to be seen.

More on AI and regulation: Signs Grow That AI Is Starting to Seriously Bite Into the Job Market

The New Pope Is Deeply Skeptical of AI

Pope Leo XIV, the newly crowned first American pope, is keeping the social costs of rapid AI advancement front and center.

What’s in a Name

The newly anointed Pope Leo XIV — formerly Cardinal Robert Prevost of Chicago, Illinois — revealed this weekend that his name choice was inspired in part by AI, which he sees as a possible threat to human rights and justice.

As Business Insider reports, the Chicago Pope took time during his first Sunday address to share how AI shaped the symbolic task of choosing his papal name. The last Pope Leo, Leo XIII, headed the church amid the Industrial Revolution of the 19th century, an era defined by rapid technological advancement, rampant labor exploitation, severe wealth inequality, and public health crises.

During his papacy, Pope Leo XIII was deeply concerned with the collateral social damage wrought by unchecked technological innovation. Now, seeing similarities between the technological shifts of centuries past and those of today, Leo XIV is ready to pick up where his immediate predecessor, Pope Francis, left off, keeping the potential social costs of AI advancement front and center.

“Sensing myself called to continue in this same path, I chose to take the name Leo XIV,” the new Pope said during the landmark speech, according to BI. “There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical ‘Rerum Novarum’ addressed the social question in the context of the first great industrial revolution.”

“In our own day,” he continued, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor.”

Undignified AI

The new Pope on the block has a point. Though public-facing products, like AI-powered chatbots and image generators, appear in sleek interfaces on computer and phone screens, they come with some considerable costs behind the scenes.

Case in point: Elon Musk’s massive xAI data center in Memphis has been polluting a predominantly Black neighborhood with smoggy fumes, worsening air quality in an area that already tops lists for asthma-related emergency room visits.

Energy-hungry data centers are also leading to conflicts over water use, and have caused tech giants like Google to miss climate targets.

The public is also grappling with growing concerns over the psychological impacts of generative AI products like AI companions and assistants, particularly their impacts on kids and people with mental health concerns. The tech also continues to be a remarkably efficient and low-cost way to produce misinformation and deepfakes.

In short, much like his predecessor, Pope Leo XIV appears to be well aware of the many “challenges” we face in the age of AI.

More on AI: AI Brown-Nosing Is Becoming a Huge Problem for Society

Chinese Tech Giant Wants to Translate Your Cat’s Meows Using AI

Doodle Translate

Chinese tech company Baidu is working on an artificial intelligence-based translation system that could finally decode the greatest language mystery in the world: your cat’s meows.

As Reuters reports, the company filed a patent with the China National Intellectual Property Administration proposing an AI-powered system to translate animal sounds.

But whether it’ll ultimately be successful in deciphering your dog’s barks or your cat’s meows remains to be seen. Despite years of research, scientists are still far from deciphering animal communication.

Baidu is hoping that the system could bring humans and their pets closer together. According to the company’s patent document, it could allow for a “deeper emotional communication and understanding between animals and humans, improving the accuracy and efficiency of interspecies communication.”

Me Eat Squirrel

A spokesperson told Reuters that the system is “still in the research phase,” suggesting there’s still significant work to be done.

But Baidu has already made considerable headway in AI more broadly. The company, which also runs China’s largest search engine, has invested in the technology for years, releasing its latest AI model last month.

Baidu is only one of many companies working to decode animal communication using AI. For instance, California-based nonprofit Earth Species Project has been attempting to build an AI-based system that can translate birdsong, the whistles of dolphins, and the rumblings of elephants.

That nonprofit also recently announced that it had secured $17 million in grants to create NatureLM, a language model designed to identify the ways animals communicate with each other.

Researchers have also attempted to use machine learning to understand the vocalizations of crows and monkeys.

While a direct animal translation tool is more than likely still many years out, some scientists have claimed early successes. Last year, a team from the SETI (Search for Extraterrestrial Intelligence) Institute claimed to have “conversed” with a humpback whale in Alaska.

“The things we learn from communicating with whales could help us when it comes time to connect with aliens,” SETI researcher and University of California, Davis animal behaviorist Josie Hubbard told the New York Post at the time.

More on AI translation: World’s Largest Call Center Deploys AI to “Neutralize the Accent” of Indian Employees

OpenAI Forced to Abandon Plans to Become For-Profit

Thanks in part to erstwhile cofounder Elon Musk's lawsuit, OpenAI won't be going entirely for-profit anytime soon.

Money Matters

OpenAI may be raking in the investor dough, but thanks in part to erstwhile cofounder Elon Musk, the company won’t be going entirely for-profit anytime soon.

In a blog post this week, the Sam Altman-run company announced that it will remain under the control of its original nonprofit governing board as it restructures its for-profit arm.

“Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC),” the post reads, describing the new structure as a “purpose-driven company structure that has to consider the interests of both shareholders and the mission.”

Though Musk was not named, that allusion to “the mission” — the building of artificial general intelligence (AGI) that “benefits all of humanity” — hearkens back to the billionaire’s lawsuit alleging that OpenAI strayed from said purpose when it launched its for-profit arm in 2019, after his exit.

OpenAI claims in its post that it came to the decision to remain under the control of the nonprofit board — the same one that fired Altman in November 2023, only to reinstate him a few days later — “after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.”

Mission Impossible

Late last December, amid Musk’s ongoing suit that was initially filed in March 2024, the company announced its plans to restructure into a PBC that would help it “raise more capital than we’d imagined” while staying on-mission.

That plan, as CNN reports, raised alarm bells about how OpenAI would balance raising gobs of money with its beneficial AGI mission. It seems that this latest move is its response — though according to Musk’s attorney Marc Toberoff, the PBC announcement “changes nothing.”

“OpenAI’s announcement is a transparent dodge that fails to address the core issues: charitable assets have been and still will be transferred for the benefit of private persons,” Toberoff said in a statement provided to Bloomberg. “The founding mission remains betrayed.”

In a rebuttal to the same outlet, an OpenAI insider hit back at Musk and his “baseless lawsuit,” which “only proves that it was always a bad-faith attempt to slow us down.”

Accusations aside, this is still a pretty far cry from turning OpenAI into a bona fide for-profit venture — and regardless of what the company claims, Musk’s almost certainly jealousy-based lawsuit has played a role in making sure that doesn’t happen.

More on OpenAI moves: OpenAI Trying to Buy Chrome So It Can Ingest Your Entire Online Life to Train AI
