Dozens of Organizations Push Back Against Bill That Would Ban All AI Regulation

Over 100 organizations have signed a letter pushing back against a sweeping bill that would ban all AI regulation for the next ten years.

No Rules, No Exceptions

The latest version of the Republicans’ Budget Reconciliation Bill — the “one big, beautiful bill,” as President Trump has called it — includes a clause that would ban all AI regulation in the US at the state level for a full decade. Over 100 organizations, CNN reports, are calling for lawmakers not to pass it.

According to CNN, 141 policy groups, academic institutions, unions, and other organizations have signed a letter demanding that legislators in Washington walk back the sweeping deregulatory provision, arguing that it would allow AI companies to run wild without safeguards or accountability — regardless of any negative impact their technology might have on American citizens.

The letter warns that under the proposal, Americans would have no way to institute regulatory safeguards around and against AI systems as they “increasingly shape critical aspects of Americans’ lives,” including in areas like “hiring, housing, healthcare, policing, and financial services.”

There aren’t any exceptions outlined in the bill, which declares instead that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten year period beginning on the date of the enactment of this Act,” as 404 Media was first to flag last week.

“The AI preemption provision is a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives,” Emily Peterson-Cassin of the nonprofit Demand Progress, whose organization wrote the letter, told CNN.

Foreseeable Harm

In the letter, the groups emphasize that such a drastic moratorium on regulatory action would mean that even in cases where a company “deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making that bad tech would be unaccountable to lawmakers and the public.”

Transformational new technologies can be riddled with unknown, chaotic, and sometimes quite destructive outcomes. And as the letter’s authors note, regulation can serve to fuel innovation rather than stifle it by way of a thousand Silicon Valley lobbying-dollar-funded cuts.

“Protecting people from being harmed by new technologies,” reads the letter, “including by holding companies accountable when they cause harm, ultimately spurs innovation and adoption of new technologies.”

“We will only reap the benefits of AI,” it continues, “if people have a reason to trust it.”

More on the bill: New Law Would Ban All AI Regulation for a Decade


Judge Slaps Down Attempt to Throw Out Lawsuit Claiming AI Caused a 14-Year-Old’s Suicide

Google and Character.AI tried to dismiss a lawsuit that claims chatbots caused a 14-year-old's suicide. The case is moving forward.

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

A judge in Florida just rejected a motion to dismiss a lawsuit alleging that the chatbot startup Character.AI — and its closely tied benefactor, Google — caused the death by suicide of a 14-year-old user, clearing the way for the first-of-its-kind lawsuit to move forward in court.

The lawsuit, filed in October, claims that recklessly released Character.AI chatbots sexually and emotionally abused a teenage user, Sewell Setzer III, resulting in obsessive use of the platform, mental and emotional suffering, and ultimately his suicide in February 2024.

In January, the defendants in the case — Character.AI, Google, and Character.AI cofounders Noam Shazeer and Daniel de Freitas — filed a motion to dismiss the case mainly on First Amendment grounds, arguing that AI-generated chatbot outputs qualify as speech, and that “allegedly harmful speech, including speech allegedly resulting in suicide,” is protected under the First Amendment.

But this argument didn’t quite cut it, the judge ruled, at least not at this early stage. In her opinion, presiding US district judge Anne Conway said the companies failed to sufficiently show that AI-generated outputs produced by large language models (LLMs) are more than simply words strung together — that is, that they amount to speech, which hinges on intent.

The defendants “fail to articulate,” Conway wrote in her ruling, “why words strung together by an LLM are speech.”

The motion to dismiss did find some success, with Conway dismissing specific claims regarding the alleged “intentional infliction of emotional distress,” or IIED. (It’s difficult to prove IIED when the person who allegedly suffered it, in this case Setzer, is no longer alive.)

Still, the ruling is a blow to the high-powered Silicon Valley defendants who had sought to have the suit tossed out entirely.

Significantly, Conway’s opinion allows Megan Garcia, Setzer’s mother and the plaintiff in the case, to sue Character.AI, Google, Shazeer, and de Freitas on product liability grounds. Garcia and her lawyers argue that Character.AI is a product, and that it was rolled out recklessly to the public, teens included, despite known and possibly destructive risks.

In the eyes of the law, tech companies generally prefer to see their creations as services, like electricity or the internet, rather than products, like cars or nonstick frying pans. Services generally can’t be targeted with product liability claims, including claims of negligence, but products can.

In a statement, Tech Justice Law Project director and founder Meetali Jain, who’s co-counsel for Garcia alongside Social Media Victims Law Center founder Matt Bergman, celebrated the ruling as a win — not just for this particular case, but for tech policy advocates writ large.

“With today’s ruling, a federal judge recognizes a grieving mother’s right to access the courts to hold powerful tech companies — and their developers — accountable for marketing a defective product that led to her child’s death,” said Jain.

“This historic ruling not only allows Megan Garcia to seek the justice her family deserves,” Jain added, “but also sets a new precedent for legal accountability across the AI and tech ecosystem.”

Character.AI was founded by Shazeer and de Freitas in 2021; the duo had worked together on AI projects at Google, and left together to launch their own chatbot startup. Google provided Character.AI with its essential Cloud infrastructure, and in 2024 raised eyebrows when it paid Character.AI $2.7 billion to license the chatbot firm’s data — and bring its cofounders, as well as 30 other Character.AI staffers, into Google’s fold. Shazeer, in particular, now holds a hugely influential position at Google DeepMind, where he serves as a VP and co-lead for Google’s Gemini LLM.

Google did not respond to a request for comment at the time of publishing, but a spokesperson for the search giant told Reuters that Google and Character.AI are “entirely separate” and that Google “did not create, design, or manage” the Character.AI app “or any component part of it.”

In a statement, a spokesperson for Character.AI emphasized recent safety updates issued following the news of Garcia’s lawsuit, and said it “looked forward” to its continued defense:

It’s long been true that the law takes time to adapt to new technology, and AI is no different. In today’s order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage and we look forward to continuing to defend the merits of the case.

We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe. We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time spent notification, updated prominent disclaimers and more.

Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.

Any safety-focused changes, though, were made months after Setzer’s death and after the eventual filing of the lawsuit, and won’t bear on the court’s ultimate decision in the case.

Meanwhile, journalists and researchers continue to find holes in the chatbot site’s updated safety protocols. Weeks after news of the lawsuit was announced, for example, we continued to find chatbots expressly dedicated to self-harm, grooming and pedophilia, eating disorders, and mass violence. And a team of researchers, including psychologists at Stanford, recently found that using a Character.AI voice feature called “Character Calls” effectively nukes any semblance of guardrails — and determined that no kid under 18 should be using AI companions, including Character.AI.

More on Character.AI: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions


Journalists at Chicago Newspaper “Deeply Disturbed” That “Disaster” AI Slop Was Printed Alongside Their Real Work

Journalists at The Chicago Sun-Times are speaking out following the paper's publishing of AI-generated misinformation.

Writers at The Chicago Sun-Times, a daily newspaper owned by Chicago Public Media, are speaking out following the paper’s publishing of AI-generated misinformation, warning that the “disaster” content threatens the paper’s reputation and hard-earned reader trust.

The Sun-Times came under fire this week after readers called attention to a “summer reading list,” published in the paper’s weekend edition, that recommended books that turned out not to exist. The books were all attributed to real, well-known authors, but ten of the 15 listed titles were fabricated. When 404 Media got in touch with the bylined author, he confirmed he’d used AI to drum up the list.

But the writer said he hadn’t double-checked the accuracy of the AI-generated reading list. The list was just one small piece of a 64-page “Heat Index” guide to summer, which, as the Sun-Times noted in its response to Futurism and others, had been provided by a third party — not by the Sun-Times’ own newsroom or other staff. (Other sections within the “best of summer” feature, The Verge found, contained similar errors and fabricated attributions that hinted at AI use.)

Shortly thereafter, 404 Media confirmed through the Sun-Times that the content was provided by King Features, a subsidiary of the media giant Hearst, and wasn’t reviewed by the Sun-Times before publishing.

“Historically, we don’t have editorial review from those mainly because it’s coming from a newspaper publisher, so we falsely made the assumption there would be an editorial process for this,” Victor Lim, a spokesperson for Chicago Public Media, told 404 Media. “We are updating our policy to require internal editorial oversight over content like this.”

Lim added that Chicago Public Media is “reviewing” its relationship with Hearst, which owns dozens of American newspapers and magazines. The Sun-Times has since posted a lengthy response online apologizing for the AI-spun misinformation making its way to print, while promising to change its editorial policies to protect against such gaffes in the future.

The human journalists at the paper have responded, too.

In a statement provided to media outlets, including Futurism, the paper’s union, the Chicago Sun-Times Guild, forcefully admonished the publication of the content yesterday. It emphasized that the 60-plus-page section wasn’t the product of its newsroom, and said it was “deeply disturbed” to find undisclosed AI-generated content “printed alongside” the work of the paper’s journalists.

The Guild’s statement reads in full:

The Sun-Times Guild is aware of the third-party “summer guide” content in the Sunday, May 18 edition of the Chicago Sun-Times newspaper. This was a syndicated section produced externally without the knowledge of the members of our newsroom.

We take great pride in the union-produced journalism that goes into the respected pages of our newspaper and on our website. We’re deeply disturbed that AI-generated content was printed alongside our work. The fact that it was sixty-plus pages of this “content” is very concerning — primarily for our relationship with our audience but also for our union’s jurisdiction.

Our members go to great lengths to build trust with our sources and communities and are horrified by this slop syndication. Our readers signed up for work that has been vigorously reported and fact-checked, and we hate the idea that our own paper could spread computer- or third-party-generated misinformation. We call on Chicago Public Media management to do everything it can to prevent repeating this disaster in the future.

They’re right that reader trust is fundamental to the work of journalism, and it’s an easy thing to lose. Other AI scandals have gone hand-in-hand with reputational damage, as in the cases of CNET and Sports Illustrated, and we’ve seen journalists and their unions from around the country issue similar statements following instances of controversial AI use by publishers.

This is also the latest instance of third-party media companies distributing AI content to legitimate publishers, in many cases without the direct knowledge of those publishers. As a 2024 investigation by Futurism found, a third-party media company called AdVon Commerce used a proprietary AI tool to create articles for dozens of publishers including Sports Illustrated and The Miami Herald; that content was published under the bylines of fake writers with AI-generated headshots and phony bios, manufacturing an air of faux legitimacy. Some publishers, including the Miami Herald and other local newspapers belonging to the McClatchy publishing network, scrubbed their sites of the content following our investigation, saying they were unaware of AI use.

Here, it seems the editorial process was so lacking that AI-generated errors made their way through not just one, but two reputable American publishers before winding up in the Sun-Times’ printed edition. (The freelance writer Joshua Friedman confirmed on Bluesky that the error-riddled “Heat Index” guide was also published in The Philadelphia Inquirer.) Which, as the paper’s union emphasizes in their statement, meant it was published alongside the journalism that human media workers stake their careers on.

More on AI and journalism: Quartz Fires All Writers After Move to AI Slop


Chicago Newspaper Caught Publishing a “Summer Reads” Guide Full of AI Slop


The Chicago Sun-Times, a daily non-profit newspaper owned by Chicago Public Media, published a “summer reading list” featuring wholly fabricated books — the result of running unverified AI slop in its pages.

An image of a “Summer reading list for 2025” was first shared to Instagram by a book podcaster who goes by Tina Books and was circulated on Bluesky by the novelist Rachael King. The newspaper’s title and the date of the page’s publication are visible in the page’s header.

The page was included in a 64-page “Best of Summer” feature, and as the author, Marco Buscaglia, told 404 Media, it was generated using AI.

“I do use AI for background at times but always check out the material first,” Buscaglia told 404 Media. “This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses.”

“On me 100 percent and I’m completely embarrassed,” he added.

At first glance, the list is unassuming.

“Whether you’re lounging by the pool, relaxing on sandy shores or enjoying the longer daylight hours in your favorite reading spot,” reads the list’s introduction, “these 15 titles — new and old — promise to deliver the perfect summer escape.”

The book titles themselves are unassuming, too. The newspaper recommends titles like the ethereal-sounding “Tidewater Dreams,” which it says was written by the Chilean-American novelist Isabel Allende; “The Last Algorithm,” purported to be a new sci-fi thriller by Andy Weir; and “The Collector’s Piece,” said to be written by the writer Taylor Jenkins Reid about a “reclusive art collector and the journalist determined to uncover the truth behind his most controversial acquisition.”

But as we independently confirmed, though these authors are real and well-known, these books are entirely fake — as are several others listed on the page. Indeed, the first ten of the fifteen titles on the Sun-Times’ list either don’t exist at all, or are real titles that weren’t written by the authors the paper attributes them to.

Fabrications like made-up citations are commonplace in AI-generated content, and a known risk of using generative AI tools like ChatGPT.

We reached out to the Sun-Times and its owner, Chicago Public Media, which notably also owns the beloved National Public Radio station WBEZ Chicago. In an email, a spokesperson emphasized that the content wasn’t created or approved by the Sun-Times newsroom and that the paper was actively investigating.

“We are looking into how this made it into print as we speak,” read the email. “This is licensed content that was not created by, or approved by, the Sun-Times newsroom, but it is unacceptable for any content we provide to our readers to be inaccurate. We value our readers’ trust in our reporting and take this very seriously. More info will be provided soon as we investigate.”

This was echoed by Buscaglia, who told 404 Media that the content was created to be part of a “promotional special section” not specifically targeted to Chicago.

“It’s supposed to be generic and national,” Buscaglia told 404 Media. “We never get a list of where things ran.”

This wouldn’t be the first time AI has been used to create third-party content and published without AI disclosures by journalistic institutions, as Futurism’s investigation last year into AdVon Commerce revealed.

Readers are understandably upset and demanding answers.

“How did the editors at the Sun-Times not catch this? Do they use AI consistently in their work?” reads a Reddit post to r/Chicago about the scandal.  “As a subscriber, I am livid!”

“What is the point of subscribing to a hard copy paper,” the poster continued, “if they are just going to include AI slop too!?”

“I just feel an overwhelming sense of sadness this morning over this?” University of Minnesota Press editorial director Jason Weidemann wrote in a Bluesky post. “There are thousands of struggling writers out there who could write a brilliant summer reads feature and should be paid to do so.”

“Pay humans to do things for fuck’s sake,” he added.

More on AI and journalism: Scammers Stole the Website for Emerson College’s Student Radio Station and Started Running It as a Zombie AI Farm


MIT Disavowed a Viral Paper Claiming That AI Leads to More Scientific Discoveries

The paper on AI and scientific discovery has now become a black eye on MIT's reputation.

No Provenance

The Massachusetts Institute of Technology (MIT) is distancing itself from a headline-making paper about AI’s purported ability to accelerate the speed of science.

The paper in question, “Artificial Intelligence, Scientific Discovery, and Product Innovation,” was published in December as a preprint by an MIT graduate student in economics, Aidan Toner-Rodgers. It quickly generated buzz, and outlets including The Wall Street Journal, Nature, and The Atlantic covered the paper’s (alleged) findings, which purported to demonstrate how the embrace of AI at a materials science lab led to a significant increase in workforce productivity and scientific discovery, albeit at the cost of workforce happiness.

Toner-Rodgers’ work even earned praise from top MIT economists David Autor and 2024 Nobel laureate Daron Acemoglu, the latter of whom called the paper “fantastic.”

But it seems that praise was premature, to put it mildly. In a press release on Friday, MIT conceded that following an internal investigation, it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.” MIT didn’t elaborate on what exactly it found, citing “student privacy laws and MIT policy,” but the episode is a black eye on the university nonetheless.

The university has also requested that the paper be removed from the preprint server arXiv and withdrawn from consideration by the Quarterly Journal of Economics, where it’s currently under review.

The ordeal is “more than just embarrassing,” Autor told the WSJ in a new report, “it’s heartbreaking.”

David vs. MIT

According to the WSJ’s latest story, the course reversal kicked off in January, when an unnamed computer scientist “with experience in materials science” approached Autor and Acemoglu with questions about how the AI tech centered in the study actually worked, and “how a lab he wasn’t aware of had experienced gains in innovation.”

When Autor and Acemoglu were unable to get to the bottom of those questions on their own, they took their concerns to MIT’s higher-ups. Enter, months later: Friday’s press release, in which Autor and Acemoglu, in a joint statement, said they wanted to “set the record straight.”

That a paper evidently so flawed passed under so many well-educated eyes with little apparent pushback is, on the one hand, pretty shocking. Then again, as materials scientist Ben Shindel wrote in a blog post, its conclusion — that AI means more scientific productivity, but less joy — feels somewhat intuitive. And yet, according to the WSJ’s reporting, it wasn’t until closer inspection by someone with domain expertise, who could see through the paper’s optimistic veneer, that those seemingly intuitive threads unwound.

More on AI and the workforce: AI Is Helping Job Seekers Lie, Flood the Market, and Steal Jobs


AI Chatbots Are Becoming Even Worse At Summarizing Data

Researchers have found that newer AI models can omit key details from text summaries as much as 73 percent of the time.

Ask the CEO of any AI startup, and you’ll probably get an earful about the tech’s potential to “transform work,” or “revolutionize the way we access knowledge.”

Really, there’s no shortage of promises that AI is only getting smarter — which we’re told will speed up the rate of scientific breakthroughs, streamline medical testing, and breed a new kind of scholarship.

But according to a new study published in a Royal Society journal, as many as 73 percent of seemingly reliable answers from AI chatbots could actually be inaccurate.

The collaborative research paper looked at nearly 5,000 large language model (LLM) summaries of scientific studies produced by ten widely used chatbots, including ChatGPT-4o, ChatGPT-4.5, DeepSeek, and LLaMA 3.3 70B. It found that, even when explicitly prompted to stick to the facts, AI summaries omitted key details at five times the rate of human-written scientific summaries.

“When summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study,” the researchers wrote.

Alarmingly, the LLMs’ rate of error was found to increase the newer the chatbot was — the exact opposite of what AI industry leaders have been promising us. That’s on top of a correlation between an LLM’s tendency to overgeneralize and how widely used it is, “posing a significant risk of large-scale misinterpretations of research findings,” according to the study’s authors.

For example, use of the two ChatGPT models listed in the study doubled from 13 to 26 percent among US teens between 2023 and 2025. And though the older ChatGPT-4 Turbo was roughly 2.6 times more likely to omit key details than the original texts, the newer ChatGPT-4o models were nine times as likely. The same tendency showed up in Meta’s LLaMA 3.3 70B, which was 36.4 times more likely to overgeneralize than older versions.

The job of synthesizing huge swaths of data into just a few sentences is a tricky one. Though it comes pretty easily to fully-grown humans, it’s a really complicated process to program into a chatbot.

While the human brain can instinctively learn broad lessons from specific experiences — like touching a hot stove — complex nuances make it difficult for chatbots to know what facts to focus on. A human quickly understands that stoves can burn while refrigerators do not, but an LLM might reason that all kitchen appliances get hot, unless otherwise told. Expand that metaphor out a bit to the scientific world, and it gets complicated fast.

But summarizing is also time-consuming for humans; the researchers list clinical medical settings as one area where LLM summaries could have a huge impact on work. It goes the other way, too, though: in clinical work, details are extremely important, and even the tiniest omission can compound into a life-changing disaster.

This makes it all the more troubling that LLMs are being shoehorned into every possible workspace, from high school homework to pharmacies to mechanical engineering — despite a growing body of work showing widespread accuracy problems inherent to AI.

There were some important caveats to their findings, though, the scientists pointed out. For one, the prompts fed to LLMs can have a significant impact on the answers they spit out. Whether this affects LLM summaries of scientific papers is unknown, suggesting an avenue for future research.

Regardless, the trendlines are clear. Unless AI developers can set their new LLMs on the right path, you’ll just have to keep relying on humble human bloggers to summarize scientific reports for you (wink).

More on AI: Senators Demand Safety Records from AI Chatbot Apps as Controversy Grows


World Leaders Shown AI Baby Versions of Themselves at European Summit

World leaders were shown AI-generated baby versions of themselves at a European summit.

Baby Erdoğan’s Mustache

It’s called diplomacy, guys.

This year’s European Political Community, an annual forum for European leaders founded in 2022 following the Russian invasion of Ukraine, kicked off on Friday in Tirana, Albania. Europe’s leaders were greeted with a ten-ish minute presentation that celebrated Europe’s commitment to sovereignty and shared triumphs over evil. There were flashing lights and dance performances, and a few different video sequences. And to close out the show, as Politico reports, the Albanian government landed on the obvious editorial choice: a montage of the summit’s leaders pictured as AI-generated babies, who each said “Welcome to Albania” in their country’s language.

It was perfect. Did baby-fied Recep Tayyip Erdoğan, Turkey’s authoritarian strongman, rock a tiny AI-generated mustache? He did indeed! Did French President Emmanuel Macron smack his gum in pleasant bemusement as he watched his AI baby self smile onscreen? You bet!

Our hats are off to Edi Rama, Albania’s recently re-elected prime minister. So far, between MAGAworld and its monarch embracing AI slop as its defining aesthetic, AI-generated misinformation causing chaos, and attempted AI mayors and political parties, this is easily the most compelling use of generative AI in politics we’ve seen.

Politicking

The camera televising the event repeatedly panned to the crowd, where the response from Europe’s most powerful was mixed. Some laughed, while others bristled; some mostly looked confused. Which makes sense, given that this is a serious conference where, per Politico, the majority of leaders are looking to push for harsher sanctions on Russia as its war on Ukraine rages on and tense talks between Moscow and Kyiv continue without a ceasefire.

It’s unclear how the AI baby bit fit into Albania’s message of a peaceful, unified Europe. Though the presentation did start with childlike drawings, the sounds of kids laughing, and a youthful voiceover, so maybe it was an attempt to bring the show full circle? Or maybe, considering the heavy subject matter and fast-heating global tension and uncertainty, Rama just wanted to break the ice.

Anyway. We’re sure nothing will humble you, a leader of a nation, like sitting in an auditorium and oscillating between unsure grimaces and giggling whilst staring down your AI-generated baby face.

More on AI and guys in Europe: The New Pope Is Deeply Skeptical of AI


The Hot New AI Tool in Law Enforcement Is a Workaround for Places Where Facial Recognition Is Banned

A new AI tool called Track is being used as a workaround to the current laws against facial recognition, not to improve the tech.

At the end of 2024, fifteen US states had laws banning some version of facial recognition.

Usually, these laws were written on the basis that the technology is a nightmare-level privacy invasion that’s also too shoddy to be relied upon. Now, a new company aims to solve that problem — though maybe not in the way you’d imagine (or like).

Per a report in MIT Technology Review, a new AI tool called Track is being used not to improve facial recognition technology, nor as a way to make it less invasive of your personal civil liberties, but as a workaround to the current laws against facial recognition (which are few and far between, at least when compared to the places it’s allowed to operate). It’s a classic tale of technology as “disruption”: find a legal loophole and exploit it.

Track is a “nonbiometric” system that emerged out of Veritone, a Skynet-esque company that specializes in video analytics.

According to MIT Technology Review’s story, Veritone already has 400 customers using Track in places where facial recognition is banned, or in instances where someone’s face is covered. Even more: last summer, Veritone issued a press release announcing that the US Attorney’s office had expanded the remit of its Authorization to Operate, the mandate that gives a company like Veritone the ability to carry out surveillance operations.

Why? Because Track can (supposedly) triangulate people’s identities from footage using a series of identifying factors, including monitored subjects’ shoes, clothing, body shape, gender, hair, and various accessories — basically, everything but your face. The footage Track is capable of scanning includes closed-circuit security tapes, body-cam and drone footage, Ring cameras, and crowd or public footage sourced from the various social media networks where it’s been uploaded.

In a view of Track in operation obtained by MIT Technology Review, users can select from a dropdown menu listing a series of attributes by which they want to identify subjects: Accessory, Body, Face, Footwear, Gender, Hair, Lower, Upper. Each of those menus has a sub-menu. Under “Accessory,” the sub-menu lists: Any Bag, Backpack, Box, Briefcase, Glasses, Handbag, Hat, Scarf, Shoulder Bag, and so on. The “Upper” attribute breaks down into Color, Sleeve, and Type (of upper-body clothing), and those types break down into further sub-categories.

Once the user selects the attributes they’re looking for, Track returns a series of matching images pulled from the footage being reviewed. From there, it continues to help users narrow down the footage until they’ve triangulated their surveillance target’s path.
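To make that workflow concrete, here is a minimal, purely hypothetical sketch of attribute-based filtering of the general kind described above. The attribute names come from MIT Technology Review’s account of the interface; every data structure, function, and camera name below is an illustrative assumption, not Veritone’s actual software.

    # Hypothetical illustration only: a toy version of attribute-based filtering,
    # loosely modeled on the menus described by MIT Technology Review
    # (Accessory, Upper, Footwear, and so on). This is not Veritone's code.
    from dataclasses import dataclass, field

    @dataclass
    class Detection:
        """One person detected in a frame of footage, tagged with attributes."""
        source: str        # e.g. "ring_cam_12" or "bodycam_3" (made-up names)
        timestamp: float   # seconds from the start of the clip
        attributes: dict = field(default_factory=dict)

    def matches(detection: Detection, selected: dict) -> bool:
        """True if the detection carries every attribute value the user selected."""
        return all(detection.attributes.get(k) == v for k, v in selected.items())

    def narrow(detections: list, selected: dict) -> list:
        """Keep only matching detections, ordered by time, so a user could
        piece together a subject's apparent path across different cameras."""
        return sorted((d for d in detections if matches(d, selected)),
                      key=lambda d: d.timestamp)

    # Example: look for someone with a backpack and a red upper-body garment.
    footage = [
        Detection("ring_cam_12", 4.0, {"Accessory": "Backpack", "Upper": "Red"}),
        Detection("cctv_lobby", 9.5, {"Accessory": "Handbag", "Upper": "Blue"}),
    ]
    print(narrow(footage, {"Accessory": "Backpack", "Upper": "Red"}))

The real system presumably extracts those attribute tags from video automatically; the point of the sketch is simply that a face-free attribute search can still narrow a crowd down to one person.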

If this sounds like current facial recognition software — in other words, like it’s a relatively fallible Orwellian enterprise, bound to waste quite a bit of money, netting all the wrong people along the way — well, the folks at Veritone see it another way.

Their CEO called Track their “Jason Bourne tool,” while also praising its ability to exonerate those it identifies. It’s an incredibly dark, canny way to get around limitations on the use of facial recognition tracking systems: provide something very much like them that doesn’t technically rely on biometric data. By exploiting that loophole, Veritone equips police departments and federal law enforcement agencies with the unencumbered opportunity to conduct surveillance that’s been legislated against in all but the precise letter of the law. And that surveillance, it’s worth noting, might be even more harmful or detrimental than facial recognition itself.

It’s entirely possible that people who wear certain kinds of clothing or look a certain way can be caught up by Track. And this is in a world where we already know people have been falsely accused of theft, falsely arrested, or falsely jailed, all thanks to facial recognition technology.

Or as American Civil Liberties Union lawyer Nathan Wessler told MIT Tech Review: “It creates a categorically new scale and nature of privacy invasion and potential for abuse that was literally not possible any time before in human history.”

Looks like they’re gonna have to find another name for the big map.

More on Facial Recognition: Years After Promising to Stop Facial Recognition Work, Meta Has a Devious New Plan


OnlyFans Model Shocked After Finding Her Pictures With AI-Swapped Faces on Reddit

An OnlyFans model was shocked to find that a scammer had stolen her content — and used it to flood Reddit with AI deepfakes.

Face Ripoff

An OnlyFans creator is speaking out after discovering that her photos were stolen by someone who used deepfake tech to give her a completely new face — and posted the deepfaked images all over Reddit.

As 25-year-old, UK-based OnlyFans creator Bunni told Mashable, image theft is a common occurrence in her field. Usually, though, catfishers steal and share Bunni’s images without alteration.

In this case, the grift was sneakier. With the help of deepfake tools, a scammer crafted an entirely new persona named “Sofía,” an alleged 19-year-old in Spain who had Bunni’s body — but an AI-generated face.

It was “a completely different way of doing it that I’ve not had happen to me before,” Bunni, who posted a video about the theft on Instagram back in February, told Mashable. “It was just, like, really weird.”

It’s only the latest instance of a baffling trend, with “virtual influencers” pasting fake faces onto the bodies of real models and sex workers to sell bogus subscriptions and swindle netizens.

Head Swap

Using the fake Sofía persona, the scammer flooded forums across Reddit with fake images and color commentary. Sometimes, the posts were mundane; “Sofía” asked for outfit advice and, per Mashable, even shared photos of pets. But Sofía also posted images to r/PunkGirls, a pornographic subreddit.

Sofía never shared a link to another OnlyFans page, though Bunni suspects that the scammer might have been looking to chat with targets via direct messages, where they could have passed around an OnlyFans link or requested cash. And though Bunni was able to get the imposter kicked off of Reddit after reaching out directly to moderators, her story emphasizes how easy it is for catfishers to combine AI with stolen content to make and distribute convincing fakes.

“I can’t imagine I’m the first, and I’m definitely not the last, because this whole AI thing is kind of blowing out of proportion,” Bunni told Mashable. “So I can’t imagine it’s going to slow down.”

As Mashable notes, Bunni was somewhat of a perfect target: she has fans, but she’s not famous enough to trigger immediate or widespread recognition. And for a creator like Bunni, pursuing legal action might not be a feasible or even worthwhile option. It’s expensive, and right now, the law itself is still catching up.

“I don’t feel like it’s really worth it,” Bunni told Mashable. “The amount you pay for legal action is just ridiculous, and you probably wouldn’t really get anywhere anyway, to be honest.”

Reddit, for its part, didn’t respond to Mashable’s request for comment.

More on deepfakes: Gross AI Apps Create Videos of People Kissing Without Their Consent


Why Elon Musk Is Furious and Publicly Raging at His Own AI Chatbot, Grok

Elon Musk is mad that his AI chatbot, Grok, referred to The Atlantic and The BBC as credible news sources.

Elon Musk’s AI chatbot, Grok, thinks that The Atlantic and The BBC are credible, reputable sources for news and information. Which is funny, because Musk — who’s engaged in a years-long project to erode trust in legacy media organizations and even specific journalists — doesn’t. And now, he’s furious at his own AI chatbot.

The Musk-Grok tiff happened over the weekend, when a misinformation-spreading X-formerly-Twitter user @amuse posted an “article” about billionaire bogeymen (like George and Alex Soros, Bill Gates, and the philanthropic Ford Foundation) using deep pockets to “hijack federal grants” by “seeding” nongovernmental organizations with left-wing ideology.

Rather than a thoughtful or reported analysis of how cash from wealthy donors has transformed American politics, the article was a deeply partisan, conspiracy-riddled account smattered with scary-sounding buzzwords and “DEI” ranting, with no foundational evidence to back its conspiratorial claims (and little mention of high-powered, heavily funded conservative nonprofit groups, either).

It seems that Grok, the chatbot created and operated by the Musk-owned AI company xAI, had some issues with the @amuse post, too.

When an X user asked Grok to analyze the post, the AI rejected its core premise, arguing that there’s “no evidence” that Soros, Gates, and the Ford Foundation “hijack federal grants or engage in illegal influence peddling.” In other words, it said that the world as described in the @amuse post doesn’t exist.

The user — amid accusations that Grok has been trained on “woke” data — then asked Grok to explain what “verified” sources it pulled from to come to that conclusion. Grok explained that it used “foundation websites and reputable news outlets,” naming The Atlantic and the BBC, which it said are “credible” and “backed by independent audits and editorial standards.” It also mentioned denials from Soros-led foundations.

“No evidence shows the Gates, Soros, or Ford Foundations hijacking grants; they operate legally with private funds,” said Grok. “However, their support for progressive causes raises transparency concerns, fueling debate. Critics question their influence, while supporters highlight societal benefits. Verification comes from audits and public records, but skepticism persists in polarized discussions.”

This response, apparently, ticked off Musk.

“This is embarrassing,” the world’s richest man responded to his own chatbot. Which, at this rate, might prove to be his Frankenstein.

It’s unclear whether Musk was specifically mad about Grok characterizing the news outlets as reliable or about it crediting the denials from Soros-founded organizations, but we’d go out on a limb and venture that the answer is both.

By no means should the world be handing its media literacy over to quick reads by Grok, or any other chatbot. Chatbots get things wrong — they even make up sources — and users need to employ their own discretion, judgment, and reasoning skills while engaging with them. (Interestingly, @amuse stepped in at one point to claim that a figure the chatbot flagged as inaccurate in a later post had been given to him by Grok in the first place.)

But this interaction does highlight the increasing politicization of chatbots, a debate in which Grok has been very much at the center. While there’s a ton of excellent, measured journalism out there, we’re living in a deeply partisan attention and information climate in which people can — and very much do — seek out information that fuels and supports their personal biases.

In today’s information landscape, conclusion-shopping is easy — and when chatbots fail to scratch that itch, people get upset. Including, it seems, the richest man on Earth, who’s been DIY-ing his preferred reality for a while now.

More on Grok rage: MAGA Angry as Elon Musk’s Grok AI Keeps Explaining Why Their Beliefs Are Factually Incorrect
