Chinese Tech Giant Wants to Translate Your Cat’s Meows Using AI

Doodle Translate

Chinese tech company Baidu is working on an artificial intelligence-based translation system that could finally decode the greatest language mystery in the world: your cat’s meows.

As Reuters reports, the company filed a patent with the China National Intellectual Property Administration proposing an AI-powered system to translate animal sounds.

But whether it’ll ultimately be successful in deciphering your dog’s barks or your cat’s meows remains to be seen. Despite years of research, scientists are still far from cracking the code of animal communication.

Baidu is hoping that the system could bring humans and their pets closer together. According to the company’s patent document, it could allow for a “deeper emotional communication and understanding between animals and humans, improving the accuracy and efficiency of interspecies communication.”

Me Eat Squirrel

A spokesperson told Reuters that the system is “still in the research phase,” suggesting there’s still significant work to be done.

But Baidu has already made considerable headway. The company, which also runs the country’s largest search engine, has invested in AI for years, releasing its latest AI model last month.

Baidu is only one of many companies working to decode animal communication using AI. For instance, California-based nonprofit Earth Species Project has been attempting to build an AI-based system that can translate birdsong, the whistles of dolphins, and the rumblings of elephants.

The same nonprofit also recently announced that it had secured $17 million in grants to build NatureLM, a language model designed to identify the ways animals communicate with each other.

Researchers have also attempted to use machine learning to understand the vocalizations of crows and monkeys.

While a direct animal translation tool is more than likely still many years out, some scientists have claimed early successes. Last year, a team of scientists affiliated with the SETI (Search for Extraterrestrial Intelligence) Institute claimed to have “conversed” with a humpback whale in Alaska.

“The things we learn from communicating with whales could help us when it comes time to connect with aliens,” SETI researcher and University of California, Davis animal behaviorist Josie Hubbard told the New York Post at the time.

More on AI translation: World’s Largest Call Center Deploys AI to “Neutralize the Accent” of Indian Employees

The FDA Will Use AI to Accelerate Approving Drugs

The FDA announced that it will start using AI across all of its centers to shorten the drug review process.

The Food and Drug Administration just announced that it will immediately start using AI across all of its centers, after completing a new generative AI pilot for scientific reviewers.

Supposedly, the AI tool will speed up the FDA’s drug review process by reducing the time its scientists have to spend doing tedious, repetitive tasks — though, given AI’s track record of constantly hallucinating, these claims warrant plenty of scrutiny.

“This is a game-changer technology that has enabled me to perform scientific review tasks in minutes that used to take three days,” said Jinzhong Liu, a deputy director in the FDA’s Center for Drug Evaluation and Research (CDER), in a statement.

FDA Commissioner Martin Makary has directed all FDA centers to achieve full AI integration by June 30, a questionably aggressive timeline.

“By that date, all centers will be operating on a common, secure generative AI system integrated with FDA’s internal data platforms,” the agency said in its announcement.

The announcement comes just a day after Wired reported that the FDA and OpenAI were holding talks to discuss the agency’s use of AI. Notably, the FDA’s new statement makes no mention of OpenAI or its potential involvement.

Behind the scenes, however, Wired sources say that a team from the ChatGPT maker met with the FDA and two associates from Elon Musk’s so-called Department of Government Efficiency multiple times in recent weeks to discuss a project called “cderGPT.” The name is almost certainly a reference to the FDA’s aforementioned CDER, which regulates drugs sold in the US.

This may have been a long time coming. Wired notes that the FDA sponsored a fellowship in 2023 to develop large language models for internal use. And according to Robert Califf, who served as FDA commissioner between 2016 and 2017, the agency’s review teams have already been experimenting with AI for several years.

“It will be interesting to hear the details of which parts of the review were ‘AI assisted’ and what that means,” Califf told Wired. “There has always been a quest to shorten review times and a broad consensus that AI could help.”

The agency is considering using AI in other aspects of its operations, too.

“Final reviews for approval are only one part of a much larger opportunity,” Califf added.

Makary, who was appointed commissioner by President Donald Trump, has frequently expressed his enthusiasm for the technology.

“Why does it take over ten years for a new drug to come to market?” he tweeted on Wednesday. “Why are we not modernized with AI and other things?”

The FDA news parallels a broader trend of AI adoption in federal agencies during the Trump administration. In January, OpenAI announced a version of its chatbot called ChatGPT Gov designed to be secure enough to process sensitive government information. Musk has pushed to fast-track the development of another AI chatbot for the US General Services Administration, while using the technology to try to rewrite the Social Security computer system.

Yet, the risks of using the technology in a medical context are concerning, to say the least. Speaking to Wired, an ex-FDA staffer who has tested ChatGPT as a clinical tool pointed out the chatbot’s proclivity for making up convincing-sounding lies — a problem that won’t go away anytime soon.

“Who knows how robust the platform will be for these reviewers’ tasks,” the former FDA employee told the magazine.

More on medical AI: Nonverbal Neuralink Patient Is Using Brain Implant and Grok to Generate Replies

Teachers Using AI to Grade Their Students’ Work Sends a Clear Message: They Don’t Matter, and Will Soon Be Obsolete

A new study is revealing just how horrible AI is at grading student homework, and the results are worse than you think.

Talk to a teacher lately, and you’ll probably get an earful about AI’s effects on student attention spans, reading comprehension, and cheating.

As AI becomes ubiquitous in everyday life — thanks to tech companies forcing it down our throats — it’s probably no shocker that students are using software like ChatGPT at an unprecedented scale. One study by the Digital Education Council found that 86 percent of university students use some type of AI in their work.

That’s causing some fed-up teachers to fight fire with fire, using AI chatbots to score their students’ work. As one teacher mused on Reddit: “You are welcome to use AI. Just let me know. If you do, the AI will also grade you. You don’t write it, I don’t read it.”

Others are embracing AI with a smile, using it to “tailor math problems to each student,” in one example listed by Vice. Some go so far as requiring students to use AI. One professor in Ithaca, NY, shares both ChatGPT’s comments on student essays and her own, and asks her students to run their essays through AI themselves.

While AI might save educators some time and precious brainpower — which arguably make up the bulk of the gig — the tech isn’t even close to being cut out for the job, according to researchers at the University of Georgia. We should probably all know it’s a bad idea to grade papers with AI, but a new study by the university’s School of Computing gathered data on just how bad it is.

The research tasked the large language model (LLM) Mixtral with grading written responses to middle school homework. Rather than feeding the LLM a human-created rubric, as is usually done in such studies, the UGA team tasked Mixtral with creating its own grading system. The results were abysmal.

Compared to a human grader, the LLM accurately graded student work just 33.5 percent of the time. Even when supplied with a human rubric, the model had an accuracy rate of just over 50 percent.

Though the LLM “graded” quickly, its scores were frequently based on flawed logic inherent to LLMs.

“While LLMs can adapt quickly to scoring tasks, they often resort to shortcuts, bypassing deeper logical reasoning expected in human grading,” wrote the researchers.

“Students could mention a temperature increase, and the large language model interprets that all students understand the particles are moving faster when temperatures rise,” said Xiaoming Zhai, one of the UGA researchers. “But based upon the student writing, as a human, we’re not able to infer whether the students know whether the particles will move faster or not.”

Though the UGA researchers wrote that “incorporating high-quality analytical rubrics designed to reflect human grading logic can mitigate [the] gap and enhance LLMs’ scoring accuracy,” a boost from 33.5 to just over 50 percent accuracy is laughable. Remember, this is the technology that’s supposed to bring about a “new epoch” — a technology we’ve poured more seed money into than any other in human history.

If there were a 50 percent chance your car would fail catastrophically on the highway, none of us would be driving. So why is it okay for teachers to take the same gamble with students?

It’s just further confirmation that AI is no substitute for a living, breathing teacher, and that isn’t likely to change anytime soon. In fact, there’s mounting evidence that AI’s comprehension abilities are getting worse as time goes on and original data becomes scarce. Recent reporting by the New York Times found that the latest generation of AI models hallucinate as much as 79 percent of the time — way up from past numbers.

When teachers choose to embrace AI, this is the technology they’re shoving off onto their kids: notoriously inaccurate, overly eager to please, and prone to spewing outright lies. That’s before we even get into the cognitive decline that comes with regular AI use. If this is the answer to the AI cheating crisis, then maybe it’d make more sense to cut out the middle man: close the schools and let the kids go one-on-one with their artificial buddies.

More on AI: People With This Level of Education Use AI the Most at Work

Years After Promising to Stop Facial Recognition Work, Meta Has a Devious New Plan

Meta quietly restarted efforts to infuse facial recognition into its smart glasses, according to a report from The Information.

In 2021, Facebook said it was scrapping efforts to build powerful facial recognition software into its then-nascent smart glasses, citing the tech’s glaring privacy and ethics concerns.

Four years later, as The Information reports, the Silicon Valley behemoth has officially dusted off the effort and is once again working on transforming its wearable smart glasses into a facial recognition-infused privacy nightmare.

Meta is working on a feature internally referred to as “super sensing.” In super sensing mode, the glasses’ built-in cameras and sensors will remain on and recording throughout the wearer’s day. It’s still probably a ways off due to battery life limitations, but in Meta’s imagining, it’ll one day be able to do things like remind someone to drop by the store and get dinner ingredients or nudge them to grab their keys. (Because, of course, every Silicon Valley CEO really just wants to build J.A.R.V.I.S. from the “Iron Man” franchise.)

However, the super sensing feature would also combine AI with facial recognition, according to The Information — a design choice that could have far-reaching and deeply alarming implications.

Infusing facial recognition and AI into smart glasses could help you look up the LinkedIn profiles of people you ran into at a networking event, or keep track of your roommates or family — applications that, while annoying and creepy, are arguably a bit more mundane in the grand scheme of facial recognition.

But the nightmare scenarios are endless. A wearer could dox strangers on the street; a creep in a bar could look up the name and personal information of a woman who may or may not have wanted to talk to him; undercover law enforcement officials could go to a peaceful protest and keep a careful record of attendees.

It’s not exactly hard to come up with ways this could go wrong, fast — and yet Meta, it seems, has decided to push forward.

According to the report, Meta’s renewed facial recognition efforts are due in part to a more surveillance-friendly political climate where privacy concerns are increasingly taking a backseat in corporate and federal government decision-making.

“The pendulum swings from one side to the other,” Rob Leathern, a privacy expert and former product manager at Facebook and Google, told The Information. “We’re kind of on that swing where some of the things that companies like Google talked about two, three, four years ago aren’t necessarily being seen as quite as important.”

More on Meta’s smart glasses and facial recognition: Terrifying Smart Glasses Hack Can Pull Up Personal Info of Nearby Strangers in Seconds

Deranged Video Shows AI Job Recruiter Absolutely Losing It During an Interview

Looking for work is already arduous enough — but for one job-seeker, the process became something out of a deleted “Black Mirror” scene when the AI recruiter she was paired with went veritably insane.

In a buckwild TikTok video, the job-seeker is seen suffering for nearly 30 seconds as the AI recruiter barks the term “vertical bar pilates” at her no fewer than 14 times, often slurring its words or mixing up letters along the way.

“It was genuinely so creepy and weird. Please stop trying to be lazy and have AI try to do YOUR JOB!!! It gave me the creeps so bad,” the job-seeker, who posts as @its_ken04, wrote in the video’s caption.

The incident — and the way it affected the young woman who endured it — is a startling example not only of the state of America’s abysmal labor market, but also of how ill-conceived this sort of AI “outsourcing” has become.

Though she looks unfazed on her interview screen, the TikToker, who goes by Ken, told 404 Media that she was pretty perturbed by the incident, which occurred during her first (and only) interview with a Stretch Lab fitness studio in Ohio.

“I thought it was really creepy and I was freaked out,” the college-aged creator told the website. “I was very shocked, I didn’t do anything to make it glitch so this was very surprising.”

As 404 discovered, the glitchy recruiter-bot was hosted by a Y Combinator-backed startup called Apriora, which claims to help companies “hire 87 percent faster” and “interview 93 percent cheaper” because multiple candidates can be interviewed simultaneously.

In a 2024 interview with Forbes, Apriora cofounder Aaron Wang attested that job-seekers “prefer interviewing with AI in many cases, since knowing the interviewer is AI helps to reduce interviewing anxiety, allowing job seekers to perform at their best.”

That’s definitely not the case for Ken, who said she would “never go through this process again.”

“If another company wants me to talk to AI,” she told 404, “I will just decline.”

Commenters on her now-viral TikTok seem to agree as well.

“This is the rudest thing a company could ever do,” one user wrote. “We need to start withdrawing applications folks.”

Still others pointed out the elephant in the room: that recruiting used to be a skilled trade done by human workers.

“Lazy, greedy and arrogant,” another person commented. “AI interviews just show me they don’t care about workers from the get go. This used to be an actual human’s job.”

Though Apriora didn’t respond to 404‘s requests for comment, Ken, at least, has gotten the last word in the way only a Gen Z-er could.

“This was the first meeting [with the company] ever,” she told 404. “I guess I was supposed to earn my right to speak to a human.”

More on AI and labor: High Schools Training Students for Manual Labor as AI Looms Over College and Jobs

The Judge’s Reaction to an AI-Generated Victim Impact Statement Was Not What We Expected

A slain Arizona man’s family used AI to bring him back from the dead for his killer’s sentencing hearing — and the judge presiding over the case apparently “loved” it.

As 404 Media reports, Judge Todd Lang was flabbergasted when he saw the AI-generated video of victim Chris Pelkey that named and “forgave” the man who killed him in 2021.

“To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the video, which Pelkey’s sister Stacey Wales generated, intoned. “In another life we probably could have been friends. I believe in forgiveness, in God who forgives, I always have. And I still do.”

Horcasitas was found guilty earlier this year, and his sentencing was contingent, as many cases are, upon various factors, including impact statements from the victim’s family.

As Wales told 404 Media, her husband Tim was initially freaked out when she introduced the idea of creating a digital clone of her brother for the hearing and told her she was “asking a lot.”

Ultimately, the video was accepted in the sentencing hearing, the first known instance of an AI clone of a deceased person being used in such a way.

And the gambit appears to have paid off.

“I loved that AI, and thank you for that,” Lang said, per a video of his pre-sentencing speech. “As angry as you are, and as justifiably angry as the family is, I heard the forgiveness, and I know Mr. Horcasitas could appreciate it, but so did I.”

“I feel like calling him Christopher as we’ve gotten to know him today,” Lang continued. “I feel that that was genuine, because obviously the forgiveness of Mr. Horcasitas reflects the character I heard about today.”

Lang acknowledged that although the family itself “demanded the maximum sentence,” the AI Pelkey “spoke from his heart” and didn’t call for such punishment.

“I didn’t hear him asking for the maximum sentence,” the judge said.

Horcasitas’ lawyer also referenced the Pelkey avatar when defending his client, saying that he, too, believes his client and the man he killed could have been friends had circumstances been different.

That entreaty didn’t seem to sway Lang, however. He ended up sentencing Horcasitas to 10.5 years for manslaughter, a year more than prosecutors were seeking.

It’s a surprising reaction, showing that many are not only open to AI being used this way, but also in favor of it — evidence that the chasm between AI skeptics and adopters could be widening.

More on AI fakery: Slop Farmer Boasts About How He Uses AI to Flood Social Media With Garbage to Trick Older Women

Slop Farmer Boasts About How He Uses AI to Flood Social Media With Garbage to Trick Older Women

Pinterest is facing an influx of AI slop designed to attract users to synthetic websites. Here's a glimpse into their tactics — and psyches.

Last year, Jesse Cunningham — a self-described “SEO specialist who leverages the power of AI to drive real results” — appeared in a livestream for a closed members group for SEO secret-trading. He’d been invited to discuss his AI strategies for monetizing content on Facebook, where he claimed to have found financial success by flooding the Meta-owned platform with fake, AI-generated images of things like faux houseplants and ChatGPT-created recipes.

“Don’t ban me, people,” Cunningham jokes into a large microphone, explaining that one of his AI pages had previously been flagged by Meta for violating platform policies after he revealed its name in a public-facing YouTube video.

Cunningham explains that his preferred groups to target are devoted fandoms and the elderly. The former is easily excited, he posits, while the latter probably won’t understand that what they’re clicking on is synthetic at all.

“Best are voracious fan bases. Fan boys, fan girls,” Cunningham tells the group. “And an older demographic, where Aunt Carol doesn’t really know how to use Facebook, and she’s just likely to share everything.”

“I’m going after audience 50-plus female,” he reiterates, explaining that targeting older women on Facebook means his content can be cross-posted over on the aspirational image-sharing-and-sourcing platform Pinterest, the userbase of which is overwhelmingly made up of women.

“Why am I going after females? Because… I want to cross-pollinate the audience,” says Cunningham. “I want to kill two birds with one stone and dominate Pinterest and Facebook at the same time. Fifty-plus female is my demo.”

The recorded call is just under an hour in length. At one point, Cunningham triumphantly declares that he’s “starting new pages in the recipe niche” and wants “to disrupt that whole industry” because, in his telling, it’s “ripe for the taking.”

“Going back to the AI recipes, do you know if they actually work?” someone asks Cunningham later in the clip.

“Of course they work. ChatGPT told me they work,” Cunningham, who looks genuinely baffled by the question, responds. “What kind of question is that?”

Cunningham is one of many sloperators flooding social media with AI content to make money.

The process goes like this. Cunningham publishes large numbers of AI-generated articles to websites helmed by made-up bloggers with AI-generated headshots, purporting to be experts in topics ranging from houseplants and recipes to DIY holiday crafts and nature scenes. Then he posts AI-generated images linking back to those sites on social media, where he claims to rake in cash — not by putting time and energy into photographing any actual home gardening or drafting and testing new recipes, but by using AI to quickly and cheaply imitate traditional content creators’ final product.

Such zombie tactics, employed by Cunningham and others, are evident on his preferred platforms, Pinterest and Facebook, where users are increasingly made to wade through swamps of parasitic AI slop.

As Futurism reported earlier this year, Pinterest is facing a pervasive influx of AI-generated content masquerading as the real thing. The torrent of AI slop on Facebook is well-documented as well — last year, an in-depth 404 Media investigation revealed that AI slop farmers around the world had figured out how to use AI to generate engagement-bait imagery designed to earn cash by exploiting Facebook’s since-shuttered Performance Bonus program.

We highlighted Cunningham in our previous reporting about Pinterest. He’s an avid YouTuber, and we were struck by his candor as he publicly shared the sordid details of his slop farming process, which frequently includes copying the work of his competitors — real bloggers and online creators who say the AI influx on Pinterest, Facebook, and other platforms has had a destructive impact on their businesses.

“Across the board, like across the board, this is something that is talked about in blogging groups all the time, because it is devastating all of our businesses,” Rachel Farnsworth, a veteran food blogger of the website The Stay at Home Chef, told Futurism of the impact that schemes like Cunningham’s have had on her industry.

“It’s put a ton of people out of business,” she added.

We decided to dig deeper into Cunningham’s extensive content catalog, on YouTube and beyond, where we found a telling portrait of the layers of unreality shrouding the AI slop increasingly crowding the web — and of the attitudes of the slopageddon’s eager purveyors, down to their willingness to trick old ladies and copy others’ work.

***

According to Cunningham, AI offers a way to pretty much print money online.

“Pinterest is one of the easiest ways to make money online right now,” he declares in a November YouTube video titled “🤯AI Pinterest Strategy for $15,942/MONTH.”

“Our goal is to catch fish,” he adds. He then clarifies: “fish is making money.”

But to “catch fish,” he emphasizes, quantity — made easy by generative AI tools — is key.

“Ten pins daily is not going to cut it,” he adds, explaining that he posts around 80 AI pins a day in his efforts to manipulate Pinterest’s algorithm — enough to get his pins to “cruising altitude,” he says, but not enough to get hit with a spam notice by the platform. “You’re not going to compete with me and the other people doing it at scale.”

But “luckily, nowadays,” he continues, “we have AI.”


His process, as Cunningham lays out across his videos, begins by tracking down existing pins that are already doing well.

In the November video, for instance, he homes in on a parenting-oriented blog called The Mummy Front. The blog isn’t his; instead, Cunningham seeks to use AI to replicate someone else’s viral content at scale.

“So this one here — ‘Andie DIY Ikea Hacks, Crafts to make,’ blah blah blah blah blah — they crush it for Christmas,” Cunningham remarks. “So I can come in here to their Christmas board… and now we’re looking around. We can figure out, all right, this is what works with Christmas, because this is a top-five Christmas page on all of Pinterest.”

Cunningham then zeroes in on one of The Mummy Front’s top-performing pins, which links back to a listicle-style blog post about Christmas wrapping paper ideas.

Drawing on that post, Cunningham takes to an AI-powered content creation tool called Content Goblin where, after inputting just a headline into a text box — he requests a post for “47 Gift Wrapping Ideas You Need To Try for CHRISTMAS” — he’s able to churn out a lengthy listicle in a matter of moments, complete with AI-generated images.


Then he uploads the AI-generated blog post, without editing, to a faux blogging site he runs called Bonsai Mary.


Bonsai Mary is helmed by an “author” named “Mary Smith,” who features prominently on the site’s landing page, along with an AI-generated headshot.

Despite its bonsai-focused title, the blog’s content is surprisingly wide-ranging — an oddity that its alleged blogger-in-chief, Smith, speaks to in a first-person missive published on its homepage.

“Welcome to Bonsai Mary — this website has been around since 2009! The main focus here is plants but I also love to share recipes and interior decorating things I love,” reads the webpage. “My name is Mary Smith, a seasoned gardener and bonsai artist and author of BonsaiMary.com. I love nature and any new plant I haven’t seen before!”

But Smith is clearly not a real person. In addition to her AI-generated headshot, she has no publishing history outside of Bonsai Mary — except for a blog titled Off Grid Dreaming, which is also operated by Cunningham, according to other YouTube videos.

What’s more, though it’s technically true that the Bonsai Mary website has been around since 2009, archived versions of the site show that it was actually founded by a woman named Mary C. Miller, a real American bonsai artist and author.

It’s unclear when the blog’s domain first switched hands. But according to archived snapshots documented in the Internet Archive’s Wayback Machine, “Mary Smith” didn’t appear until late 2023.


Finally, to publish his AI-generated images to Pinterest, Cunningham uses ChatGPT to drum up short, Pinterest-optimized descriptions for each image. He throws all of that into a spreadsheet, and using a planning tool, mass-uploads links to his synthetic blog. (In other videos, he uses yet another AI tool to overlay headline text onto AI-generated imagery with little effort.)

And from there, he says, he’s “cruising.”

“You,” he tells the viewer, “can use all these tools to get a competitive advantage on everyone.”

Cunningham creates content for a variety of topics — or “niches,” as folks in his industry say — from cooking and recipes to interior design and decor.


The Pinterest account page for Bonsai Mary, which lists 8.6 million monthly views on its profile, says in its bio that “we create AI pins and blog posts for all to enjoy.” The associated profile for Off Grid Dreaming, which lists around 20.2k monthly views, fails to issue a similar disclaimer.

“At Off Grid Dreaming,” reads its bio, “we specialize in designing sustainable, off-grid living spaces that blend style, comfort, and functionality.” (There’s no evidence that Cunningham, or “Mary,” actually “specialize” in anything beyond SEO.)


But you’d have to actually visit the Bonsai Mary profile page to see that disclaimer, something not everyone who interacts with an individual pin is going to do. None of the individual pins posted to Pinterest by Cunningham that we’ve discovered specifically denote the use of AI through tools like watermarks or text captions. And despite his apparent willingness to broadcast the details of his AI-powered assembly line to other SEOers on YouTube and in members’ forums, we’ve yet to see Cunningham add AI disclaimers to his many AI-generated blog posts or websites.

That appears to be intentional, we found when we signed up for a free, six-episode instructional video series Cunningham offers about his Facebook scheming.

In the second episode, titled “The Basics,” Cunningham explains why he prefers to use AI images of fake people on his pages.

The “three most important parts” of setting up a Facebook page, says Cunningham, are the page’s title, introductory paragraph, and the associated profile picture, the latter of which he refers to as a “logo.” Overlaid on the screen is a page called “Houseplant Community,” which utilizes the same unreal image attributed to the fake author featured over at Bonsai Mary and Off Grid Dreaming.

“Those all really come into play with user interaction,” he explains. “People feel inclined to interact when they see another person… the mind automatically perceives, ‘oh, this is a person posting this, not a page.’ Therefore, they’re more likely to share a post, comment on a post, just engage with a post in general.”

“So I like having people’s faces as the logo,” he adds.


It’s unclear how much money Cunningham actually makes from his AI schemes, and how much of his income comes from people paying him to learn how to create their own AI content. All his videos about Pinterest link back to a paid “AI Pinterest Masterclass” that he markets on his personal website, and he also runs a “private, paid” members group for Pinterest and Facebook tactic trading.

We asked Cunningham how much of his revenue he derives from his AI content versus how much he brings in through his paid classes and forums, but he didn’t respond. Needless to say, if someone did hold a low-effort secret to making enormous sums of money online, especially through practices that some view as unethical and could potentially cause a platform to alter its policies, logic would dictate that they’d probably keep it to themself and pull in the cash instead of selling classes as a get-rich-quick scheme.

We also asked Cunningham about the ethics of his strategies on Pinterest and Facebook, his use of fake authors to legitimize social media accounts and synthetic websites, and his practice of targeting older internet users specifically because of what he perceives as their inability to understand what they’re clicking on, but we received no response.

Pinterest and Facebook declined to respond on the record, though both emphasized on background that they’re working on systems to better detect and label AI content.

***

A few days after our initial investigation into Pinterest’s slop problem, Cunningham took to YouTube to reflect on why, in his view, AI content on Pinterest is so “polarizing.”

“There’s a huge problem on Pinterest right now,” Cunningham tells the camera. “It has to do with money, because it always has to do with money.”

To demonstrate his point, Cunningham then goes to Content Goblin to quickly whip up an AI image-smattered listicle for the headline “Yellow Bedroom Ideas.” The whole post takes just a few seconds to produce.

“Imagine if you were old school — imagine if you were on this platform, on Pinterest, say three years ago, two years ago… how hard would it be to come up with this bedroom?” he asks, pausing on an image of a bedroom with a yellow-toned bed. “The simple bed right here, with the pillows, it’d be a pain. It’d be very hard to put that on Pinterest, because you’d have to go somewhere and take photos… that’s a lot of money right there if it was real.”


“There used to be a huge moat around Pinterest for creators,” says Cunningham. “Now, things have changed.”

“That’s the problem,” he continues. Creators “used to get tons of traffic, and then people like me started talking about AI on Pinterest, right, that’s why they’re so angry… there’s money here, and we disrupted the flow of money.”

In the video, Cunningham describes his approach as driving a car — as opposed to traditional creators, who in his view are still riding a horse.

“Old school creators are so angry about AI. Have you ever wondered why that is?” he ponders. “It’s because it’s the intersection of money, and we found the honeyhole. We found where all the money is.”

In that sense, Cunningham is right: he and others have found a loophole to exploit. AI provides them with a crude shortcut to avoid the overhead — time, money, energy — that comes with making real stuff. For pennies on the dollar, Cunningham can produce more content, and rake in some of the cash that might otherwise be going to the “old school” folks behind real, human-made images and blog posts.

It would be naive to suggest that social media has always rewarded helpful quality content, and Cunningham and other slop farmers certainly wouldn’t be the first to use seamy tactics to juke online algorithms for profit. But the speed and scale at which AI slop is altering the web as we know it is astonishing — and, in the realm of Pinterest and other social media channels, it’s raising real challenges for good-faith creators trying to monetize their online businesses, and making the internet a muddier place to spend time as a user.

That’s the reality that Cunningham, across his many videos, fails to grapple with. Sure, a large part of the content creation formula has always been feeding the algorithmic beast, which has often incentivized quantity over quality.

In an ideal scenario, though, there’s a genuine exchange of value. An interior designer uploads an image of a real-world yellow-toned bedroom they pulled together, and a user clicks through to their website, earning them some visibility and ad revenue to support their business; the user, meanwhile, finds real-world inspiration and maybe even reaches out for a consult. Or perhaps a Pinterest user lands on a human-made and tested recipe, and by clicking through to the poster’s blog, the human who came up with the dish gets a kickback for their work drafting, testing, photographing, and posting their creation.

But in Cunningham’s situation, where the social web is awash with fake images that connect back to equally fake blogs, where fake alleged subject matter experts peddle unreal content for engagement and ad revenue? No one, not even “Aunt Carol,” really gets anything. Except, of course, the spammers.

“It’s devastating to us bloggers, the content creators,” said Farnsworth, the food blogger. “We’re the people who created the content that’s on the internet. And people are just going out of business.”

“Yeah,” she continued, a sense of deflation creeping into her voice. “It’s just a bunch of fraud.”

More on AI and Pinterest: Pinterest Is Being Strangled by AI Slop

Family Uses AI To Revive Dead Brother For Impact Statement in Killer’s Trial

In Arizona, the family of a man killed during a road rage incident has used artificial intelligence to revive their dead loved one in court — and the video is just as unsettling as you think.

As Phoenix’s ABC 15 reports, an uncanny simulacrum of the late Christopher Pelkey, who died from a gunshot wound in 2021, played in a courtroom at the end of his now-convicted killer’s trial.

“In another life, we probably could have been friends,” the AI version of Pelkey, who was 37 when he died, told his shooter, Gabriel Paul Horcasitas. “I believe in forgiveness.”

Despite that moving missive, it doesn’t seem that much forgiveness was in the cards for Horcasitas.

After viewing the video — which was created by the deceased man’s sister, Stacey Wales, using an “aged-up” photo Pelkey made when he was still alive — the judge presiding over the case ended up giving the man a 10-and-a-half year manslaughter sentence, which is a year more than what state prosecutors were asking for.

In the caption on her video, Wales explained that she, her husband Tim, and their friend Scott Yenzer made the “digital AI likeness” of her brother using a script she’d written alongside images and audio files they had of him speaking in a “prerecorded interview” taken months before he died.

“These digital assets and script were fed into multiple AI tools to help create a digital version of Chris,” Wales wrote, “polished by hours of painstaking editing and manual refinement.”

In her interview with ABC15, Pelkey’s sister insisted that everyone who knew her late brother “agreed this capture was a true representation of the spirit and soul of how Chris would have thought about his own sentencing as a murder victim.”

She added that creating the digital clone helped her and her family heal from his loss and left her with a sense of peace, though others felt differently.

“Can’t put into words how disturbing I find this,” writer Eoin Higgins tweeted of the Pelkey clone. “The idea of hearing from my brother through this tech is grotesque. Using it in a courtroom even worse.”

Referencing both the Pelkey video and news that NBC is planning to use late sports narrator Jim Fagan’s voice to do new promos this coming NBA season, a Bluesky user insisted that “no one better do this to me once I’m dead.”

“This AI necromancy bullshit is so creepy and wrong,” that user put it — and we must say, it’s hard to argue with that.

More on AI revivals: NBC Using AI to Bring Beloved NBA Narrator Jim Fagan Back From the Grave

College Students Are Sprinkling Typos Into Their AI Papers on Purpose

To bypass artificial intelligence writing detection, college students are reportedly adding typos into their chatbot-generated papers.

In a wide-ranging exploration into the ways AI has rapidly changed academia, students told New York Magazine that AI cheating has become so normalized, they’re figuring out creative ways to get away with it.

While it’s common for students — and for anyone else who uses ChatGPT and other chatbots — to edit the output of an AI chatbot, some are adding typos manually to make essays sound more human.

Some more ingenious users are advising chatbots to essentially dumb down their writing. In a TikTok viewed by NYMag, for instance, a student said she likes to prompt chatbots to “write [an essay] as a college freshman who is a li’l dumb” to bypass AI detection.

Stanford sophomore Eric told NYMag that his classmates have gotten “really good at manipulating the systems.”

“You put a prompt in ChatGPT, then put the output into another AI system, then put it into another AI system,” he said. “At that point, if you put it into an AI-detection system, it decreases the percentage of AI used every time.”

The irony, of course, is that students who go to such lengths to make their AI-generated papers sound human could be using that creativity to actually write the dang things.

Still, instructors are concerned by the energy students are expending on cheating with chatbots.

“They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays,” University of Iowa teaching assistant Sam Williams told the magazine. “And I get it, because I hated writing essays when I was in school.”

While assisting with a general education class on music and social change last fall, Williams said he was shocked by the change in tone and quality between students’ first assignment — a personal essay about their own tastes — and their second, which dug into the history of New Orleans jazz.

Not only did those essays sound different, but many included egregious factual errors, like the inclusion of Elvis Presley, who was neither part of the New Orleans scene nor a jazz musician.

“I literally told my class, ‘Hey, don’t use AI,'” the teaching assistant recalled. “‘But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out.'”

Students have seemingly taken that advice to heart — and Williams, like his colleagues around the country, is concerned about students taking their AI use ever further.

“Whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them,” the Iowa instructor said.

It’s a scary precedent indeed — and one that is, seemingly, continuing unabated.

More on AI cheating: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup

Nonverbal Neuralink Patient Is Using Brain Implant and Grok to Generate Replies

The third patient of Elon Musk’s brain computer interface company Neuralink is using the billionaire’s foul-mouthed AI chatbot Grok to speed up communication.

The patient, Bradford Smith, who has amyotrophic lateral sclerosis (ALS) and is nonverbal as a result, is using the chatbot to draft responses on Musk’s social media platform X.

“I am typing this with my brain,” Smith tweeted late last month. “It is my primary communication. Ask me anything! I will answer at least all verified users!”

“Thank you, Elon Musk!” the tweet reads.

As MIT Technology Review points out, the strategy could come with some downsides, blurring the line between what Smith intends to say and what Grok suggests. On one hand, the tech could greatly facilitate his ability to express himself. On the other hand, generative AI could be robbing him of a degree of authenticity by putting words in his mouth.

“There is a trade-off between speed and accuracy,” University of Washington neurologist Eran Klein told the publication. “The promise of brain-computer interface is that if you can combine it with AI, it can be much faster.”

Case in point: while replying to X user Adrian Dittmann — long suspected to be a Musk sock puppet — Smith used several em-dashes in his reply, a punctuation mark frequently favored by AI chatbots.

“Hey Adrian, it’s Brad — typing this straight from my brain! It feels wild, like I’m a cyborg from a sci-fi movie, moving a cursor just by thinking about it,” Smith’s tweet reads. “At first, it was a struggle — my cursor acted like a drunk mouse, barely hitting targets, but after weeks of training with imagined hand and jaw movements, it clicked, almost like riding a bike.”

Perhaps unsurprisingly, generative AI did indeed play a role.

“I asked Grok to use that text to give full answers to the questions,” Smith told MIT Tech. “I am responsible for the content, but I used AI to draft.”

However, he stopped short of elaborating on the ethical quandary of having a potentially hallucinating AI chatbot put words in his mouth.

Muddying matters even further is Musk’s control of Neuralink, Grok maker xAI, and X-formerly-Twitter. In other words, could the billionaire be influencing Smith’s answers? The fact that Smith is nonverbal makes that a difficult line to draw.

Nonetheless, the small chip implanted in Smith’s head has given him an immense sense of personal freedom. Smith has even picked up sharing content on YouTube. He has uploaded videos he edits on his MacBook Pro by controlling the cursor with his thoughts.

“I am making this video using the brain computer interface to control the mouse on my MacBook Pro,” his AI-generated and astonishingly natural-sounding voice said in a video titled “Elon Musk makes ALS TALK AGAIN,” uploaded late last month. “This is the first video edited with the Neuralink and maybe the first edited with a BCI.”

“This is my old voice narrating this video cloned by AI from recordings before I lost my voice,” he added.

The “voice clone” was created with the help of startup ElevenLabs, whose technology has become an industry standard for those suffering from ALS, and can read his written words aloud.

But Smith’s reliance on tools like Grok and OpenAI’s ChatGPT to speak again raises some fascinating questions about true authorship and freedom of self-expression for those who have lost their voices.

And Smith was willing to admit that sometimes, the ideas of what to say didn’t come directly from him.

“My friend asked me for ideas for his girlfriend who loves horses,” he told MIT Tech. “I chose the option that told him in my voice to get her a bouquet of carrots. What a creative and funny idea.”

More on Neuralink: Brain Implant Companies Apparently Have an Extremely Dirty Secret
