To bypass artificial intelligence writing detection, college students are reportedly adding typos into their chatbot-generated papers.
In a wide-ranging exploration of the ways AI has rapidly changed academia, students told New York Magazine that AI cheating has become so normalized that they’re figuring out creative ways to get away with it.
While it’s common for students — and for anyone else who uses ChatGPT and other chatbots — to edit the output of an AI chatbot, some are adding typos manually to make essays sound more human.
Some more ingenious users are instructing chatbots to essentially dumb down their writing. In a TikTok viewed by NYMag, for instance, a student said she likes to prompt chatbots to “write [an essay] as a college freshman who is a li’l dumb” to bypass AI detection.
Stanford sophomore Eric told NYMag that his classmates have gotten “really good at manipulating the systems.”
“You put a prompt in ChatGPT, then put the output into another AI system, then put it into another AI system,” he said. “At that point, if you put it into an AI-detection system, it decreases the percentage of AI used every time.”
The irony, of course, is that students who go to such lengths to make their AI-generated papers sound human could be using that creativity to actually write the dang things.
Still, instructors are concerned by the energy students are expending on cheating with chatbots.
“They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays,” University of Iowa teaching assistant Sam Williams told the magazine. “And I get it, because I hated writing essays when I was in school.”
While assisting with a general education class on music and social change last fall, Williams said he was shocked by the change in tone and quality between students’ first assignment — a personal essay about their own tastes — and their second, which dug into the history of New Orleans jazz.
Not only did those essays sound different, but many included egregious factual errors, like the inclusion of Elvis Presley, who was neither part of the New Orleans scene nor a jazz musician.
“I literally told my class, ‘Hey, don’t use AI,'” the teaching assistant recalled. “‘But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out.'”
Students have seemingly taken that advice to heart — and Williams, like his colleagues around the country, is concerned about students taking their AI use ever further.
“Whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them,” the Iowa instructor said.
It’s a scary precedent indeed — and one that is, seemingly, continuing unabated.
More on AI cheating: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup
The post College Students Are Sprinkling Typos Into Their AI Papers on Purpose appeared first on Futurism.
In Arizona, the family of a man killed during a road rage incident has used artificial intelligence to revive their dead loved one in court — and the video is just as unsettling as you think.
As Phoenix’s ABC 15 reports, an uncanny simulacrum of the late Christopher Pelkey, who died from a gunshot wound in 2021, played in a courtroom at the end of his now-convicted killer’s trial.
“In another life, we probably could have been friends,” the AI version of Pelkey, who was 37 when he died, told his shooter, Gabriel Paul Horcasitas. “I believe in forgiveness.”
Despite that moving missive, it doesn’t seem that much forgiveness was in the cards for Horcasitas.
After viewing the video — which was created by the deceased man’s sister, Stacey Wales, using an “aged-up” photo Pelkey made when he was still alive — the judge presiding over the case handed Horcasitas a ten-and-a-half-year manslaughter sentence, a year more than state prosecutors had asked for.
In the caption on her video, Wales explained that she, her husband Tim, and their friend Scott Yenzer made the “digital AI likeness” of her brother using a script she’d written alongside images and audio files they had of him speaking in a “prerecorded interview” taken months before he died.
“These digital assets and script were fed into multiple AI tools to help create a digital version of Chris,” Wales wrote, “polished by hours of painstaking editing and manual refinement.”
In her interview with ABC15, Pelkey’s sister insisted that everyone who knew her late brother “agreed this capture was a true representation of the spirit and soul of how Chris would have thought about his own sentencing as a murder victim.”
She added that creating the digital clone helped her and her family heal from his loss and left her with a sense of peace, though others felt differently.
“Can’t put into words how disturbing I find this,” writer Eoin Higgins tweeted of the Pelkey clone. “The idea of hearing from my brother through this tech is grotesque. Using it in a courtroom even worse.”
Referencing both the Pelkey video and news that NBC is planning to use late sports narrator Jim Fagan’s voice to do new promos this coming NBA season, a Bluesky user insisted that “no one better do this to me once I’m dead.”
“This AI necromancy bullshit is so creepy and wrong,” that user put it — and we must say, it’s hard to argue with that.
More on AI revivals: NBC Using AI to Bring Beloved NBA Narrator Jim Fagan Back From the Grave
The post Family Uses AI To Revive Dead Brother For Impact Statement in Killer’s Trial appeared first on Futurism.
Last year, Jesse Cunningham — a self-described “SEO specialist who leverages the power of AI to drive real results” — appeared in a livestream for a closed members-only group where SEO secrets are traded. He’d been invited to discuss his AI strategies for monetizing content on Facebook, where he claimed to have found financial success by flooding the Meta-owned platform with fake, AI-generated images of things like faux houseplants and ChatGPT-created recipes.
“Don’t ban me, people,” Cunningham jokes into a large microphone, explaining that one of his AI pages had previously been flagged by Meta for violating platform policies after he revealed its name in a public-facing YouTube video.
Cunningham explains that his preferred groups to target are devoted fandoms and the elderly. The former is easily excited, he posits, while the latter probably won’t understand that what they’re clicking on is synthetic at all.
“Best are voracious fan bases. Fan boys, fan girls,” Cunningham tells the group. “And an older demographic, where Aunt Carol doesn’t really know how to use Facebook, and she’s just likely to share everything.”
“I’m going after audience 50-plus female,” he reiterates, explaining that targeting older women on Facebook means his content can be cross-posted over on the aspirational image-sharing-and-sourcing platform Pinterest, the userbase of which is overwhelmingly made up of women.
“Why am I going after females? Because… I want to cross-pollinate the audience,” says Cunningham. “I want to kill two birds with one stone and dominate Pinterest and Facebook at the same time. Fifty-plus female is my demo.”
The recorded call is just under an hour in length. At one point, Cunningham triumphantly declares that he’s “starting new pages in the recipe niche” and wants “to disrupt that whole industry” because, in his telling, it’s “ripe for the taking.”
“Going back to the AI recipes, do you know if they actually work?” someone asks Cunningham later in the clip.
“Of course they work. ChatGPT told me they work,” Cunningham, who looks genuinely baffled by the question, responds. “What kind of question is that?”
Cunningham is one of many sloperators using AI to flood social media with AI content to make money.
The process goes like this. Cunningham publishes large numbers of AI-generated articles to websites helmed by made-up bloggers with AI-generated headshots, purporting to be experts in topics ranging from houseplants and recipes to DIY holiday crafts and nature scenes. Then he posts AI-generated images linking back to those sites on social media, where he claims to rake in cash — not by putting time and energy into photographing actual home gardening or drafting and testing new recipes, but by using AI to quickly and cheaply imitate traditional content creators’ final product.
Such zombie tactics, employed by Cunningham and others, are evident on his preferred platforms, Pinterest and Facebook, where users are increasingly made to wade through swamps of parasitic AI slop.
As Futurism reported earlier this year, Pinterest is facing a pervasive influx of AI-generated content masquerading as the real thing. The torrent of AI slop on Facebook is well-documented as well — last year, an in-depth 404 Media investigation revealed that AI slop farmers around the world had figured out how to use AI to generate engagement-bait imagery designed to earn cash by exploiting Facebook’s since-shuttered Performance Bonus program.
We highlighted Cunningham in our previous reporting about Pinterest. He’s an avid YouTuber, and we were struck by his candor as he publicly shared the sordid details of his slop farming process, which frequently includes copying the work of his competitors — real bloggers and online creators who say the AI influx on Pinterest, Facebook, and other platforms has had a destructive impact on their businesses.
“Across the board, like across the board, this is something that is talked about in blogging groups all the time, because it is devastating all of our businesses,” Rachel Farnsworth, a veteran food blogger of the website The Stay at Home Chef, told Futurism of the impact that schemes like Cunningham’s have had on her industry.
“It’s put a ton of people out of business,” she added.
We decided to dig deeper into Cunningham’s extensive content catalog, on YouTube and beyond, where we found a telling portrait of the layers of unreality shrouding the AI slop increasingly crowding the web — and of the attitudes of the slopageddon’s eager purveyors, down to their willingness to trick old ladies and copy others’ work.
***
According to Cunningham, AI offers a way to pretty much print money online.
“Pinterest is one of the easiest ways to make money online right now,” he declares in a November YouTube video titled “AI Pinterest Strategy for $15,942/MONTH.”
“Our goal is to catch fish,” he adds. He then clarifies: “fish is making money.”
But to “catch fish,” he emphasizes, quantity — made easy by generative AI tools — is key.
“Ten pins daily is not going to cut it,” he adds, explaining that he posts around 80 AI pins a day in his efforts to manipulate Pinterest’s algorithm — enough to get his pins to “cruising altitude,” he says, but not enough to get hit with a spam notice by the platform. “You’re not going to compete with me and the other people doing it at scale.”
But “luckily, nowadays,” he continues, “we have AI.”
His process, as Cunningham lays out across his videos, begins by tracking down existing pins that are already doing well.
In the November video, for instance, he homes in on a parenting-oriented blog called The Mummy Front. The blog isn’t his; instead, Cunningham seeks to use AI to replicate someone else’s viral content at scale.
“So this one here — ‘Andie DIY Ikea Hacks, Crafts to make,’ blah blah blah blah blah — they crush it for Christmas,” Cunningham remarks. “So I can come in here to their Christmas board… and now we’re looking around. We can figure out, all right, this is what works with Christmas, because this is a top-five Christmas page on all of Pinterest.”
Cunningham then zeroes in on one of The Mummy Front’s top-performing pins, which links back to a listicle-style blog post about Christmas wrapping paper ideas.
Drawing on that post, Cunningham takes to an AI-powered content creation tool called Content Goblin where, after inputting just a headline into a text box — he requests a post for “47 Gift Wrapping Ideas You Need To Try for CHRISTMAS” — he’s able to churn out a lengthy listicle in a matter of moments, complete with AI-generated images.
Then he uploads the AI-generated blog post, without editing, to a faux blogging site he runs called Bonsai Mary.
Bonsai Mary is helmed by an “author” named “Mary Smith,” who features prominently on the site’s landing page, along with an AI-generated headshot.
Despite its bonsai-focused title, the blog’s content is surprisingly wide-ranging — an oddity that its alleged blogger-in-chief, Smith, speaks to in a first-person missive published on its homepage.
“Welcome to Bonsai Mary — this website has been around since 2009! The main focus here is plants but I also love to share recipes and interior decorating things I love,” reads the webpage. “My name is Mary Smith, a seasoned gardener and bonsai artist and author of BonsaiMary.com. I love nature and any new plant I haven’t seen before!”
But Smith is clearly not a real person. In addition to her AI-generated headshot, she has no publishing history outside of Bonsai Mary — except for a blog titled Off Grid Dreaming, which is also operated by Cunningham, according to other YouTube videos.
What’s more, though it’s technically true that the Bonsai Mary website has been around since 2009, archived versions of the site show that it was originally founded by a woman named Mary C. Miller, a real American bonsai artist and author.
It’s unclear when the blog’s domain first switched hands. But according to archived snapshots documented in the Internet Archive’s Wayback Machine, “Mary Smith” didn’t appear until late 2023.
Finally, to publish his AI-generated images to Pinterest, Cunningham uses ChatGPT to drum up short, Pinterest-optimized descriptions for each image. He throws all of that into a spreadsheet, and using a planning tool, mass-uploads links to his synthetic blog. (In other videos, he uses yet another AI tool to overlay headline text onto AI-generated imagery with little effort.)
And from there, he says, he’s “cruising.”
“You,” he tells the viewer, “can use all these tools to get a competitive advantage on everyone.”
Cunningham creates content on a variety of topics — or “niches,” as folks in his industry say — from cooking and recipes to interior design and decor.
The Pinterest account page for Bonsai Mary, which lists 8.6 million monthly views on its profile, says in its bio that “we create AI pins and blog posts for all to enjoy.” The associated profile for Off Grid Dreaming, which lists around 20.2k monthly views, fails to issue a similar disclaimer.
“At Off Grid Dreaming,” reads its bio, “we specialize in designing sustainable, off-grid living spaces that blend style, comfort, and functionality.” (There’s no evidence that Cunningham, or “Mary,” actually “specialize” in anything beyond SEO.)
But you’d have to actually visit the Bonsai Mary profile page to see that disclaimer, something not everyone who interacts with an individual pin is going to do. None of the individual pins posted to Pinterest by Cunningham that we’ve discovered specifically denote the use of AI through tools like watermarks or text captions. And despite his apparent willingness to broadcast the details of his AI-powered assembly line to other SEOers on YouTube and in members’ forums, we’ve yet to see Cunningham add AI disclaimers to his many AI-generated blog posts or websites.
That appears to be intentional, we found when we signed up for a free, six-episode instructional video series Cunningham offers about his Facebook scheming.
In the second episode, titled “The Basics,” Cunningham explains why he prefers to use AI images of fake people on his pages.
The “three most important parts” of setting up a Facebook page, says Cunningham, are the page’s title, introductory paragraph, and the associated profile picture, the latter of which he refers to as a “logo.” Overlaid on the screen is a page called “Houseplant Community,” which utilizes the same unreal image attributed to the fake author featured over at Bonsai Mary and Off Grid Dreaming.
“Those all really come into play with user interaction,” he explains. “People feel inclined to interact when they see another person… the mind automatically perceives, ‘oh, this is a person posting this, not a page.’ Therefore, they’re more likely to share a post, comment on a post, just engage with a post in general.”
“So I like having people’s faces as the logo,” he adds.
It’s unclear how much money Cunningham actually makes from his AI schemes, and how much of his income comes from people paying him to learn how to create their own AI content. All his videos about Pinterest link back to a paid “AI Pinterest Masterclass” that he markets on his personal website, and he also runs a “private, paid” members group for Pinterest and Facebook tactic trading.
We asked Cunningham how much of his revenue he derives from his AI content versus how much he brings in through his paid classes and forums, but he didn’t respond. Needless to say, if someone really held a low-effort secret to making enormous sums of money online — especially through practices that some view as unethical and that could prompt a platform to change its policies — logic would dictate that they’d keep it to themselves and quietly pull in the cash rather than selling it as a get-rich-quick class.
We also asked Cunningham about the ethics of his strategies on Pinterest and Facebook and his use of fake authors to legitimize social media accounts and synthetic websites, as well as about his practice of targeting older internet users, specifically because of what he perceives as an inability to understand what they’re clicking on, but received no response.
Pinterest and Facebook declined to respond on the record, though both emphasized on background that they’re working on systems to better detect and label AI content.
***
A few days after our initial investigation into Pinterest’s slop problem, Cunningham took to YouTube to reflect on why, in his view, AI content on Pinterest is so “polarizing.”
“There’s a huge problem on Pinterest right now,” Cunningham tells the camera. “It has to do with what it has to do with money, because it always has to do with money.”
To demonstrate his point, Cunningham then goes to Content Goblin to quickly whip up an AI image-smattered listicle for the headline “Yellow Bedroom Ideas.” The whole post takes just a few seconds to produce.
“Imagine if you were old school — imagine if you were on this platform, on Pinterest, say three years ago, two years ago… how hard would it be to come up with this bedroom?” he asks, pausing on an image of a bedroom with a yellow-toned bed. “The simple bed right here, with the pillows, it’d be a pain. It’d be very hard to put that on Pinterest, because you’d have to go somewhere and take photos… that’s a lot of money right there if it was real.”
“There used to be a huge moat around Pinterest for creators,” says Cunningham. “Now, things have changed.”
“That’s the problem,” he continues. Creators “used to get tons of traffic, and then people like me started talking about AI on Pinterest, right, that’s why they’re so angry… there’s money here, and we disrupted the flow of money.”
In the video, Cunningham describes his approach as driving a car — as opposed to traditional creators, who in his view are still riding a horse.
“Old school creators are so angry about AI. Have you ever wondered why that is?” he ponders. “It’s because it’s the intersection of money, and we found the honeyhole. We found where all the money is.”
In that sense, Cunningham is right: he and others have found a loophole to exploit. AI provides them with a crude shortcut to avoid the overhead — time, money, energy — that comes with making real stuff. For pennies on the dollar, Cunningham can produce more content, and rake in some of the cash that might otherwise be going to the “old school” folks behind real, human-made images and blog posts.
It would be naive to suggest that social media has always rewarded helpful quality content, and Cunningham and other slop farmers certainly wouldn’t be the first to use seamy tactics to juke online algorithms for profit. But the speed and scale at which AI slop is altering the web as we know it is astonishing — and, in the realm of Pinterest and other social media channels, it’s raising real challenges for good-faith creators trying to monetize their online businesses, and making the internet a muddier place to spend time as a user.
That’s the reality that Cunningham, across his many videos, fails to grapple with. Sure, a large part of the content creation formula has always been feeding the algorithmic beast, which has often incentivized quantity over quality.
In an ideal scenario, though, there’s a genuine exchange of value. An interior designer uploads an image of a real-world yellow-toned bedroom they pulled together, and a user clicks through to their website, earning them some visibility and ad revenue to support their business; the user, meanwhile, finds real-world inspiration and maybe even reaches out for a consult. Or perhaps a Pinterest user lands on a human-made and tested recipe, and by clicking through to the poster’s blog, the human who came up with the dish gets a kickback for their work drafting, testing, photographing, and posting their creation.
But in Cunningham’s situation, where the social web is awash with fake images that connect back to equally fake blogs, where fake alleged subject matter experts peddle unreal content for engagement and ad revenue? No one, not even “Aunt Carol,” really gets anything. Except, of course, the spammers.
“It’s devastating to us bloggers, the content creators,” said Farnsworth, the food blogger. “We’re the people who created the content that’s on the internet. And people are just going out of business.”
“Yeah,” she continued, a sense of deflation creeping into her voice. “It’s just a bunch of fraud.”
More on AI and Pinterest: Pinterest Is Being Strangled by AI Slop
The post Slop Farmer Boasts About How He Uses AI to Flood Social Media With Garbage to Trick Older Women appeared first on Futurism.
The third patient of Elon Musk’s brain computer interface company Neuralink is using the billionaire’s foul-mouthed AI chatbot Grok to speed up communication.
The patient, Bradford Smith, who has amyotrophic lateral sclerosis (ALS) and is nonverbal as a result, is using the chatbot to draft responses on Musk’s social media platform X.
“I am typing this with my brain,” Smith tweeted late last month. “It is my primary communication. Ask me anything! I will answer at least all verified users!”
“Thank you, Elon Musk!” the tweet reads.
As MIT Technology Review points out, the strategy could come with some downsides, blurring the line between what Smith intends to say and what Grok suggests. On one hand, the tech could greatly facilitate his ability to express himself. On the other hand, generative AI could be robbing him of a degree of authenticity by putting words in his mouth.
“There is a trade-off between speed and accuracy,” University of Washington neurologist Eran Klein told the publication. “The promise of brain-computer interface is that if you can combine it with AI, it can be much faster.”
Case in point, while replying to X user Adrian Dittmann — long suspected to be a Musk sock puppet — Smith used several em-dashes in his reply, a symbol frequently used by AI chatbots.
“Hey Adrian, it’s Brad — typing this straight from my brain! It feels wild, like I’m a cyborg from a sci-fi movie, moving a cursor just by thinking about it,” Smith’s tweet reads. “At first, it was a struggle — my cursor acted like a drunk mouse, barely hitting targets, but after weeks of training with imagined hand and jaw movements, it clicked, almost like riding a bike.”
Perhaps unsurprisingly, generative AI did indeed play a role.
“I asked Grok to use that text to give full answers to the questions,” Smith told MIT Tech. “I am responsible for the content, but I used AI to draft.”
However, he stopped short of elaborating on the ethical quandary of having a potentially hallucinating AI chatbot put words in his mouth.
Muddying matters even further is Musk’s control of Neuralink, Grok maker xAI, and X-formerly-Twitter. In other words, could the billionaire be influencing Smith’s answers? The fact that Smith is nonverbal makes it a difficult line to draw.
Nonetheless, the small chip implanted in Smith’s head has given him an immense sense of personal freedom. Smith has even picked up sharing content on YouTube. He has uploaded videos he edits on his MacBook Pro by controlling the cursor with his thoughts.
“I am making this video using the brain computer interface to control the mouse on my MacBook Pro,” his AI-generated and astonishingly natural-sounding voice said in a video titled “Elon Musk makes ALS TALK AGAIN,” uploaded late last month. “This is the first video edited with the Neuralink and maybe the first edited with a BCI.”
“This is my old voice narrating this video cloned by AI from recordings before I lost my voice,” he added.
The “voice clone” was created with the help of the startup ElevenLabs, whose technology has become an industry standard for people with ALS, and it can read Smith’s written words aloud.
But Smith’s reliance on tools like Grok and OpenAI’s ChatGPT to speak again raises some fascinating questions about true authorship and freedom of self-expression for those who have lost their voice.
And Smith was willing to admit that sometimes, the ideas of what to say didn’t come directly from him.
“My friend asked me for ideas for his girlfriend who loves horses,” he told MIT Tech. “I chose the option that told him in my voice to get her a bouquet of carrots. What a creative and funny idea.”
More on Neuralink: Brain Implant Companies Apparently Have an Extremely Dirty Secret
The post Nonverbal Neuralink Patient Is Using Brain Implant and Grok to Generate Replies appeared first on Futurism.
OpenAI may be raking in the investor dough, but thanks in part to erstwhile cofounder Elon Musk, the company won’t be going entirely for-profit anytime soon.
In a blog post this week, the Sam Altman-run company announced that it would remain under the control of its original non-profit governing board as it revises the planned restructuring of its for-profit arm.
“Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC),” the post reads, describing a PBC as a “purpose-driven company structure that has to consider the interests of both shareholders and the mission.”
Though Musk was not named, that allusion to “the mission” — the building of artificial general intelligence (AGI) that “benefits all of humanity” — hearkens back to the billionaire’s lawsuit alleging that OpenAI strayed from said purpose when initially launching its for-profit arm in 2019 upon his exit.
OpenAI claims in its post that it came to the decision to remain under the control of the non-profit board — the same one that fired Altman in late November 2023, only to reinstate him a few days later — “after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.”
Late last December, amid Musk’s ongoing suit that was initially filed in March 2024, the company announced its plans to restructure into a PBC that would help it “raise more capital than we’d imagined” while staying on-mission.
That plan, as CNN reports, set off alarm bells about how OpenAI would balance raising gobs of money with its beneficial AGI mission. This latest move appears to be its response — though according to Musk’s attorney Marc Toberoff, the PBC announcement “changes nothing.”
“OpenAI’s announcement is a transparent dodge that fails to address the core issues: charitable assets have been and still will be transferred for the benefit of private persons,” Toberoff said in a statement provided to Bloomberg. “The founding mission remains betrayed.”
In a rebuttal to the same outlet, an OpenAI insider hit back at Musk and his “baseless lawsuit,” which “only proves that it was always a bad-faith attempt to slow us down.”
Accusations aside, this is still a pretty far cry from turning OpenAI into a bona fide for-profit venture — and regardless of what the company claims, Musk’s almost certainly jealousy-based lawsuit has played a role in making sure that doesn’t happen.
More on OpenAI moves: OpenAI Trying to Buy Chrome So It Can Ingest Your Entire Online Life to Train AI
The post OpenAI Forced to Abandon Plans to Become For-Profit appeared first on Futurism.
Walking the floor at last week’s RSA Conference in San Francisco, I found that artificial intelligence dominates the conversation among security professionals. Discussions spanned both harnessing AI for security tasks – ‘agents’ were a recurring theme – and the distinct challenge of securing AI systems themselves, particularly foundation models. The rapidly growing pool of powerful open-weights models – ranging from Meta’s Llama and Google’s Gemma to notable newcomers from China such as Alibaba’s Qwen and DeepSeek – underscores both immense opportunities and heightened risks for AI teams.
However, mention open-weights models to security practitioners, and the conversation quickly turns to supply chain risks. The proliferation of derivatives – dozens can appear on platforms like Hugging Face shortly after a major release – presents a significant validation challenge, one that vendors of proprietary models mitigate through tighter control over distribution and modification. A distinct and often more acute set of concerns arises specifically for models originating from China. Beyond the general supply chain issues, these models face scrutiny related to national security directives, data sovereignty laws, regulatory compliance gaps, intellectual property provenance, potential technical vulnerabilities, and broader geopolitical tensions, creating complex risk assessments for potential adopters.
So, are open-weights models originating from China inherently riskier from a technical security perspective than their counterparts from elsewhere? Coincidentally, I discussed this very topic recently with Jason Martin, an AI Security Researcher at HiddenLayer. His view, which resonates with my own assessment, is that the models themselves – the weights and architecture – do not present unique technical vulnerabilities simply because of their country of origin. As Martin put it, “There’s nothing intrinsic in the weights that says it’s going to compromise you,” nor will a model installed on-premises autonomously transmit data back to China. HiddenLayer’s own forensic analysis of DeepSeek-R1 supports this; while identifying unique architectural signatures useful for detection and governance, their deep dive found no evidence of country-specific backdoors or vulnerabilities.
Therefore, while the geopolitical and regulatory concerns surrounding Chinese technology are valid and must factor into any organization’s risk calculus, they should be distinguished from the technical security posture of the models themselves. From a purely technical standpoint, the security challenges posed by models like Qwen or DeepSeek are fundamentally the same as those posed by Llama or Gemma: ensuring the integrity of the specific checkpoint being used and mitigating supply chain risks inherent in the open-weights ecosystem, especially concerning the proliferation of unvetted derivatives. The practical security work remains focused on validation, provenance tracking, and robust testing, regardless of the model’s flag.
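To make the checkpoint-integrity point concrete, here is a minimal sketch, in Python, of the kind of validation a team might run before loading any open-weights checkpoint: hash every weight file and compare against digests pinned when the checkpoint was first vetted. The manifest format, file names, and directory paths are assumptions for illustration, not part of any particular platform's tooling.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-gigabyte weight shards never load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_checkpoint(model_dir: str, manifest_path: str) -> bool:
    """Compare every file against hashes recorded at review time.

    The manifest is a hypothetical JSON file mapping relative file names
    to expected SHA-256 digests, written when the checkpoint was vetted.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    all_match = True
    for name, expected in manifest.items():
        actual = sha256_of(Path(model_dir) / name)
        if actual != expected:
            print(f"MISMATCH: {name}")
            all_match = False
    return all_match


if __name__ == "__main__":
    # Placeholder paths; point these at a real checkpoint and a vetted manifest.
    if verify_checkpoint("./open-weights-checkpoint", "./trusted_manifest.json"):
        print("Checkpoint matches the vetted manifest.")
    else:
        print("Checkpoint differs from what was reviewed; do not deploy.")
```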
Ultimately, the critical factor for teams building AI applications isn’t the national origin of an open-weights model, but the rigor of the security validation and governance processes applied before deployment. Looking ahead, I expect the industry focus to intensify on developing better tools and practices for this: more sophisticated detectors for structured-policy exploits, wider adoption of automated red-teaming agents, and significantly stricter supply-chain validation for open checkpoints. Bridging the current gap between rapid AI prototyping and thorough security hardening, likely through improved interdisciplinary collaboration between technical, security, and legal teams, will be paramount for the responsible adoption of any powerful foundation model.
The post Are Chinese open-weights Models a Hidden Security Risk? appeared first on Gradient Flow.
Evangelos Simoudis occupies a valuable vantage point at the intersection of AI innovation and enterprise adoption. Because he engages directly with both corporations navigating AI implementation and the startups building new solutions, I always appreciate checking in with him. His insights are grounded in a unique triangulation of data streams, including firsthand information from his AI-focused portfolio companies and their clients, confidential advisory work with large corporations, and discussions with market analysts. Below is a heavily edited excerpt from our recent conversation about the current state of AI adoption.
There’s growing interest in AI broadly, but it’s important to distinguish between generative AI and discriminative AI (also called traditional AI). Discriminative AI adoption is progressing well, with many experimental projects now moving to deployment with allocated budgets.
For generative AI, there’s still a lot of experimentation happening, but fewer projects are moving from POCs to actual deployment. We expect more generative AI projects to move toward deployment by the end of the year, but we’re still in the hype stage rather than broad adoption.
As for agentic systems, we’re seeing even fewer pilots. Enterprises face a “bandwidth bottleneck” similar to what we see in cybersecurity – there are so many AI systems being presented to executives that they only have limited capacity to evaluate them all.
Three major use cases stand out: customer support, programming assistance, and intelligent document processing.
These three areas are where we see the most significant movement from experimentation to production, both in solutions from private companies and internal corporate efforts.
Financial services and technology-driven companies are at the forefront. For example:
Interestingly, automotive is not among the leading industries in generative AI adoption. They’re facing more immediate challenges like tariff issues that are taking priority over AI initiatives.
Three main characteristics stand out:
A good example is Klarna (the financial services company from Sweden), which initially tried using AI-only customer support but had to modify their approach after discovering issues with customer experience. What’s notable is both their initial willingness to completely change their business process and their flexibility to adjust when the original approach didn’t work optimally.
Data strategy is critically important but often underestimated. One of the biggest mistakes companies make is assuming they can simply point generative AI at their existing data without making changes to their data strategy or platform.
When implementing generative AI, companies need to understand what they’re trying to accomplish. Different approaches – whether using off-the-shelf closed models, fine-tuning open-source models, or building their own language models – each require an associated data strategy. This means not only having the appropriate type of data but also performing the appropriate pre-processing.
Unfortunately, this necessity isn’t always well communicated by vendors to their clients, leading to confusion and resistance. Many executives push back when told they need to reconfigure, clean, or label their data beyond what they’ve already done.
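To illustrate what even minimal pre-processing can involve, here is a toy sketch of the cleaning and chunking step that typically precedes pointing a generative model, or a retrieval index, at internal documents. The chunk size and the sample text are placeholders; real pipelines layer deduplication, PII handling, and labeling on top of this.

```python
import re


def clean_text(raw: str) -> str:
    """Minimal cleanup: strip each line, drop blanks, and collapse runs of whitespace."""
    lines = [line.strip() for line in raw.splitlines()]
    lines = [line for line in lines if line]
    return re.sub(r"\s+", " ", " ".join(lines))


def chunk_text(text: str, max_words: int = 200) -> list[str]:
    """Split a document into word-bounded chunks suitable for indexing, labeling, or fine-tuning."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


if __name__ == "__main__":
    raw_document = "Quarterly update.\n\n   Revenue grew 12% year over year.\n\n(internal use only)"
    for chunk in chunk_text(clean_text(raw_document), max_words=8):
        print(chunk)
```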
There’s significant confusion about what models companies need. Key considerations include:
The pace at which new models are released adds to this confusion. The hyperscalers (large cloud providers like Microsoft Azure, Google Cloud, AWS) are making strong inroads as one-stop solutions.
Regarding open weights versus proprietary models, the decision depends on what you’re trying to accomplish, along with considerations of cost, latency, and the talent you have available. The ideal strategy is to architect your application to be model-agnostic or even use multiple models.
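As a rough illustration of what “model-agnostic” can mean in practice, the sketch below hides the model behind a single interface, so a hosted proprietary model, a fine-tuned open-weights model, or a local stub can be swapped without touching application logic. The class and function names here are hypothetical, not drawn from any vendor's SDK.

```python
from typing import Protocol


class TextGenerator(Protocol):
    """Anything that turns a prompt into text; the application depends only on this."""

    def generate(self, prompt: str) -> str: ...


class StubGenerator:
    """Trivial local implementation, useful for tests or as a fallback model."""

    def generate(self, prompt: str) -> str:
        return f"[stub response to: {prompt[:40]}...]"


def summarize_ticket(ticket_text: str, model: TextGenerator) -> str:
    """Application code never names a specific model, so swapping providers is a one-line change."""
    prompt = f"Summarize the following support ticket in two sentences:\n\n{ticket_text}"
    return model.generate(prompt)


if __name__ == "__main__":
    print(summarize_ticket("Customer cannot reset their password after the latest update.", StubGenerator()))
```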
There are also concerns about using models from certain geographies, such as Chinese models, due to security considerations, but this is just one factor in the complex decision-making process.
The typical hierarchy seems to be:
Enterprises are weighing whether to pursue a best-of-breed strategy or an all-in-one solution, and hyperscalers are making strong inroads offering the latter, integrating various capabilities including risk detection.
The lack of robust tooling around ML Ops (Machine Learning operations) and LLM Ops (Large Language Model operations) is one reason why many companies struggle to move from experimentation to production.
We’re seeing strong interest in the continuum between data ops, model ops (including ML ops and LLM ops), and DevOps. The hyperscalers don’t have the strongest solutions for these operational challenges, creating an opportunity for startups.
Retrieval-Augmented Generation (RAG) is definitely the dominant pattern moving into production. Corporations seem most comfortable with it, likely because it requires the least amount of fundamental change and investment compared to fine-tuning or building models from scratch.
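For readers less familiar with the pattern, here is a toy sketch of the RAG idea: retrieve the internal documents most relevant to a question, then hand them to the model inside the prompt. The word-overlap scoring below is a crude stand-in for an embedding-based retriever, and the sample documents are invented for illustration.

```python
def score(query: str, document: str) -> int:
    """Crude relevance score: number of words shared between query and document.
    A production system would use embeddings and a vector index instead."""
    return len(set(query.lower().split()) & set(document.lower().split()))


def build_rag_prompt(question: str, documents: list[str], top_k: int = 2) -> str:
    """Pick the top_k most relevant documents and inline them as context for the model."""
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    docs = [
        "Refunds are processed within 5 business days of approval.",
        "Our warehouse ships orders Monday through Friday.",
        "Password resets require the account holder's registered email.",
    ]
    print(build_rag_prompt("How long do refunds take to process?", docs))
```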
Regarding knowledge graphs and neuro-symbolic systems (combining neural networks with symbolic reasoning, often via graphs), we see the underlying technologies becoming more important in system architecture. However, we’re not seeing significant inbound demand for GraphRAG and graph-based solutions from corporations yet; it’s more of an educational effort currently. Palantir is one company notably pushing a knowledge graph-based approach.
Currently, we’re seeing individuals working with at most one agent (often called a co-pilot). However, there’s confusion about terminology – we need to distinguish between chatbots, co-pilots, and true agents.
A true agent needs the ability to perceive its environment, reason about it, remember past actions, and learn from experience. Most systems promoted as agents today don’t have all of these capabilities.
What we have today is mostly single human-single agent interactions. The progression would be to single human-multiple agents before we can advance to multiple agents interacting among themselves. While there’s interest and experimentation with agents, I haven’t seen examples of true agents working independently that enterprises can rely on.
In the next 6-12 months, I expect to see more generative AI applications moving to production across more industries, starting with the three primary use cases mentioned earlier (customer support, programming, intelligent documents).
Success will be judged on CFO-friendly metrics: productivity lift, cost reduction, higher customer satisfaction, and revenue generation. If these implementations prove successful with measurable business impacts, then moving toward agent-driven systems will become easier.
However, a major concern is that the pace of adoption might not be as fast as technology providers hope. The willingness and ability of organizations to change their underlying business processes remains a significant hurdle.
I don’t believe in camera-only systems for self-driving cars. While camera-only systems might work in certain idealized environments without rain or fog, deploying one platform across a variety of complex environments with different weather conditions requires a multi-sensor approach (including LiDAR, radar, cameras).
The cost of sensors is decreasing, making it more feasible for companies to incorporate multiple sensors. The key question is determining the optimal number of each type of sensor needed to operate safely in various environments. Fleet operators like Waymo or Zoox have an advantage here because they work with a single type of vehicle with defined geometry and sensor stack.
Teleoperations are a critical yet often overlooked aspect of current autonomous vehicle deployments. What’s rarely discussed is the ratio of teleoperators to vehicles, which significantly impacts the economics of these systems. Having one teleoperator per 40 vehicles is very different from having one per four vehicles.
Until there’s transparency around these numbers, it’s very difficult to accurately assess which companies have the most efficient and scalable autonomous driving systems. In essence, many current autonomous vehicle systems are multi-agent systems with humans in the loop.
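To see why that ratio matters, a back-of-the-envelope calculation helps; the annual cost per teleoperator below is an assumed placeholder, not a figure reported by any operator.

```python
def teleop_cost_per_vehicle(annual_cost_per_operator: float, vehicles_per_operator: int) -> float:
    """Annual teleoperation labor cost attributed to each vehicle in the fleet."""
    return annual_cost_per_operator / vehicles_per_operator


if __name__ == "__main__":
    assumed_annual_cost = 100_000.0  # hypothetical fully loaded cost per teleoperator
    for ratio in (4, 40):
        per_vehicle = teleop_cost_per_vehicle(assumed_annual_cost, ratio)
        print(f"1 operator per {ratio} vehicles -> ${per_vehicle:,.0f} per vehicle per year")
```

Under those assumed numbers, the per-vehicle burden differs by an order of magnitude, which is why the undisclosed ratio makes the economics so hard to compare across companies.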
The post Generative AI in the Real World: Lessons From Early Enterprise Winners appeared first on Gradient Flow.