Journalists at Chicago Newspaper “Deeply Disturbed” That “Disaster” AI Slop Was Printed Alongside Their Real Work


Writers at The Chicago Sun-Times, a daily newspaper owned by Chicago Public Media, are speaking out following the paper's publishing of AI-generated misinformation, warning that the "disaster" content threatens the paper's reputation and hard-earned reader trust.

The Sun-Times came under fire this week after readers called attention to a “summer reading list” published in the paper’s weekend edition that recommended books that turned out to be completely nonexistent. The books were all attributed to real, well-known authors, but ten out of the 15 listed titles didn’t actually exist. When 404 Media got in touch with the bylined author, he confirmed he’d used AI to drum up the list.

But the writer said he hadn’t double-checked the accuracy of the AI-generated reading list. The list was just one small piece of a 64-page “Heat Index” guide to summer, which, as the Sun-Times noted in its response to Futurism and others, had been provided by a third-party — not by the Sun-Times’ own newsroom or other staff. (Other sections within the “best of summer” feature, The Verge found, included similar erroneous and fabricated attribution issues that hinted at AI use.)

Shortly thereafter, 404 Media confirmed with the Sun-Times that the content was provided by King Features, a subsidiary of the media giant Hearst, and wasn't reviewed by the Sun-Times before publishing.

“Historically, we don’t have editorial review from those mainly because it’s coming from a newspaper publisher, so we falsely made the assumption there would be an editorial process for this,” Victor Lim, a spokesperson for Chicago Public Media, told 404 Media. “We are updating our policy to require internal editorial oversight over content like this.”

Lim added that Chicago Public Media is “reviewing” its relationship with Hearst, which owns dozens of American newspapers and magazines. The Sun-Times has since posted a lengthy response online apologizing for the AI-spun misinformation making its way to print, while promising to change its editorial policies to protect against such gaffes in the future.

The human journalists at the paper have responded, too.

In a forceful statement provided to media outlets, including Futurism, the paper's union, the Chicago Sun-Times Guild, yesterday admonished the publication of the content. It emphasized that the 60-plus page section wasn't the product of its newsroom, and said it was "deeply disturbed" to find undisclosed AI-generated content "printed alongside" the work of the paper's journalists.

The Guild’s statement reads in full:

The Sun-Times Guild is aware of the third-party “summer guide” content in the Sunday, May 18 edition of the Chicago Sun-Times newspaper. This was a syndicated section produced externally without the knowledge of the members of our newsroom.

We take great pride in the union-produced journalism that goes into the respected pages of our newspaper and on our website. We’re deeply disturbed that AI-generated content was printed alongside our work. The fact that it was sixty-plus pages of this “content” is very concerning — primarily for our relationship with our audience but also for our union’s jurisdiction.

Our members go to great lengths to build trust with our sources and communities and are horrified by this slop syndication. Our readers signed up for work that has been vigorously reported and fact-checked, and we hate the idea that our own paper could spread computer- or third-party-generated misinformation. We call on Chicago Public Media management to do everything it can to prevent repeating this disaster in the future.

They’re right that reader trust is fundamental to the work of journalism, and it’s an easy thing to lose. Other AI scandals have gone hand-in-hand with reputational damage, as in the cases of CNET and Sports Illustrated, and we’ve seen journalists and their unions from around the country issue similar statements following instances of controversial AI use by publishers.

This is also the latest instance of third-party media companies distributing AI content to legitimate publishers, in many cases without the direct knowledge of those publishers. As a 2024 investigation by Futurism found, a third-party media company called AdVon Commerce used a proprietary AI tool to create articles for dozens of publishers including Sports Illustrated and The Miami Herald; that content was published under the bylines of fake writers with AI-generated headshots and phony bios, manufacturing an air of faux legitimacy. Some publishers, including the Miami Herald and other local newspapers belonging to the McClatchy publishing network, scrubbed their sites of the content following our investigation, saying they were unaware of AI use.

Here, it seems the editorial process was so lacking that AI-generated errors made their way through not just one but two reputable American publishers before winding up in the Sun-Times' printed edition. (The freelance writer Joshua Friedman confirmed on Bluesky that the error-riddled "Heat Index" guide was also published in The Philadelphia Inquirer.) As the paper's union emphasized in its statement, that meant the content ran alongside the journalism that human media workers stake their careers on.

More on AI and journalism: Quartz Fires All Writers After Move to AI Slop

The post Journalists at Chicago Newspaper “Deeply Disturbed” That “Disaster” AI Slop Was Printed Alongside Their Real Work appeared first on Futurism.

Chicago Newspaper Caught Publishing a “Summer Reads” Guide Full of AI Slop


The Chicago Sun-Times, a daily nonprofit newspaper owned by Chicago Public Media, published a "summer reading list" featuring wholly fabricated books, the result of running unverified AI slop in its pages.

An image of a “Summer reading list for 2025” was first shared to Instagram by a book podcaster who goes by Tina Books and was circulated on Bluesky by the novelist Rachael King. The newspaper’s title and the date of the page’s publication are visible in the page’s header.

The page was included in a 64-page “Best of Summer” feature, and as the author, Marco Buscaglia, told 404 Media, it was generated using AI.

“I do use AI for background at times but always check out the material first,” Buscaglia told 404 Media. “This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses.”

“On me 100 percent and I’m completely embarrassed,” he added.

At first glance, the list is unassuming.

“Whether you’re lounging by the pool, relaxing on sandy shores or enjoying the longer daylight hours in your favorite reading spot,” reads the list’s introduction, “these 15 titles — new and old — promise to deliver the perfect summer escape.”

The book titles themselves are unassuming, too. The newspaper recommends titles like the ethereal-sounding “Tidewater Dreams,” which it says was written by the Chilean-American novelist Isabel Allende; “The Last Algorithm,” purported to be a new sci-fi thriller by Andy Weir; and “The Collector’s Piece,” said to be written by the writer Taylor Jenkins Reid about a “reclusive art collector and the journalist determined to uncover the truth behind his most controversial acquisition.”

But as we independently confirmed, though these authors are real and well-known, these books are entirely fake, as are several others listed on the page. Indeed, the first ten of the fifteen titles on the Sun-Times list either don't exist at all or are real books that weren't written by the authors to whom the Sun-Times attributes them.

Fabrications like made-up citations are commonplace in AI-generated content, and a known risk of using generative AI tools like ChatGPT.

We reached out to the Sun-Times and its owner, Chicago Public Media, which notably also owns the beloved National Public Radio station WBEZ Chicago. In an email, a spokesperson emphasized that the content wasn’t created or approved by the Sun-Times newsroom and that the paper was actively investigating.

“We are looking into how this made it into print as we speak,” read the email. “This is licensed content that was not created by, or approved by, the Sun-Times newsroom, but it is unacceptable for any content we provide to our readers to be inaccurate. We value our readers’ trust in our reporting and take this very seriously. More info will be provided soon as we investigate.”

This was echoed by Buscaglia, who told 404 Media that the content was created to be part of a “promotional special section” not specifically targeted to Chicago.

"It's supposed to be generic and national," Buscaglia told 404 Media. "We never get a list of where things ran."

This isn't the first time that third-party content created with AI has been published by journalistic institutions without any AI disclosure, as Futurism's investigation last year into AdVon Commerce revealed.

Readers are understandably upset and demanding answers.

"How did the editors at the Sun-Times not catch this? Do they use AI consistently in their work?" reads a Reddit post to r/Chicago about the scandal. "As a subscriber, I am livid!"

“What is the point of subscribing to a hard copy paper,” the poster continued, “if they are just going to include AI slop too!?”

“I just feel an overwhelming sense of sadness this morning over this?” University of Minnesota Press editorial director Jason Weidemann wrote in a Bluesky post. “There are thousands of struggling writers out there who could write a brilliant summer reads feature and should be paid to do so.”

“Pay humans to do things for fuck’s sake,” he added.

More on AI and journalism: Scammers Stole the Website for Emerson College’s Student Radio Station and Started Running It as a Zombie AI Farm


Law Firms Caught and Punished for Passing Around “Bogus” AI Slop in Court

A judge fined two law firms tens of thousands of dollars after lawyers submitted a brief containing sloppy AI errors.

A California judge fined two law firms $31,000 after discovering that they’d included AI slop in a legal brief — the latest instance in a growing tide of avoidable legal drama wrought by lawyers using generative AI to do their work without any due diligence.

As The Verge reported this week, the court filing in question was a brief for a civil lawsuit against the insurance giant State Farm. After its submission, a review of the brief found that it contained "bogus AI-generated research" that led to the inclusion of "numerous false, inaccurate, and misleading legal citations and quotations," as Judge Michael Wilner wrote in a scathing ruling.

According to the ruling, it was only after the judge requested more information about the error-riddled brief that lawyers at the firms involved fessed up to using generative AI. And if he hadn't caught on to it, Wilner cautioned, the AI slop could have made its way into an official judicial order.

"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them — only to find that they didn't exist," Wilner wrote in his ruling. "That's scary."

“It almost led to the scarier outcome (from my perspective),” he added, “of including those bogus materials in a judicial order.”

A lawyer at one of the firms involved with the ten-page brief, Ellis George, used Google's Gemini and a few other law-specific AI tools to draft an initial outline. That outline included many errors, but was passed along to the next law firm, K&L Gates, without any corrections. Incredibly, the second firm also failed to notice and correct the fabrications.

"No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief," Wilner wrote in the ruling.

After the brief was submitted, a judicial review found that a staggering nine of the 27 legal citations in the filing "were incorrect in some way," and that "at least two of the authorities cited do not exist." Wilner also found that quotes "attributed to the cited judicial opinions were phony and did not accurately represent those materials."

As for his decision to levy the hefty fines, Wilner said the egregiousness of the failures, coupled with how compelling the AI's made-up responses were, necessitated "strong deterrence."

"Strong deterrence is needed," wrote Wilner, "to make sure that lawyers don't respond to this easy shortcut."

More on lawyers and AI: Large Law Firm Sends Panicked Email as It Realizes Its Attorneys Have Been Using AI to Prepare Court Documents
