MIT Disavowed a Viral Paper Claiming That AI Leads to More Scientific Discoveries

The paper on AI and scientific discovery has now become a black eye on MIT's reputation.

No Provenance

The Massachusetts Institute of Technology (MIT) is distancing itself from a headline-making paper about AI’s purported ability to accelerate the speed of science.

The paper in question, “Artificial Intelligence, Scientific Discovery, and Product Innovation,” was published in December as a preprint by an MIT graduate student in economics, Aidan Toner-Rodgers. It quickly generated buzz, and outlets including The Wall Street Journal, Nature, and The Atlantic covered its alleged findings, which purported to show that the embrace of AI at a materials science lab led to a significant increase in productivity and scientific discovery, albeit at the cost of workforce happiness.

Toner-Rodgers’ work even earned praise from top MIT economists David Autor and 2024 Nobel laureate Daron Acemoglu, the latter of whom called the paper “fantastic.”

But it seems that praise was premature, to put it mildly. In a press release on Friday, MIT conceded that, following an internal investigation, it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.” MIT declined to elaborate on its reasons, citing “student privacy laws and MIT policy,” but the episode is a black eye for the university nonetheless.

The university has also requested that the paper be removed from the preprint server arXiv and withdrawn from consideration by the Quarterly Journal of Economics, where it’s currently under review.

The ordeal is “more than just embarrassing,” Autor told the WSJ in a new report, “it’s heartbreaking.”

David vs. MIT

According to the WSJ’s latest story, the course reversal kicked off in January, when an unnamed computer scientist “with experience in materials science” approached Autor and Acemoglu with questions about how the AI tech centered in the study actually worked, and “how a lab he wasn’t aware of had experienced gains in innovation.”

When Autor and Acemoglu were unable to get to the bottom of those questions on their own, they took their concerns to MIT’s higher-ups. Enter, months later: Friday’s press release, in which Autor and Acemoglu, in a joint statement, said they wanted to “set the record straight.”

That a paper evidently so flawed passed under so many well-educated eyes with little apparent pushback is, on the one hand, pretty shocking. Then again, as materials scientist Ben Shindel wrote in a blog post, its conclusion — that AI means more scientific productivity, but less joy — feels somewhat intuitive. And yet, according to the WSJ’s reporting, it wasn’t until closer inspection by someone with domain expertise, who could see through the paper’s optimistic veneer, that those seemingly intuitive threads unwound.

More on AI and the workforce: AI Is Helping Job Seekers Lie, Flood the Market, and Steal Jobs

The post MIT Disavowed a Viral Paper Claiming That AI Leads to More Scientific Discoveries appeared first on Futurism.

Student Livid After Catching Her Professor Using ChatGPT, Asks For Her Money Back


Many students aren’t allowed to use artificial intelligence to do their assignments — and when they catch their teachers doing so, they’re often peeved.

One such student, Northeastern’s Ella Stapleton, told the New York Times that she was shocked earlier this year when she began to suspect that her business professor was generating lecture notes with ChatGPT.

When combing through those notes, the student noticed a ChatGPT citation, obvious misspellings, and images of figures with extraneous limbs and digits — all hallmarks of AI use.

“He’s telling us not to use it,” Stapleton said, “and then he’s using it himself.”

Alarmed, the senior brought up the professor’s AI use with Northeastern’s administration and demanded her tuition back. After a series of meetings that ran all the way up until her graduation earlier this month, the school gave its final verdict: that she would not be getting her $8,000 in tuition back.

Most of the educators the NYT spoke to — who, like Stapleton’s, had been caught by students using AI tools like ChatGPT — didn’t think it was that big of a deal.

To the mind of Paul Shovlin, an English teacher and AI fellow at Ohio University, there is no “one-size-fits-all” approach to using the burgeoning tech in the classroom. Students making their AI-using professors out to be “some kind of monster,” as he put it, is “ridiculous.”

That take, which inflates the student’s concerns to make her sound histrionic, dismisses an emerging consensus: that many people view the use of AI at work as lazy and look down on those who rely on it.

In a new study from Duke, business researchers found that people both anticipate and experience judgment from their colleagues for using AI at work.

The study spanned four experiments with more than 4,400 participants and found ample “evidence of a social evaluation penalty for using AI.”

“Our findings reveal a dilemma for people considering adopting AI tools,” the researchers wrote. “Although AI can enhance productivity, its use carries social costs.”

For Stapleton’s professor, Rick Arrowood, the Northeastern lecture notes scandal really drove that point home.

Arrowood told the NYT that he used various AI tools — including ChatGPT, the Perplexity AI search engine, and an AI presentation generator called Gamma — to give his lectures a “fresh look.” Though he claimed to have reviewed the outputs, he didn’t catch the telltale AI signs that Stapleton saw.

“In hindsight,” he told the newspaper, “I wish I would have looked at it more closely.”

Arrowood said he’s now convinced professors should think harder about using AI and disclose to their students when and how it’s used — a new stance indicating that the debacle was, for him, a teachable moment.

“If my experience can be something people can learn from,” he told the NYT, “then, OK, that’s my happy spot.”

More on AI in school: Teachers Using AI to Grade Their Students’ Work Sends a Clear Message: They Don’t Matter, and Will Soon Be Obsolete
