Student Livid After Catching Her Professor Using ChatGPT, Asks For Her Money Back

Many students aren’t allowed to use artificial intelligence to do their assignments — and when they catch their teachers doing so, they’re often peeved.

In an interview with the New York Times, one such student — Northeastern’s Ella Stapleton — was shocked earlier this year when she began to suspect that her business professor had generated lecture notes with ChatGPT.

When combing through those notes, the senior noticed a citation to ChatGPT, obvious misspellings, and images of figures with extraneous limbs and digits — all hallmarks of AI use.

“He’s telling us not to use it,” Stapleton said, “and then he’s using it himself.”

Alarmed, Stapleton brought up the professor’s AI use with Northeastern’s administration and demanded her tuition back. After a series of meetings that ran all the way up until her graduation earlier this month, the school gave its final verdict: she would not be getting her $8,000 in tuition back.

Most of the educators the NYT spoke to — who, like Stapleton’s professor, had been caught using AI tools like ChatGPT by their students — didn’t think it was that big of a deal.

To the mind of Paul Shovlin, an English teacher and AI fellow at Ohio University, there is no “one-size-fits-all” approach to using the burgeoning tech in the classroom. Students making their AI-using professors out to be “some kind of monster,” as he put it, is “ridiculous.”

That take, which inflates the student’s concerns to make her sound histrionic, dismisses a growing consensus: that many people view the use of AI at work as lazy and look down on those who use it.

In a new study from Duke, business researchers found that people both anticipate and experience judgment from their colleagues for using AI at work.

The study, which ran more than 4,400 people through a series of four experiments, found ample “evidence of a social evaluation penalty for using AI.”

“Our findings reveal a dilemma for people considering adopting AI tools,” the researchers wrote. “Although AI can enhance productivity, its use carries social costs.”

For Stapleton’s professor, Rick Arrowood, the Northeastern lecture notes scandal really drove that point home.

Arrowood told the NYT that he used various AI tools — including ChatGPT, the Perplexity AI search engine, and an AI presentation generator called Gamma — to give his lectures a “fresh look.” Though he claimed to have reviewed the outputs, he didn’t catch the telltale AI signs that Stapleton saw.

“In hindsight,” he told the newspaper, “I wish I would have looked at it more closely.”

Arrowood said he’s now convinced professors should think harder about using AI and disclose to their students when and how it’s used — a new stance indicating that the debacle was, for him, a teachable moment.

“If my experience can be something people can learn from,” he told the NYT, “then, OK, that’s my happy spot.”

More on AI in school: Teachers Using AI to Grade Their Students’ Work Sends a Clear Message: They Don’t Matter, and Will Soon Be Obsolete

Teachers Using AI to Grade Their Students’ Work Sends a Clear Message: They Don’t Matter, and Will Soon Be Obsolete

A new study is revealing just how horrible AI is at grading student homework, and the results are worse than you think.

Talk to a teacher lately, and you’ll probably get an earful about AI’s effects on student attention spans, reading comprehension, and cheating.

As AI becomes ubiquitous in everyday life — thanks to tech companies forcing it down our throats — it’s probably no shocker that students are using software like ChatGPT at an unprecedented scale. One study by the Digital Education Council found that 86 percent of university students use some type of AI in their work.

That’s causing some fed-up teachers to fight fire with fire, using AI chatbots to score their students’ work. As one teacher mused on Reddit: “You are welcome to use AI. Just let me know. If you do, the AI will also grade you. You don’t write it, I don’t read it.”

Others are embracing AI with a smile, using it to “tailor math problems to each student,” in one example listed by Vice. Some go so far as to require students to use AI. One professor in Ithaca, NY, shares both ChatGPT’s comments on student essays and her own, and asks her students to run their essays through AI themselves.

AI might save educators some time and precious brainpower — which arguably make up the bulk of the gig — but the tech isn’t remotely cut out for the job, according to researchers at the University of Georgia (UGA). We should probably all know it’s a bad idea to grade papers with AI, but a new study by UGA’s School of Computing gathered data on just how bad an idea it is.

The research tasked the large language model (LLM) Mixtral with grading written responses to middle school homework. Rather than feeding the LLM a human-created rubric, as is usually done in these studies, the UGA team had Mixtral create its own grading system. The results were abysmal.

Compared to a human grader, the LLM accurately graded student work just 33.5 percent of the time. Even when supplied with a human rubric, the model had an accuracy rate of just over 50 percent.
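
For context, “accuracy” here means simple agreement: how often the model’s score matched the human grader’s. As a rough illustration (not the study’s actual code), a minimal Python sketch of that comparison, using invented scores, might look like this:

```python
# Hypothetical sketch of the evaluation metric: the fraction of responses
# where the LLM's score matches the human grader's. The scores below are
# invented for illustration and are not data from the UGA study.

def agreement_rate(human_scores, llm_scores):
    """Return the fraction of items where both graders gave the same score."""
    matches = sum(h == m for h, m in zip(human_scores, llm_scores))
    return matches / len(human_scores)

human = [3, 2, 4, 1, 3, 2]  # scores assigned by a human grader
model = [3, 1, 4, 2, 2, 2]  # scores assigned by the LLM

print(f"Agreement: {agreement_rate(human, model):.1%}")  # Agreement: 50.0%
```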

Though the LLM “graded” quickly, its scores were frequently based on flawed logic inherent to LLMs.

“While LLMs can adapt quickly to scoring tasks, they often resort to shortcuts, bypassing deeper logical reasoning expected in human grading,” wrote the researchers.

“Students could mention a temperature increase, and the large language model interprets that all students understand the particles are moving faster when temperatures rise,” said Xiaoming Zhai, one of the UGA researchers. “But based upon the student writing, as a human, we’re not able to infer whether the students know whether the particles will move faster or not.”

Though the UGA researchers wrote that “incorporating high-quality analytical rubrics designed to reflect human grading logic can mitigate [the] gap and enhance LLMs’ scoring accuracy,” a boost from 33.5 to 50 percent accuracy is laughable. Remember, this is the technology that’s supposed to bring about a “new epoch” — a technology we’ve poured more seed money into than any other in human history.

If there were a 50 percent chance your car would fail catastrophically on the highway, none of us would be driving. So why is it okay for teachers to take the same gamble with students?

It’s just further confirmation that AI is no substitute for a living, breathing teacher, and that isn’t likely to change anytime soon. In fact, there’s mounting evidence that AI’s comprehension abilities are getting worse as time goes on and original data becomes scarce. Recent reporting by the New York Times found that the latest generation of AI models hallucinate as much as 79 percent of the time — way up from past numbers.

When teachers choose to embrace AI, this is the technology they’re shoving off onto their kids: notoriously inaccurate, overly eager to please, and prone to spewing outright lies. That’s before we even get into the cognitive decline that comes with regular AI use. If this is the answer to the AI cheating crisis, then maybe it’d make more sense to cut out the middle man: close the schools and let the kids go one-on-one with their artificial buddies.

More on AI: People With This Level of Education Use AI the Most at Work

College Students Are Sprinkling Typos Into Their AI Papers on Purpose

To bypass artificial intelligence writing detection, college students are reportedly adding typos into their chatbot-generated papers.

In a wide-ranging exploration of the ways AI has rapidly changed academia, students told New York Magazine that AI cheating has become so normalized that they’re figuring out creative ways to get away with it.

While it’s common for students — and anyone else who uses ChatGPT and other chatbots — to edit the AI’s output, some are manually adding typos to make their essays sound more human.

Other, more ingenious users instruct chatbots to essentially dumb down their writing. In a TikTok viewed by NYMag, for instance, a student said she likes to prompt chatbots to “write [an essay] as a college freshman who is a li’l dumb” to bypass AI detection.

Stanford sophomore Eric told NYMag that his classmates have gotten “really good at manipulating the systems.”

“You put a prompt in ChatGPT, then put the output into another AI system, then put it into another AI system,” he said. “At that point, if you put it into an AI-detection system, it decreases the percentage of AI used every time.”

The irony, of course, is that students who go to such lengths to make their AI-generated papers sound human could be using that creativity to actually write the dang things.

Still, instructors are concerned by the energy students are expending on cheating with chatbots.

“They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays,” University of Iowa teaching assistant Sam Williams told the magazine. “And I get it, because I hated writing essays when I was in school.”

While assisting with a general education class on music and social change last fall, Williams said he was shocked by the change in tone and quality between students’ first assignment — a personal essay about their own tastes — and their second, which dug into the history of New Orleans jazz.

Not only did those essays sound different, but many included egregious factual errors, like the inclusion of Elvis Presley, who was neither a part of the New Orleans scene nor a jazz musician.

“I literally told my class, ‘Hey, don’t use AI,'” the teaching assistant recalled. “‘But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out.'”

Students have seemingly taken that advice to heart — and Williams, like his colleagues around the country, is concerned about students taking their AI use ever further.

“Whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them,” the Iowa instructor said.

It’s a scary trend indeed — and one that is, seemingly, continuing unabated.

More on AI cheating: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup
