this post was submitted on 25 Nov 2025
652 points (97.9% liked)

Fuck AI

4728 readers
1122 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 

‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.‘

Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01

top 50 comments
sorted by: hot top controversial new old
[–] protist@mander.xyz 213 points 1 week ago (2 children)

Distillation:

Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective”. Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.

I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”

I decided to not punish them. All I know how to do is teach, so that’s what I did. I assigned a wonderful essay by Cal Poly professor Patrick Lin that he addressed to his class on the benefits and detriments of AI use. I attached instructions that asked them to read it and reflect. These instructions also had a Trojan horse.

Thirty-six of my AI students completed it. One of them used AI, and the other 12 have been slowly dropping the class. Ultimately, 35 out of 47 isn’t too bad. The responses to the assignment were generally good, and some were deeply reflective.

But a handful said something I found quite sad: “I just wanted to write the best essay I could.” Those students in question, who at least tried to provide some of their own thoughts before mixing them with the generated result, had already written the best essay they could. And I guess that’s why I hate AI in the classroom as much as I do.

Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

[–] PKscope@lemmy.world 185 points 1 week ago (13 children)

The only problem I have with the whole "Don't be afraid to fail" thing, is that so much rides on the grades a student receives it makes it very difficult to not treat every assignment as a highly critical task which must be as close to perfect as possible. I totally agree with this professor and I believe he did the right thing by the students. The problem is the system itself.

Those who are going to outsource their work are likely to always outsource their work or take the path of least resistance. You can't moral lesson or embarrass that away, usually. But the rest of the class seems to have learned a valuable lesson, or at least learned how to cheat better.

Regardless, we need to stop having everything boil down to the grades. There's good reasons grades are important, but there are even more that are detrimental. I don't know the answer, I just know the system is broken. Maybe it's just capitalism that's broken.

[–] Meron35@lemmy.world 67 points 1 week ago (1 children)

Society: "don't be afraid to fail!"

Also society: actively punishes failure with intricate systems such as admissions, CV screening, and increasing fewer safety nets

load more comments (1 replies)
[–] A_Union_of_Kobolds@lemmy.world 54 points 1 week ago

The most ironic part of this is, if those kids did understand the basics of Marxism, they'd be able to see this much more clearly.

[–] qwestjest78@lemmy.ca 24 points 1 week ago* (last edited 1 week ago) (1 children)

I agree, the biggest thing that stood out to me here is that they were afraid to fail. If students were focused on creating the work that appeals to them, rather than just the work that will get the highest grade, think of the creativity that could be explored. Instead students are just focused on saying the "right answers" and dont get to think critically about the material. Sad

load more comments (1 replies)
[–] smh@slrpnk.net 19 points 1 week ago

My undergraduate school didn't assign grades below a C. If you did piss-poorly, the class just didn't show up on your transcript. This encouraged me to take classes I might otherwise have avoided, if I was worried about my GPA.

load more comments (9 replies)
[–] riskable@programming.dev 28 points 1 week ago (2 children)

I dunno... What if a bunch of students got together, seized a data center, then used the AI hardware inside to generate their papers on Marxism?

load more comments (2 replies)
[–] danielquinn@lemmy.ca 164 points 1 week ago (3 children)

Here's the link to the actual article. I get that you're trying to do users a favour to bypass tracking at the original URL, but the Internet Archive is a Free service that shouldn't be abused for link cleaning as it costs a lot of money to store and serve all this stuff and it's meant as an "archive", not an ad-blocking proxy.

I'm posting this in part because currently clicking that link errors it with a "too many requests" error. Let's try to be a little kinder to the good guys, shall we?

If users wasnt a cleaner/safer/faster browsing experience, I recommend ditching Chrome for Firefox and getting the standard set of extensions: uBlock Origin, Privacy Badger, etc.

[–] brucethemoose@lemmy.world 66 points 1 week ago* (last edited 1 week ago)

Yeah, especially if it’s not paywalled.

It deprives the original source of traffic too, even if it’s Adblock traffic.

load more comments (2 replies)
[–] rustydrd@sh.itjust.works 107 points 1 week ago* (last edited 1 week ago) (16 children)

In one of my classes, when ChatGPT was still new, I once handed out homework assignments related to programming. Multiple students handed in code that obviously came from ChatGPT (too clean a style, too general for the simple tasks that they were required to do).

Decided to bring one of the most egregious cases to class to discuss, because several people handed in something similar, so at least someone should be able to explain how the code works, right? Nobody could, so we went through it and made sense of it together. The code was also nonfunctional, so we looked at why it failed, too. I then gave them the talk about how their time in university is likely the only time in their lives when they can fully commit themselves to learning, and where each class is a once-in-a-lifetime opportunity to learn something in a way that they will never be able to experience again after they graduate (plus some stuff about fairness) and how they are depriving themselves of these opportunities by using AI in this way.

This seemed to get through, and we then established some ground rules that all students seemed to stick with throughout the rest of the class. I now have an AI policy that explains what kind of AI use I consider acceptable and unacceptable. Doesn't solve the problem completely, but I haven't had any really egregious cases since then. Most students listen once they understand it's really about them and their own "becoming" professional and a more fully developed person.

load more comments (16 replies)
[–] Alaknar@sopuli.xyz 70 points 1 week ago (32 children)

Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective”. Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.

I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”

Christ.......

load more comments (32 replies)
[–] Zephorah@discuss.online 54 points 1 week ago (5 children)

You pay to go to college. Then essentially do the equivalent of lighting that money on fire by not engaging with the product/services you just purchased.

[–] TribblesBestFriend@startrek.website 35 points 1 week ago (4 children)

Yes and no. You pay for a college to recognize your competency and say it to the world. That’s why so many students use AI

[–] justOnePersistentKbinPlease@fedia.io 46 points 1 week ago (1 children)

You pay a bad college to recognize your competency.

A good college teaches you how to reach beyond what they teach you.

load more comments (1 replies)
load more comments (3 replies)
[–] chocrates@piefed.world 28 points 1 week ago (2 children)

In America you go to college at 18. It's hard to have perspective. I'm almost 40 and reflecting on how powerful my degree was, because of how it taught me to think.

Even when I had teachers tell me this to my face at 18 I didn't understand it.

load more comments (2 replies)
load more comments (3 replies)
[–] paequ2@lemmy.today 51 points 1 week ago (1 children)

39% of the submissions were at least partially written by AI

That's better than my class. I taught CS101 last year, code not papers. 90%+ of the homework was done with AI. There was literally just 1 person who would turn in unique code. Everyone else would turn in ChatGPT code.

I adapted by making the homework worth very little of the grade and moving the bulk of the grade to in-class paper and pencil exams.

[–] Jankatarch@lemmy.world 22 points 1 week ago* (last edited 1 week ago) (3 children)

My algorithms professor does that too and it's better than nothing but still causes problems.

For example I still have to do other classes' homework before I can start studying.

Meanwhile cheaters can just skip homework for the other classes and focus on studying for exam.

I still appreciate this technique much better than the more popular alternative of "make exams much harder to make up for the grade inflation." Thank you.

load more comments (3 replies)
[–] IAmNorRealTakeYourMeds@lemmy.world 51 points 1 week ago* (last edited 1 week ago) (15 children)

I think the only solution is the Cambridge exam system.

The only grade they get is at the final written exam. all other assignments and tests are formative, to see if they are on track or to practice skills... This way it does not matter if a student cheats in those assignments, they only hurt themselves. Sorry for the final exam stress though.

load more comments (15 replies)
[–] 474D@lemmy.world 45 points 1 week ago (3 children)

Don't really know how to feel about this because 15 years ago, all I did was reword Wikipedia pages to make a good paper. I went to college because I was led to believe it was a requirement to do well in life. I still learned a lot, but that was mostly through the social interaction of coursework. And honestly, I don't use anything from college in my current engineering job, it was all on-the-job panic learning. If I were to go back to college today, it would be such an enlightening experience of learning, but when you're a kid getting out of high school, you're just trying to get by with some gameplan that you've only been told about. Idk. I don't blame them for using a tool that's so easily accessible because college is about fun too. I guess I wouldn't do it different at that age .

[–] JustAnotherPodunk@lemmy.world 31 points 1 week ago (2 children)

I think that rewording wikipedia is slightly better though. It still requires you to digest some of the information. Kind of like when your teacher let you create notes on a note card for the test. You have to actually read and write the information. You get tricked into learning information.

Ai, just does it for you. There's no need to do much else, and it's reliability is significantly worse that random wiki editors could ever be. I see little real learning with ai.

[–] groet@feddit.org 19 points 1 week ago

With AI you cane solve an assignment without:

  • reading the assignment
  • reading the source of information
  • reading the answer that "you" "wrote"

With the rewording Wikipedia approach you had to do all of those three things

load more comments (1 replies)
[–] zergtoshi@lemmy.world 23 points 1 week ago (5 children)

Apparently you learned to learn, which I suppose is one major goal of college.

load more comments (5 replies)
load more comments (1 replies)
[–] guillem@aussie.zone 45 points 1 week ago (11 children)

*Words, phrases and punctuation rarely used by the average college student – or anyone for that matter (em dash included) – are pervasive. *

Hey, fuck you too >:(

[–] AugustWest@lemmy.world 36 points 1 week ago (3 children)

This quote is particularly amusing because the author used en dashes where he should have used em dashes, while making a point about how no one uses em dashes.

[–] FanciestPants@lemmy.world 21 points 1 week ago

Damn, I finally had to go learn what the em dash is, and by contrast what the en dash is. Your comment will haunt me to the end of my days.

load more comments (2 replies)
[–] Sterile_Technique@lemmy.world 20 points 1 week ago (2 children)

Doesn't MS word automatically change a regular dash to an em dash if there's a space on either side and you keep typing the sentence?

Wonder how many false positives that's caused.

load more comments (2 replies)
load more comments (9 replies)
[–] korazail@lemmy.myserv.one 43 points 1 week ago* (last edited 1 week ago) (2 children)

From later in the article:

Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

I think this is the big issue with 'ai cheating'. Sure, the LLM can create a convincing appearance of understanding some topic, but if you're doing anything of importance, like making pizza, and don't have the critical thinking you learn in school then you might think that glue is actually a good way to keep the cheese from sliding off.

A cheap meme example for sure, but think about how that would translate to a Senator trying to deal with more complex topics.... actually, on second thought, it might not be any worse. 🤷

Edit: Adding that while critical thinking is a huge part. it's more of the "you don't know what you don't know" that tripped these students up, and is the danger when using LLM in any situation where you can't validate it's output yourself and it's just a shortcut like making some boilerplate prose or code.

load more comments (2 replies)
[–] finitebanjo@piefed.world 36 points 1 week ago* (last edited 1 week ago) (1 children)

This method is now increasingly known (there’s even an episode of “The Simpsons” about it) and likely has already run its course as a plausible method for saving oneself from reading and grading AI slop. To be brief, I inserted hidden text into an assignment’s directions that the students couldn’t see but that ChatGPT can.

I received several emails and spoke with a few students who came to my office and were genuinely apologetic. I had a few that tried to fight me on the accusations, too, assuming I flagged them as AI for “well written sentences”. But the Trojan horse did not lie.

lmfao, I hope he failed those kids anyways.

[–] can@sh.itjust.works 58 points 1 week ago* (last edited 1 week ago) (15 children)

[...] Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective”. Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.

But did he consider some may just be Lemmy users?

load more comments (15 replies)
[–] SoftestSapphic@lemmy.world 32 points 1 week ago (11 children)

Students would want to learn instead of doing less work if there were incentives to learn instead of just get out with a degree.

load more comments (11 replies)
[–] ThePantser@sh.itjust.works 31 points 1 week ago (9 children)

It should be treated the same as if another student wrote the paper. If it was used as a research tool where you didn't repeat it word for word then it's cool, it can be treated like a peer that helped you research. But using it to fully write then it's an instant fail because you didn't do anything.

load more comments (9 replies)
[–] hark@lemmy.world 29 points 1 week ago

But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognised that oppression is best maintained by keeping the masses illiterate, and those oppressed recognised that literacy is liberation.

It's scary how much damage is being done to education, not just from AI but also the persistent attacks on public education in the US over decades, hampering the system with things like No Child Left Behind and diverting funds to private schools with vouchers in the name of "school choice". On top of that there are suggestions that teachers aren't even needed and that students could be taught with AI. It's grim.

[–] Draegur@lemmy.zip 28 points 1 week ago (2 children)

I heard of something brilliant though: The teacher TELLS the students to have the AI generate an essay on a subject AND THEN the students have to go through the paper and point out all the shit it got WRONG :D

load more comments (2 replies)
[–] mlg@lemmy.world 25 points 1 week ago (1 children)

I'm guessing 33 people were too lazy to copy data into a box and relied on ChatGPT OCR lol.

This was a great article about the use of AI, but I think this also exposed bad/zero effort cheating.

There's a reason why even the ye olde Wikipedia copy-pasters would rearrange sentences to make sure they can game the plagiarism checker.

load more comments (1 replies)
[–] Mouselemming@sh.itjust.works 24 points 1 week ago (1 children)

Okay fine and all, but are we not going to talk about the cat?

[–] interdimensional_sharts@lemmy.world 21 points 1 week ago (2 children)

We can talk about it if you’d like

load more comments (2 replies)
[–] Randelung@lemmy.world 23 points 1 week ago (10 children)

Great article.

How do we teach that when a student doesn’t want to learn?

Good question. But maybe we've gone overboard with the density of information and we just need to relax a little and give the kids their childhood back.

[–] Duamerthrax@lemmy.world 37 points 1 week ago (2 children)

It's not the density of information. It's the end goal of the process. Students are only given motivation to learn for a career and people have figured out that most jobs are bullshit. If they can bullshit their way though college, they can bullshit their into a career. When layoffs are done by lottery, it's not even like the sincere students can be safe. It's bullshit stacked on top of other bullshit.

load more comments (2 replies)
load more comments (9 replies)
[–] taiyang@lemmy.world 21 points 1 week ago

Had trouble with this myself teaching. Students this semester have been good about it (probably because I've been very explicit in my contempt and also it kept blundering) but last semester was tricky.

One thing I learned was I need to also insist no Grammarly. That used to be allowed but it makes original writing sound very AI. I also riddled my assignments with short oral segments and personal stories.

It cuts into class time but I've managed to make those sessions educational since my "presentations" are always conversations w/ students. No ppts. Actually kinda fun and very much weeds out cheaters lol

load more comments
view more: next ›