this post was submitted on 20 Jan 2026
656 points (98.8% liked)

Fuck AI

5268 readers
2155 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 50 comments
[–] Kaz@lemmy.org 18 points 2 hours ago

These fuckin AI "enthusiasts" are just making the rest of the world hate AI more.

Losers who can't achieve anything without AI are just going to keep doing this shit.

[–] cheesybuddha@lemmy.world 5 points 1 hour ago

So they are using AI to make it so AI can't detect that they are using AI?

What kind of technological ouroboros of nonsense is this?

I am so goddamned tired of AI being shoved into every collective orifice of our society.

[–] markstos@lemmy.world 19 points 3 hours ago

Congrats on inventing what high school students figured out a year ago to skirt AI homework detectors.

[–] minorkeys@lemmy.world 10 points 2 hours ago* (last edited 2 hours ago) (1 children)

It's an arms race, AI identification vs AI adaptation. I wonder which side the companies that own these LLMs want to win...

[–] elfin8er@lemmy.world 1 point 1 hour ago

They don't want anyone to win. The arms race makes money.

[–] avidamoeba@lemmy.ca 12 points 6 hours ago (1 children)

From the repo:

Have opinions. Don't just report facts - react to them. "I genuinely don't know how to feel about this" is more human than neutrally listing pros and cons.

[–] JcbAzPx@lemmy.world 5 points 1 hour ago

That will at least be easy to spot in a Wikipedia entry.

[–] felixthecat@fedia.io 7 points 6 hours ago (1 children)

Stuff like that doesn't always work though, at least on free versions in my experience. I use AI to write flowery emails to people to sound nice when I normally wouldn't bother, and I used it to negotiate buying my car. I would continually tell it not to use em dashes while writing emails. And inevitably, after one answer it would go back to using them.

Maybe paid versions are different but on free ones you have to continually correct it.

[–] sobchak@programming.dev 1 point 1 hour ago

Even the paid models I've tried do that. The style LLMs use seems deeply ingrained. Either companies do it on purpose, or it's just the result of all the companies using similar training data and techniques.

[–] Jayjader@jlai.lu 38 points 10 hours ago (3 children)

I really despise how Claude's creators and users are turning the definition of "skill" from "the ability to use [learned] knowledge to enhance execution" into "a blurb of text that [usefully] constrains a next-token-predictor".

I guess, if you squint, it's akin to how biologists will talk about species "evolving to fit a niche" amongst themselves or how physicists will talk about nature "abhorring a vacuum". At least they aren't talking about a fucking product that benefits from hype to get sold.

[–] prole@lemmy.blahaj.zone 24 points 8 hours ago (2 children)

I can't help but get secondhand embarrassment whenever I see someone unironically call themselves a "prompt engineer". 🤮

[–] m4xie@lemmy.ca 1 points 3 hours ago

I'm a terrible procrastinator engineer.

[–] captainlezbian@lemmy.world 5 points 7 hours ago

Hey, they had to learn thermodynamics and spend three semesters in calculus to write those prompts.

[–] OctopusNemeses@lemmy.world 13 points 8 hours ago

Isn't this a thing that authoritarians do? They co-opt language. It's the same thing conservatives do. The Venn diagram of tech bros and the far right is too close to being a circle.

You can put pretty much any word from the dictionary into a search engine, and the first results are some tech company that took the word either as their company name or redefined it into some buzzword.

[–] chuckleslord@lemmy.world 3 points 8 hours ago

Skills were functions/frameworks built for Alexa, so they just appropriated the term from there.

[–] Phoenix3875@lemmy.world 78 points 15 hours ago (1 children)

You do understand this is more akin to white hat testing, right?

Those who want to exploit this will do it anyway, except they won't publish the result. By making the exploit public, the risk will be known if not mitigated.

[–] unepelle@mander.xyz 13 points 10 hours ago* (last edited 10 hours ago) (2 children)

I'm admittedly not knowledgeable about white hat hacking, but are you supposed to publicize the vulnerability, release a shortcut to exploit it telling people to "enjoy", or even call the vulnerability handy?

[–] teft@piefed.social 9 points 8 hours ago (1 children)

Responsible disclosure is what a white hat does. You report the bug to whomever is the party responsible for patching and give them time to fix it.

[–] PlexSheep@infosec.pub 6 points 8 hours ago

That sort of depends on the situation. Responsible disclosure is for when there's a relevant security hole that poses an actual risk to businesses and people, while this here is just "haha look, LLMs can now better pretend to write good text if you tell them to". That doesn't really call for responsible disclosure. It's not even specific to one singular product.

[–] udon@lemmy.world 45 points 14 hours ago (3 children)

If these "signs of AI writing" are merely linguistic, good for them. This is as accurate as a lie detector (i.e., not accurate) and nobody should use this for any real world decision-making.

The real signs of AI writing are not as easy to fix as just instructing an LLM to "read" an article to avoid them.

As a teacher, all of my grading is now based on in-person performances, no tech allowed. Good luck faking that with an LLM. I don't mind if students use an LLM to better prepare for class and exams, but my impression so far is that any other medium (e.g., books, YouTube explainer videos) leads to better results.
