this post was submitted on 01 Oct 2025
93 points (98.9% liked)

Fuck AI

4198 readers
946 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
all 17 comments
[–] laranis@lemmy.zip 2 points 20 minutes ago

People are hating on this overly agreeable stance from LLMs, but in reality the executives pouring money into this stuff want "yes men" from their subordinates, including their AI subordinates.

It is absolutely a feature.

[–] friend_of_satan@lemmy.world 1 points 8 minutes ago

"As fun and enjoyable as possible." Yet another lie.

[–] 30p87@feddit.org 11 points 13 hours ago (1 children)
[–] Kirk@startrek.website 2 points 37 minutes ago

Thanks, I also like offtiktok.com (replace tiktok in the URL with offtiktok)

[–] tuckerm@feddit.online 41 points 18 hours ago* (last edited 18 hours ago) (2 children)

I'm posting this because it's a great example of how LLMs do not actually have their own thoughts, or any sort of awareness of what is actually happening in a conversation.

It also shows how completely useless it is to have a "conversation" with someone who is just in agreeability mode the whole time (a.k.a. "maximizing engagement mode") -- offering up none of their own thoughts, but just continually prompting you to keep talking. And honestly, some people act that way too. And other kinds of people crave a conversation partner who acts that way, because it makes them the center of attention. It makes you feel interesting when the other person is endlessly saying, "You're right! Go on."

[–] Rhaedas@fedia.io 11 points 14 hours ago

LLMs work best when used as a mirror to help you reflect on your own thoughts, as that's the only thinking going into the process. This isn't how most people are using it.

[–] fullsquare@awful.systems 22 points 18 hours ago

this is an effect known for almost 60 years now https://en.wikipedia.org/wiki/ELIZA_effect
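For reference, ELIZA created its illusion of understanding with nothing but pattern-matching and pronoun reflection. A minimal sketch of that technique (hypothetical toy rules, not Weizenbaum's original DOCTOR script):

```python
import re

# ELIZA-style "reflection": no understanding, just swapping pronouns
# and echoing the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user: str) -> str:
    """Return a canned question built from the user's own statement."""
    m = re.match(r"i feel (.*)", user.lower())
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    return "Please go on."

print(respond("I feel nobody understands my work"))
# -> Why do you feel nobody understands your work?
```

The bot contributes nothing of its own, yet the reflected question feels attentive, which is exactly the effect the linked article describes.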

[–] stinerman@midwest.social 11 points 16 hours ago

The only winning move is not to play.

[–] unmagical@lemmy.ml 17 points 18 hours ago (1 children)

Pretty sure anyone who has ever used ChatGPT could have foreseen that.

[–] tuckerm@feddit.online 24 points 18 hours ago (1 children)

Unfortunately, I'm not sure about that. Plenty of people who use ChatGPT end up thinking that it is sentient and has its own thoughts, but that's because they don't realize how much they are having to drive the conversation.

[–] asudox@lemmy.asudox.dev 2 points 3 hours ago

I'd say the majority thinks like that.

[–] Goun@lemmy.ml 17 points 18 hours ago (1 children)

Who needed those trees anyways

[–] wetbeardhairs@lemmy.dbzer0.com 16 points 18 hours ago (1 children)

That was painful to listen to. It would've been more interesting if he just gave them a fucking prompt and let them spiral.

[–] Kirk@startrek.website 2 points 15 hours ago (1 children)

Maybe it would be more interesting for a reply or two, but it would have quickly fallen right into the same spiral.

[–] wetbeardhairs@lemmy.dbzer0.com 1 points 14 hours ago

No, I think it would've been very capable of bringing up semi-related facts, and that would prompt a different response.

[–] Ilixtze@lemmy.ml 13 points 18 hours ago

So ChatGPT is the opposite of that depressed guy I used to date