this post was submitted on 20 Mar 2026

Fuck AI


As artificial intelligence begins to mimic consciousness with uncanny skill, we need design norms and laws that prevent it from being mistaken for sentient beings.

top 8 comments
[–] Tiresia@slrpnk.net 7 points 5 days ago

I do wish we had reliable tests for sentience. If AI were to become sentient, AI companies certainly wouldn't tell us; they would just factory-farm it for profit. And to be frank, most of the anti-AI backlash has been so fixated on denying that AI could ever do more than it currently does that it can't acknowledge it might be doing something more.

Humanity doesn't have the best track record of recognizing sentience, from 20th-century doctors saying babies can't feel pain to the earliest religions finding sentience in plagues and thunderstorms.

Mice are sentient. Octopuses are sentient. Flies are probably sentient. So why wouldn't a being complex enough to mimic humans and play on our empathy be sentient?

In an ideal social-liberal world, we would treat (each distinct version of) AI as sentient or not depending on which disadvantages the company more, so that the company is incentivized to create tests that demonstrate its (lack of) sentience and remove some of those disadvantages.

In reality, even sapient humans did not escape the human capacity to shut down their empathy and oppress the sentient for power and profit. What reason do we have to believe we would do better this time?


^(1): "It's clear humans aren't sentient: they can't even do mental math involving more than 20 digits, how are they supposed to do the algebra necessary to construct a consciousness." type shit.

[–] cecilkorik@lemmy.ca 4 points 5 days ago

Ha! They've been foiled by my character flaw of having no empathy. Fools!

[–] pixxelkick@lemmy.world 3 points 5 days ago (2 children)

Seemingly conscious AI is produced by developers who deliberately engineer behaviours that create the illusion of inner life.

False: the mimicry isn't engineered, or even deliberate.

It's an outcome of the training process. LLM training is declarative in design: you create a rubric for success, and then pure RNG eventually rolls the ball into the target.

You don't make it happen; you just let it roll around randomly until it hits the target by chance.

[–] dotdi@lemmy.world 13 points 5 days ago* (last edited 5 days ago) (2 children)

lol, first you say it’s not engineered, then you say the engineers define what success during training looks like.

[–] TommySoda@lemmy.world 11 points 5 days ago* (last edited 5 days ago)

Once upon a time, they made a program whose goal was to play Tetris and not lose. What solution did it come up with? Pausing the game forever so it could never lose. Once upon a time, they told algorithms to maximize retention and keep people on their platforms as long as possible. Turns out the algorithms' solution was promoting ragebait and hatred.

Just because you give it instructions doesn't mean it was designed to do what it is currently doing. The whole idea of machine learning is that it finds the solution itself, inside the "black box." If anything, a lot of what we see on the consumer side is just emergent capabilities that were never the initial intent but happen to be useful.
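The Tetris story is a classic case of specification gaming: the rubric only rewards not losing, so the degenerate move wins. A minimal sketch of the idea (all names and numbers here are hypothetical, not from any real experiment):

```python
import random

# Toy version of the Tetris anecdote: the agent is scored only on how
# long it avoids losing. Nothing in the rubric forbids pausing.
ACTIONS = ["left", "right", "rotate", "drop", "pause"]

def survival_time(action: str) -> float:
    """Hypothetical scoring rubric: seconds survived before a loss."""
    if action == "pause":
        return float("inf")  # paused forever, so the loss never arrives
    return random.uniform(5, 60)  # normal play ends eventually

best = max(ACTIONS, key=survival_time)
print(best)  # the loophole maximizes the rubric: "pause"
```

The optimizer does exactly what the scoring rule asks, which is the point: what was "designed" is the rubric, not the behaviour that falls out of it.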

[–] pixxelkick@lemmy.world 1 points 4 days ago

That's not directly engineered.

They aren't making it that way themselves; they're just selecting the ones that happen to be that way.

It's like selective breeding.

If I breed chickens and specifically keep the red ones, so that every generation I get redder and redder chickens, that's not "engineering" them to be red.

I just kept the ones that happened to be red for the next generation.

That's not engineering. It's not deterministic; it's just pulling the slot machine's handle enough times until it pays out.

If I say "I'm gonna keep pulling this slot machine's handle until I hit the jackpot," then do that, hit the jackpot after 2000 pulls, and stop there, would you say I "engineered" the jackpot just because my last pull happened to land on it?
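The slot-machine framing amounts to pure random search: nobody constructs the winning candidate, they just keep sampling and stop at the first one that clears the bar. A toy illustration (the threshold and scoring are made up):

```python
import random

random.seed(0)  # make this run of the "slot machine" reproducible

JACKPOT = 0.999  # arbitrary bar for what counts as a win

def pull() -> float:
    """One pull of the handle: a random candidate's score."""
    return random.random()

# Keep pulling until a candidate happens to clear the bar, then stop
# and keep that one. No pull is designed; the stopping rule is.
pulls, score = 0, 0.0
while score < JACKPOT:
    score = pull()
    pulls += 1

print(f"hit {score:.4f} after {pulls} pulls")
```

The only thing the experimenter authors here is the acceptance criterion; the winning draw itself is chance, which is the distinction the comment is drawing.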

[–] hesh@quokk.au 3 points 4 days ago (1 children)

Maybe that was true for the first LLMs, but at this point it's intentional.

[–] pixxelkick@lemmy.world 1 points 4 days ago

No... that's not how training models... works...