
“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.

“As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”

As AIs become more advanced in their ability to act autonomously and perform “reasoning” tasks, a debate has grown over whether humans should, at some point, grant them rights. A poll by the Sentience Institute, a US thinktank that supports the moral rights of all sentient beings, found that nearly four in 10 US adults backed legal rights for a sentient AI system.

Butterbee@beehaw.org 21 points 4 weeks ago

It's wildly difficult to control the output of the black box, and that's hardly LLMs showing signs of self-preservation. These cries come from people in the industry trying to pretend the models are something they are not, and can never be. I do agree with the sentiment that we should be prepared to pull the plug on them, though, for other reasons.

BCsven@lemmy.ca 6 points 4 weeks ago

I think the scientists might be talking about neural models that show signs of emergent behaviour, while the article is talking more at the level of LLMs. The media confuses the two systems.

Butterbee@beehaw.org 8 points 4 weeks ago

Just double-checked, and no, they are very much talking about LLMs. Specifically, they were testing gpt-4o, gemini-1.5, llama-3.1, sonnet-3.5, opus-3, and o1: https://arxiv.org/pdf/2412.04984 The concerns raised in that paper are legit, but not indicative of consciousness or intent.

BCsven@lemmy.ca 4 points 4 weeks ago

Thanks for the research; I appreciated it.