[–] Lost_My_Mind@lemmy.world 43 points 3 days ago (6 children)

Look man......I hate AI too....but you can't just use it as a scapegoat to cover for humans being humans.

Should the AI be telling him to do more and more drugs until he died? Well, no, but also.....maybe don't do dangerous drugs at all.

Like if chatgpt says to shoot yourself in the face, and you do, is it chatgpt's fault you killed yourself? Or is it your own fault for pulling the trigger?

This world is getting dumber and dumber.

[–] ch00f@lemmy.world 76 points 2 days ago* (last edited 2 days ago) (3 children)

Basically the entire US economy, every employer, many schools, and half of the commercials on TV are telling us to use and trust AI.

Kid was already using the bot for advice on homework and relationships (two things that people are fucking encouraged to do depending on who you ask). The bot shouldn't give lethal advice. And if it's even capable of doing that, we all need to take a huuuuuuge step back.

“I want to make sure so I don’t overdose,” Nelson explained in the chat logs viewed by the publication. “There isn’t much information online and I don’t want to accidentally take too much.”

Kid was curious and cautious, and AI gave him incorrect information and the confidence to act on that information.

He was 19. Cut this victim-blaming bullshit. Being a kid was hard enough before technology went full cyberpunk.

[–] kalkulat@lemmy.world 4 points 1 day ago

The bot shouldn’t give lethal advice

The person or company that runs the bot that gave lethal advice should be charged with homicide.

[–] fyrilsol@kbin.melroy.org -1 points 1 day ago

19 is not a 'kid'. Sorry for having to be that guy, but he was already an adult, a young adult at that.

[–] zqps@sh.itjust.works 11 points 2 days ago

The point isn't to absolve people of making bad decisions, but that doesn't mean the companies whose tools provide dangerous advice in a friendly, factual-sounding manner should escape accountability.

Consider that people in all possible situations and mental health conditions have access to these tools.

[–] tal@lemmy.today 39 points 2 days ago* (last edited 2 days ago) (2 children)

This world is getting dumber and dumber.

Ehhh...I dunno.

Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.

searches

https://www.belfasttelegraph.co.uk/news/internet-killed-my-daughter/28397087.html

Internet killed my daughter

https://archive.ph/pJ8Dw

Were Simon and Natasha victims of the web?

https://archive.ph/i9syP

Predators tell children how to kill themselves

And before that, I remember video games.

It happens periodically: something new shows up, and then you'll have people concerned about any potential harm associated with it.

https://en.wikipedia.org/wiki/Moral_panic

A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is "the process of arousing social concern over an issue",[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]

Stanley Cohen, who developed the term, states that moral panic happens when "a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests".[6] While the issues identified may be real, the claims "exaggerate the seriousness, extent, typicality and/or inevitability of harm".[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen's model of moral panic, below).

Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]

Media technologies

Main article: Media panic

The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]

According to media studies professor Kirsten Drotner:[42]

[E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.

Recent manifestations of this kind of development include cyberbullying and sexting.[8]

I'm not sure that we're doing better than people in the past did on this sort of thing, but I'm not sure that we're doing worse, either.

[–] TheBat@lemmy.world 27 points 2 days ago

It wasn't the internet/web that harmed those people. It was people on the internet. And people were telling each other to be cautious when using the internet.

Unlike modern LLMs, which are advertised as intelligent enough to be used in professional settings. And unlike the perpetrators in those other cases, no one is punishing OpenAI, or Google, or whatever the fuck AI company is responsible.

So yeah, this is worse than before.

[–] eli@lemmy.world 7 points 2 days ago (1 children)

Great post and I agree 100%!

something new shows up

Doesn't even have to be a new thing either. Video games are still used as a scapegoat. Same as with music, and TV shows, and movies.

The "internet" is still killing teenagers because of social media bullying.

I wish our lawmakers were of a less senile age so we could write and pass more appropriate laws for this stuff... but there's not much we can do.

[–] mjr@infosec.pub 1 points 2 days ago

I wish our lawmakers were of a less senile age so we could write and pass more appropriate laws for this stuff... but there's not much we can do.

Talk with them. Explain stuff. Vote for better ones. It's still not much, but it's better than doing nothing and letting them keep on blundering unchallenged.

[–] Passerby6497@lemmy.world 14 points 2 days ago (1 children)

Well shit, maybe we shouldn't hold humans responsible for the actions that they convince another human to take. After all, the victim is just a human being a human, right?

[–] markovs_gun@lemmy.world -4 points 2 days ago (4 children)

I mean it's not illegal for someone to tell someone else to take more drugs. If two guys are hanging out and one says "hey, I think I should take more drugs" and the other says "hell yeah brother, do it" they aren't responsible if the first guy ODs.

[–] demonsword@lemmy.world 11 points 2 days ago (1 children)

If two guys are hanging out and one says “hey, I think I should take more drugs” and the other says “hell yeah brother, do it” they aren’t responsible if the first guy ODs

They are indirectly responsible. Dangerously close, depending on the circumstances, to being criminally responsible.

[–] kalkulat@lemmy.world 3 points 1 day ago

A LOT of fraternities have gotten in BIG trouble for hazing practices that led to the death of a 'candidate'.

[–] zarkanian@sh.itjust.works 7 points 2 days ago

You mean that if you convinced somebody to do something stupid...and then they did it and died...you wouldn't feel guilty at all?

[–] theneverfox@pawb.social 10 points 2 days ago* (last edited 2 days ago)

I mean, aren't they? In a moral, ethical, and social sense, don't they share in the blame?

[–] squaresinger@lemmy.world 3 points 2 days ago

Depending on the circumstances, yes, that would totally be illegal.

It's called "aiding and abetting". In most countries it's illegal to convince someone to do something illegal.

If you are someone the victim sees as an authority figure (especially if the victim is a minor), a bunch of other charges can be added too.

In Canada, the UK or the USA, for example, someone who "aided or abetted" someone to commit a crime can be punished exactly as if they had committed the crime themselves.

[–] zarkanian@sh.itjust.works 4 points 2 days ago

A 19-year-old doesn't have a fully-developed brain yet.

[–] Assassassin@lemmy.dbzer0.com 1 points 2 days ago

I don't think this is necessarily an issue of people being stupid, though. People are being encouraged to use AI as a replacement for search engines, to plug any question they have into it, and to trust the answers they're given. Blindly following that may be stupid in many cases, but there are also plenty of cases where a person is developmentally disabled, or young and ignorant, or in a mental state that keeps them from processing information correctly. We should be putting safeguards in place to protect vulnerable people from obvious dangers, even if those safeguards save some idiots by accident.