this post was submitted on 04 Mar 2026
511 points (97.9% liked)

Technology

[–] Reygle@lemmy.world 23 points 9 hours ago* (last edited 9 hours ago) (8 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


WHAT

Genuine question, REALLY: What in the fuck is an otherwise "functioning adult" doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?

[–] starman2112@sh.itjust.works 23 points 9 hours ago

If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I'm going to sue that someone who took advantage of my son's fuckwittedness

[–] XLE@piefed.social 18 points 9 hours ago (3 children)

I feel like his father should also slap himself unconscious for raising a fuckwit?

So, a chatbot grooms somebody into killing himself, and your response is... Blame his father?

[–] SalamenceFury@piefed.social 14 points 9 hours ago* (last edited 9 hours ago) (3 children)

I don't think this person was a "fuckwit". AI is designed to keep you engaging with it and will affirm any belief you have. Anything that's a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions until the person has a psychotic episode, and this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

[–] tamal3@lemmy.world 5 points 7 hours ago* (last edited 7 hours ago)

ChatGPT was super affirming about a job I recently applied to... I did not get the job. That was my first experience with it affirming something that was personally important. And so I can absolutely see how this would affect someone in other ways.

[–] man_wtfhappenedtoyou@lemmy.world 11 points 8 hours ago (3 children)

How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don't understand how this keeps happening.


Maybe if we're lucky people will realize this is what capitalism and consumerism have been doing all along. People have been driven to crazy shit because of all the evil shit we do with marketing and fucking with consumers' minds. But nah, we will blame a chatbot that's just telling you what it thinks you want to see, rather than seeing it's just the next stage of fuckery.

[–] teft@piefed.social 94 points 13 hours ago (5 children)

“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

Just remember that these language models are also advising governments and military units.

Unrelated: I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.

[–] XLE@piefed.social 32 points 13 hours ago

AI tools are both sycophantic and helpful for laundering bad opinions. Who needs experts when Anthropic's Claude will tell you what you want to hear?

Anthropic’s AI tool Claude central to U.S. campaign in Iran - used alongside Palantir surveillance tech.

[–] Cyv_@lemmy.blahaj.zone 145 points 14 hours ago* (last edited 14 hours ago) (7 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

Well, that's pretty fucked up... Sometimes I see these and I think, "well even a human might fail and say something unhelpful to somebody in crisis" but this is just complete and total feeding into delusions.

[–] wonderingwanderer@sopuli.xyz 15 points 10 hours ago* (last edited 7 hours ago) (7 children)

That's fucking crazy. Did he ask it to be the GM in a roleplaying choose-your-own-adventure game that got out of hand, and while they both gradually forgot it was a game, the lines between fantasy and reality blurred by the day? Or did it just come up with this stuff out of nowhere?

[–] XLE@piefed.social 96 points 13 hours ago

It's hard reading this while remembering that your electricity bills are increasing so that Google's data centers can provide these messages to people.

[–] NewNewAugustEast@lemmy.zip 10 points 9 hours ago* (last edited 9 hours ago) (12 children)

I would like to see the full transcript.

How do we know this didn't start off with prompts about creating a book, or asking about exciting things in life, or I don't know what.

Context would help a lot. Maybe it will come out in discovery.

That said, Gemini is garbage for anything anyway. Even as an AI, it's bad at that.

[–] man_wtfhappenedtoyou@lemmy.world 6 points 8 hours ago (1 children)

I was thinking the same thing, like what is the flow of the chat to get it to this point?

[–] NewNewAugustEast@lemmy.zip 3 points 7 hours ago (1 children)

I am also curious how the father saw the Gemini chats. Was it still on the screen days later? I am trying to imagine how that would work; my computer would lock and that would be that. Do kids give their parents their passwords and screen unlock codes?

[–] tamal3@lemmy.world 2 points 7 hours ago* (last edited 7 hours ago) (1 children)

I don't lock my personal computer. It's my husband & me at home, and he's fine to use my device (even though he normally wouldn't).

ChatGPT for sure saves conversations.

[–] NewNewAugustEast@lemmy.zip 2 points 6 hours ago

Yeah it definitely does save conversations. Perhaps he did leave it unlocked. I do find that strange though, particularly if one was getting increasingly paranoid.

[–] Stonewyvvern@lemmy.world 21 points 11 hours ago (6 children)

Reality is really difficult for some people...

[–] Akuchimoya@startrek.website 12 points 8 hours ago

Truly, I don't understand why, but there are fully grown adults who believe that anything an LLM says is true. Maybe they think computers are unbiased (which is only as true as the programmers and data are unbiased); maybe it's the confidence with which LLMs deliver information; maybe they believe the program actually searches for and verifies information; maybe it's all of the above and more.

I know a guy who routinely says, "I asked ChatGPT...", and even after I've explained how LLMs are complex word predictors and are not programmed for factual truth, he still goes to ChatGPT for everything. It's a total refusal to believe otherwise, and I can't fathom why.

[–] CatDogL0ver@lemmy.world 2 points 6 hours ago

I would love to see the real transcript from Google AI.

[–] SalamenceFury@piefed.social 50 points 13 hours ago (2 children)

As a neurodivergent person, I've noticed that the people who usually fall into AI psychosis are normies who never had any history of mental illness. They don't know the safeguards that people who ARE vulnerable to a mental breakdown put on themselves to keep one from happening, or how to spot the red flags that usually spiral into a psychotic episode, and that's why it's so insanely easy for regular people to fall into the traps of chatbots. Most neurodivergent people I know/follow on other socials instantly saw the ADHD sycophant trap these things were and warned everyone. Normies never had such luxury, or told us we were overreacting. Yeah, we sure were...

[–] Truscape@lemmy.blahaj.zone 13 points 10 hours ago

Reading about the ELIZA effect as well is a good way to understand how those who embrace "social norms" can be enamored by machine-generated statements without questioning them at all...

[–] Grimy@lemmy.world 55 points 14 hours ago* (last edited 14 hours ago) (1 children)

“On September 29, 2025, it sent him ... the chatbot pretended to check it against a live database.

I usually don't give much credence to these stories, but this is actually nuts. If this happened without Google intending it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.

Edit: removed the quote since another user posted it at the same time and it's a bit of a wall of text to have twice.
