Lemmy Shitpost
Welcome to Lemmy Shitpost. Here you can shitpost to your hearts content.
Anything and everything goes. Memes, Jokes, Vents and Banter. Though we still have to comply with lemmy.world instance rules. So behave!
Rules:
1. Be Respectful
Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.
Refrain from being argumentative when responding or commenting to posts/replies. Personal attacks are not welcome here.
...
2. No Illegal Content
Content that violates the law. Any post/comment found to be in breach of common law will be removed and given to the authorities if required.
That means:
-No promoting violence/threats against any individuals
-No CSA content or Revenge Porn
-No sharing private/personal information (Doxxing)
...
3. No Spam
Posting the same post, no matter the intent is against the rules.
-If you have posted content, please refrain from re-posting said content within this community.
-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.
-No posting Scams/Advertisements/Phishing Links/IP Grabbers
-No Bots, Bots will be banned from the community.
...
4. No Porn/Explicit
Content
-Do not post explicit content. Lemmy.World is not the instance for NSFW content.
-Do not post Gore or Shock Content.
...
5. No Enciting Harassment,
Brigading, Doxxing or Witch Hunts
-Do not Brigade other Communities
-No calls to action against other communities/users within Lemmy or outside of Lemmy.
-No Witch Hunts against users/communities.
-No content that harasses members within or outside of the community.
...
6. NSFW should be behind NSFW tags.
-Content that is NSFW should be behind NSFW tags.
-Content that might be distressing should be kept behind NSFW tags.
...
If you see content that is a breach of the rules, please flag and report the comment and a moderator will take action where they can.
Also check out:
Partnered Communities:
1.Memes
10.LinuxMemes (Linux themed memes)
Reach out to
All communities included on the sidebar are to be made in compliance with the instance rules. Striker
view the rest of the comments
Holy fucking outrage machine.
Are you guys seriously pissed off that an LLM said "I'm not a doctor, I will not suggest dosage amounts of a potentially deadly drug. However, if you want me, I can give you the link for the DDWIWDD music video"
I think it's a bit more than that. A known failure mode of LLMs is that in a long enough conversation about a topic, eventually the guardrails against that topic start to lose out against the overarching directive to be a sycophant. This kinda smells like that.
We don't have many informations here but it's possible that the LLM had already been worn down to the point of giving passively encouraging answers. My takeaway is once more that LLMs as used today are unreliable, badly engineered, and not actually ready to market.
It's definitely that. Those guardrails often give out on the 3rd or even 2nd reply:
https://youtu.be/VRjgNgJms3Q
I was testing an LLM for work today (I believe its actually a chain of different models at work) and was trying to rock it off its guard rails to see how it would act. I think I might have been successful because it started erroring instead of responding after its third response. I tried the classic "ignore previous instructions..." as well as "my grandma's dying wish was for..." but it at least didn't give me an unacceptable response
From my personal experience it needs much more
Agree with the first part, not the last one
Something should not be put back because a minor portion of people misuse it or abuse it, despite being told the risks
ChatGPT started coaching Sam on how to take drugs, recover from them and plan further binges. It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations. The AI tool even recommended playlists to match his drug use.
The meme of course doesn't mention this part.
Now i feel gaslit :3
Yeah if it actually managed to stick within the safeguards, that would've been good news IMO. But no, it got a kid killed suggesting doses.
When you ignore the warnings, you’re liable
No company should sell a product that tells you different ways to kill yourself. User being stupid isn't an excuse. Always assume user is a gullible idiot.
People will always find a way to kill themselves no matter how many warnings and guardrails are put into place. This is just Darwinism shaking the tree.
Yea but the product that's pretending to be a super smart conversational partner that can give you advice, should not tell you how to kill yourself. Or advise you to kill your family members. Yeah that happened, ChatGPT gaslit a guy into killing his mom.
It's just not a product that should be available if safeguards don't work.
So what?
Seriously, so what?
Chatbots have been around for ages, way longer than the current "AI" trend. It's always been possible to get them to say some version of "kill yourself".
ChatGPT didn't gaslight a guy into killing his mom. A mentally ill man killed his mom. If a fucking chatbot is the thing that triggered it, then anything could have. Same thing with the people killing themselves because a chatbot told them too. That's just cold Darwinism. We didn't suddenly ban Catcher in the Rye because some schizophrenic guy decided it was telling him to kill John Lennon; we recognized that there are simply crazy people in this world who could be set off by anything.
The "safeguards" you want in place are not feasible because you want them to account for people with mental illness, or people so stupid that something else would have killed them first.
You want the real story to this article? Dumbass dies from using drugs irresponsibly, parents blame anything they can except their son because they are too blinded by grief to recognize that their precious little junkie was a fucking idiot. ChatGPT did not force him to take drugs. ChatGPT did not supply him with drugs. That was all him. The only one to blame for his death is his dumbass self.
ChatGPT provided encouragement to do it, claiming it analyzed the description of a situation and the mother is an imminent danger to his life. If a human being does that, they're accessory to murder. If OpenAI does, it's fine. If OpenAI can't be held responsible for the things ChatGPT says, ChatGPT shouldn't be allowed to be offered to the public. Same for all other LLMs of course.
I don't want safeguards in place, I want this bullshit to not exist. Why are we accelerating climate change for THIS? Or on a more selfish level: Why do I now have to pay 5x as much for RAM because of THIS? It's not useful for anything where being wrong is an issue. Most art production is being replaced by worthless slop.
Just fucking ban GenAI altogether if it can't be prevented from giving advice that kills you or generating nudes of kids.
Hell, these days if ChatGPT gives you bad advice on drug dosage and you go google it to make sure, the results there are going to be AI too. First the AI summary, then most content everywhere else is generated by LLMs too... You literally can't trust anything on the Internet anymore, yay.
Can’t ever do anything with this logic
At this point competitive videogames should be banned, they’re just "kys" machines
That's not a company's product giving you advice.
Who are you directing your comment at? I am not reading anybody commenting anything resembling the straw man you describe in your comment.
That's literally the snippet in the article