this post was submitted on 03 May 2026
Microblog Memes
A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.
Created as an evolution of White People Twitter and other tweet-capture subreddits.
RULES:
- Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
- Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
- You are encouraged to provide a link back to the source of your screen capture in the body of your post.
- Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
- Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
- Absolutely no NSFL content.
- Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
- No advertising, brand promotion, or guerrilla marketing.
you are viewing a single comment's thread
I’m not sure some of these actual people could pass a Turing test.
Honestly that's how I feel. AI is very flawed, no doubt, but it's less flawed than most humans. I've got people at work who hallucinate more than the first ChatGPT model lol
I really hate the term hallucinate because it's a complete misrepresentation of what is actually happening. A hallucination is a delusion that reality is different from what is objectively true, i.e. the person you are seeing and speaking to is not actually there.
When an AI "hallucinates," it's not because of some broken circuitry; it's simply because its programming has locked onto an untrue piece of information in its training data. If the data set had been limited to objective facts rather than simply spilling the internet all over it, hallucinations wouldn't be a problem.
They use the term hallucinate because it distances them from the responsibility of actually curating the data set, which of course they won't do because that would take a lot of time, and then they wouldn't be competitive with all of the other tech bros releasing a new "groundbreaking" AI every 3 months. It is an entirely self-generated problem that they're going to hand-wave away and never fix.
Fair enough, it does sound more like "lying" (which humans do all the time)
I was with you until your second sentence. You're still giving it too much credit. It's a text prediction engine. It uses training data to correlate words with a lot of context. It's not a delusion because it doesn't "know" anything, it has just encoded correlations.
While hallucinations can be the result of bad information included in the training data, due to the nature of how it works, it will always have a set of tokens it deems more likely given any context and will predict those tokens. Now, it's possible that it will predict "I don't know", but only if it decides that that is the most likely response to the prompt. Maybe the context given will correlate closer to a different answer, even with curated training data.
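The point above — that the model always emits whichever tokens it deems most likely given the context, with truth never entering into it — can be sketched with a deliberately tiny toy. This is a hypothetical bigram counter, nothing like a real LLM, just an illustration of "most likely next token" prediction:

```python
# Toy sketch (hypothetical, NOT a real LLM): a bigram "model" that always
# emits whichever next token it saw most often after the current one.
from collections import Counter, defaultdict

def train(corpus):
    """Count next-token frequencies for each token (a stand-in for training)."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict(counts, token):
    """Greedily return the most likely next token; truth never enters into it."""
    if token not in counts:
        return None  # the toy's version of "I don't know"
    return counts[token].most_common(1)[0][0]

corpus = "the moon is cheese . the moon is cheese . the moon is rock ."
model = train(corpus)
print(predict(model, "is"))  # -> cheese  (the frequent answer, not the true one)
```

Because "cheese" follows "is" more often in this toy corpus, it wins every time; the model has no separate channel for checking whether that's actually true, which is why curating the data shifts the frequencies but can't eliminate confident wrong answers.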
Uncurated training data doesn't help, of course, but curating the training data better won't resolve this problem, only improve it (assuming your goal is to make it more truthful, which could counter its ability to innovate and work outside of what it knows for sure; so I'm not even sure "always be truthful" is a worthwhile goal for AI, especially from the perspective of those who own and control the big ones, who ultimately get the say unless it's taken from them).