This belongs alongside articles with titles like "I hit myself in the nuts every day for a month. It hurt and this happened" or "I tried using condiments as clothing for a month. I now have a rash and/or chemical burns, and this AMAZING thing happened".
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything marketed as "AI" in order to increase market valuations.
Either of those would be more surprising than this.
I used AI chatbots as a source of news for a month, and they were unreliable and erroneous
blink blink
Oh, I'm sorry. Were you expecting me to be surprised? Was I supposed to act shocked when you said that?
Ok, ok. Hold on. Let me get my shocked face ready.....
shocked pikachu
The article really isn’t for those of us who already know how terrible ‘AI’ is - it’s for those who treat it like it’s the infallible holy grail of all the answers in the world. Sadly, I’ve met some such people for whom this article might be illuminating.
"I used a hammer as a saw for a month and found that it was too dull to get the job done". That's what this sounds like. Nobody needed to use AI chatbots as a news source to know that they're unreliable. The people who do this already know and don't care. This article isn't gonna change their minds. They like it.
I'm interested in how they were wrong. Pointedly, was there a Right/MAGA/Nazi bias or trend apparent in the errors? Or is it just stupid?
Because "wrong" is just a mistake, but lying is intentional.
It's fair to wonder about the specifics, but the word "lying" implies that the LLM knows it's providing a falsehood as fact. It doesn't know. Its "lies" are either hallucinations (where it doesn't have the information in its data set, can't return what was requested, and so provides incorrect information because that output is statistically as close as the thing can get), or incorrect information provided because the guardrails set by the company engineering it said to provide Y when queried with X.
There is no thought involved in what the LLM does.
The AI's built-in data will be months out of date, and if it even bothers to grab the latest headlines, it can and will cherry-pick depending on how it's been programmed for bias. Grok would probably tell you the world is ending because of "the left".
What was the "new research"? Actually using the fucking things for five minutes?
Can't say this surprises me too much.
So this person doesn't know how AI gets its data
It's never up to date with the latest
Who would've thought