If I download a torrent or plagiarize or violate copyright I'm breaking the law and could face serious legal consequences, but if a big company torrents to train their dataset or generates plagiarized/copyright violating slop it's just fine.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
LLMs are a tool with vanishingly narrow legitimate and justifiable use cases. If they can prove to be truly effective and defensible in an application, I’m OK with them being used in targeted ways much like any other specialised tool in a kit.
That said, I have yet to identify any use of LLMs today that clears my technical and ethical bar to justify their use.
My experience to date is the majority of ‘AI’ advocates are functionally slopvangelical LLM thumpers, and should be afforded respect and deference equivalent to anyone who adheres to a faith I don’t share.
I don't. I hate machine learning slop being marketed as "AI" and assholes buying up years of hardware stock & burning through water supplies and energy like they want this planet to become uninhabitable within a decade.
I like the AI that existed prior to the LLM and genAI slop fest. AI used to mean the tools that were used to discover planets, a lot of complex matching systems, and even tools for making music! Pattern matching using neural networks and other methods is awesome and has a huge number of positive uses.
So when I say I hate AI, I am referring to those for-profit companies who jam this slop into everything, increase pollution, give companies reasons to fire everyone, and drive up prices by buying up all the hardware even harder than scalpers ever did. The companies that have co-opted the term AI are who I am referring to.
They are speed running culture into a shithole while bending us over and telling us to thank them for it. No, I do not want a fucking summary of a three word text message. No, I don't want a summary of what I am reading. No, I don't want someone else to read your inaccurate summary of the email I wrote. No, I don't want your made up bullshit spewed to people I know so that I have to constantly ask where they got some fabricated information and explain that your shit shoved into everything is not reliable. No, putting a fucking warning to double check results is not a solution, especially when you drown out the right information.
AI was better back before fascists decided to use it to try and take over the world.
I don't hate so-called AI per se. I see great use cases for people with disabilities. There are promising signs of it improving medical diagnoses (under properly tested conditions). I think even in my life I will learn to use some of the tools. Eventually. I try to avoid it right now as much as I can.
I hate the people peddling so-called AI as the solution to all problems, including already solved problems. I hate the mad rush on it because it risks negating all the positive greenhouse emission savings we have managed to get done. It will probably incur a greater water debt, i.e. more drinking water future generations will be forced to desalinate if they want to live. And it will make the next computing device you want to buy mad expensive because of the RAM shortage. I hate that this rush is a bubble that may not burst but drives prices up.
Dramatically accelerating the erosion of knowledge and the ability to seek information and develop critical thinking skills.
People no longer look for different sources of information (a trend started with social media but now expanded and accelerated by LLMs) but take the first thing they see as gospel truth.
Businesses and bad actors are realising this and are flooding "scraped spaces" with false, misleading or flattering information, which is rote-copied by ChatGPT et al. and given a veneer of credibility because it's labelled as "intelligent".
It's just another tool for companies to evade customer confrontation. It's just another tool to score online likes for the least effort. It's just another tool to grift. It's just another tool to claim creativity while no creativity was part of it.
And then there's the issue of fascism. From one source of "oversight", AI gives people what they "need to know", how to write things, what decisions to make. It is "handy", but it takes over control. AI tells you both what to find online and how to be found more easily. And in that way AI tells you what is "normal", how to "protect against misinformation", and explains the new "truth". That gets rehashed until everything is mixed together into a beige mash, and only the central "oversight" is allowed to say "no"... why? Uh, black-box decisions. Perhaps we should call it Techno-Fascism.
But for the rest, it's a great tool to be used in the field of science.
Because the future potential it has in transforming the world for the better is absolutely astonishing.
But our execution of it: the overhyped, barely usable projects; the instant enshittification of all things by capitalism, empowered by it; the blind masses glorifying these experiments as an all-knowing, always-just entity because it feeds their ego.
Simply because it is taking jobs, taking money from hardworking people and giving even more back to the ultra wealthy.
I don't hate AI. That's what bothers me most about all this I think. LLMs aren't AI. I've been bothered with using the term AI as a catch all for ML for so long now.
LLMs have some form of machine intelligence and pattern matching. However, the majority of their output is just compositions of their training data. They aren't intelligent, and any "intelligence" they possess is just ripped from other places. Real people generate the value while other people give the LLM the credit.
They are a tool, nothing more, and definitely can't replace any but the most mind-numbing jobs.
Not only that, but it's been hyped up by the most annoying and shitty people. It's destroying the economy, not because it's replacing jobs but because it's over-inflating the value of a select group of companies. Everyone is scrambling to adopt a technology that doesn't deliver on its promises, meaning worse-quality everything. And what's worse is that these companies willingly manipulate the public to hide its shortfalls. Decent "thinking" LLM models are incredibly expensive to run and generate very little value. It's all subsidized. When the real price hits the fan, we will all be fucked.
I don't hate AI. Hell I don't even particularly hate LLMs. I hate the hype, I hate LLM bros, and I hate the market.
It adds to a culture of bullshit which tries to take control over what you think, perceive and believe. That is even more of the bullshit that flourishes in big corporations, and it is a gesture of dominance over your mind. It is not only antithetical to free thinking, use of your own intelligence, and science as a search for understanding, but it is also antithetical to enlightenment, which is one foundation of our modern democratic societies - and a prerequisite for science powerful enough to navigate our dangerous world.
I don’t hate AI, I kinda like it and I do think it will redefine computing in the future.
I hate the people behind it, and who are pushing it. I hate the hype, I hate the pressure, I hate how dangerous it is without guardrails, I hate the stupidity of people using it, and getting addicted to it because it’s an ego stoking machine. I hate the entire industry around it.
But I really enjoy using and learning about it in my own sandbox. I use a number of local LLMs successfully for research and learning. But I don’t trust them, at all, I think of them as an egotistical knowitall who has no problem lying to make themselves feel smart. There’s tons of useful info to get from them, but you have to understand what you are dealing with.
I’m very curious what the future looks like. It really depends on us, though. Critical thinking and being observant are the new critical skills for success in the future. Unfortunately, neither is particularly common these days, so I have a feeling I will continue to hate for a while now…
Mostly your points 1,3 and 4. I'm less offended by the second one.
I don't really hate AI; it's an interesting (and rarely, useful) tool. What I hate is the drive to push it into every part of our lives. As it is now it isn't suited for the uses they're pushing, and we are currently a long way off of training a model in a way that could be. Add to that the drive to push advertising via AI and most of what's out there is now entirely suspect. All that to say, I think the issue is capitalism moreso than AI.
LLMbeciles are dangerously incompetent tools that unfortunately "hack" a weakness in human perception: we are hard-wired to equate eloquence and confidence with intellect. (The so-called fluency heuristic.) LLMbeciles are very fluent, eloquent, and confident, and we are very vulnerable to that combination. As a result, outside our areas of expertise we have a tendency to trust LLMbecile output despite the fact that it is literally 100% hallucinated bullshit (in the Frankfurt sense). It just happens that, by the statistics of the human language stolen to build the model, these hallucinations match reality enough to fool non-experts. And that's the danger: they're "right" (which is to say their bullshit semi-accidentally matches reality) often enough that we don't catch the cases where their bullshit is just plain wrong.
This is a pattern I see with a lot of people who have areas of high expertise:
- "LLMbeciles are not really useful in this field in which I have expertise..."
- "...but I think they're very useful in all these fields in which I have no expertise."
Gell-Mann must be rolling in his grave right now! (Yes, I know it's Crichton, but I'm sticking to his bit.)
I like AI. You just need to think of it like a search engine. Not a person.
I’ve had to help users see the light when AI claimed it had the solution to a problem, but after three troubleshooting steps it invoked menus from programs other than the one they were using.
The users then kept telling me that they had their original issue, plus their software was missing features.
And that’s great, I guess, when it makes for more feature-rich software. It’s a nightmare when the answer is a solid and resounding “No, that doesn’t work that way,” because AI doesn’t want to tell someone no, so it lies.
Might be an unpopular opinion: I don't hate AI as in the technologies themselves, like LLMs and ML. The possibilities are limited, but used consciously, with the drawbacks and faults in mind, they can be useful. If you want to hate anything, hate the players, not the game...
- People who sell LLMs to customers under false pretenses
- People who force the use of LLMs for tasks they are objectively bad at
- People who build massive datacenters, ruining the environment for their dubious claims.
- People who feed the LLMs with a massive amount of stolen training data
- People who release those LLMs to customers who are not educated to deal with them (causing AI psychosis and general brainrot)
- People who sell that stuff as if it were magic instead of what it really is: a sophisticated autocomplete.
- People who sell that stuff as if it were close to being a superintelligence and therefore dangerous. Which is bullshit. The dangers lie in LLM chatbots being confidently wrong, persuading unsuspecting users to believe the hype.
- People... i think there is a pattern here.
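For what the "sophisticated autocomplete" point above means in practice: at its core, a language model just predicts the next token from the ones before it. Here's a deliberately toy sketch of that idea using bigram counts (the corpus and function names are made up for illustration; real LLMs use vastly larger models and data, but the task is the same):

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which,
# then always emit the most frequent follower.
corpus = "the cat sat on the mat and the cat ran".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Everything a model like this "knows" is the statistics of its training text; scale that up by many orders of magnitude and you get fluent output with no understanding behind it.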
I generally agree with some asterisks.
It's not people in general; my neighbors aren't trying to sell me AI. It's the capitalist class looking to make a buck at the expense of workers.
And another thing: LLMs in their current form require massive data and data centers. So hating LLMs and hating the infrastructure are, I think, the same thing.