The tech industry is built almost entirely on IP law, yet to create LLMs these companies felt comfortable completely ignoring it, and they will still sue you the moment you break an IP law that favors them.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
The economic bubble being created between the AI and hardware companies is going to pop and take out huge swathes of the broader economy, a la mortgages in 2008.
It's creating a great deal of artificial scarcity, causing prices to skyrocket in the one thing I still cared about: computers and personal electronics in general.
But moreover, it's not JUST that the content it generates is shit, it's that it's all gormlessly uncreative regurgitation. A PERSON can create shit, and it'd still be more interesting both intellectually and emotionally than the slop output of a hallucination box. An actual imagination synthesizes new ideas out of extant ones--still derivative, but transformative and novel. AI will never actually create anything original, though, because all it can put out is shoddy facsimiles of what was put in.
... ever feel like, although entropy is inevitable, living things have some limited ability to create a bit of an eddy in that flow, a localized spot of turbulence where the intention and agency of biological processes uses some of that energy as it passes to sort this, and store that, and painstakingly whittle a little signal from a lot of noise...?
Meanwhile AI ... doesn't. It's not signal. It's JUST noise. If we see any signal in it, it's because we're projecting it from our own perspective. As sapient minds, we're deriving meaning from what we see, and injecting meaning into what we do. If there can be said to be any creativity in AI whatsoever, the sole province of it is the curation and--again--projection of the user. Only as much creativity as watching clouds roll across a blue sky and pondering what that cloud kinda looks like.
Except in this case each one of those clouds is consuming megawatt-hours of energy, boiling off hundreds of gallons of otherwise potable water, and burning out GPU, memory, and storage hardware that would've been better utilized on literally any other activity imaginable.
I don’t hate AI, I kinda like it and I do think it will redefine computing in the future.
I hate the people behind it, and who are pushing it. I hate the hype, I hate the pressure, I hate how dangerous it is without guardrails, I hate the stupidity of people using it, and getting addicted to it because it’s an ego stoking machine. I hate the entire industry around it.
But I really enjoy using and learning about it in my own sandbox. I use a number of local LLMs successfully for research and learning. But I don’t trust them, at all; I think of them as an egotistical know-it-all who has no problem lying to make themselves feel smart. There’s tons of useful info to get from them, but you have to understand what you are dealing with.
I’m very curious what the future looks like. It really depends on us, though. Critical thinking and being observant are the new critical skills for success in the future. Unfortunately neither is particularly common these days, so I have a feeling I will continue to hate for a while now…
- AI psychosis is a real thing
- It manipulates users
- It gives the billionaires more power
- Surveillance
- It's used in war and genocide
AI chatbots have caused several suicides, and the companies aren't being held accountable enough.
Also, they're very wasteful.
They're very confidently wrong about pretty much all subjects.
GenAI has been trained on illegally obtained copyrighted material.
The AI bubble is causing prices of consumer electronics to skyrocket.
AI bros love the tools, and like crypto bros, they're douchebags.
I don't hate AI as a tool where it is needed and really useful. I hate the BS spy LLMs from big corporations, the hype to include them even in a fridge, substituting them for one's own intelligence and creativity, and using them for misinformation and deepfakes.
It's just another tool for companies to evade customer confrontation. It's just another tool to score online likes for the least effort. It's just another tool to grift. It's just another tool to claim creativity while no creativity was part of it.
And then there's the issue of fascism. From one source of "oversight", AI gives people what they "need to know", how to write things, what decisions to make. It is "handy", but it takes over control. AI tells you both what to find online and how to be found more easily. And in that way AI tells you what is "normal", how to "protect against misinformation", and explains what the new "truth" is. That gets rehashed until everything is mixed together into a beige mash, and only the central "oversight" is allowed to say "no"... Why? Uh, black-box decisions. Perhaps we should call it Techno-Fascism.
But for the rest, it's a great tool to be used in the field of science.
The logic of an LLM is that it creates phrases from patterns of "most likely" sequences. In some contexts this can seem right (like with predictive text turned on), but that hides how, in technical areas, AI actively drags people's beliefs towards the generic, most common usage, which is often plain wrong. And the more AI is used, the more a body of mediocre or false information will exist, creating a vicious circle of language use. People who strive for perfection or improvement will be drowned in the flood of lazy AI.
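To make that "most likely sequence" point concrete, here's a toy sketch (mine, purely illustrative; real LLMs use neural networks over subword tokens rather than word counts, but the "always pick the most probable continuation" idea is the same) showing how greedy most-likely generation collapses onto the most common phrasing in its training text:

```python
# Toy "most likely next word" generator -- NOT how a real LLM is built,
# just an illustration of the greedy pick-the-most-common-continuation idea.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count which word most often follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break
        # Always take the single most frequent continuation:
        # generic phrasing wins, rare-but-correct phrasing never appears.
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the dog sat on the dog sat on the"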
I hate AI itself plenty, but I hate the way regular people see AI even more. It's marketed as being this super smart always right do everything machine. And people believe it, they actively argue the AI is right or has certain capabilities, even tho it's obviously wrong and most certainly does not have those capabilities. They use it a lot and they keep on using it, even when it's wrong all the time. And a lot of people know it's wrong all the time, but they keep on using it. They gleefully exclaim their job is so much easier now that the AI does all the work. To me that just screams they were either bad at their job if the AI can do better, or they are now doing their job poorly with AI. And even if the AI did the job just fine, why would you be happy about that? That means you are going to get replaced by a bot.
We wouldn't have so many issues with AI at the moment if people just used some common sense and didn't believe every bit of marketing they ever see.
I can’t say I hate specifically “AI”; it’s more that I have little love for Big Tech in general, and GenAI is just a perfect expression of everything that went wrong with it. It’s trained using what is little better than slave labour; on data taken without creators’ consent; and with no concern for the damage done to the environment. Then it’s shoved down everyone’s throat with the clear intent of taking away people’s livelihoods. And it’s not even reliably usable for most tasks.
My hate is for this general attitude, not for the tech itself. Although, given the costs and requirements of LLM training, it might well prove impossible to decouple this specific tech from that attitude.
I was stoked hearing about AI a few years ago. So I messed around with it. It was not any better than shit like Bonzi Buddy or ChatBot back in the day. But everyone keeps treating it like an information tool; something that is actually smart. It isn't.
It can't even search the web and give accurate, relevant results better than the methods used just prior to AI taking over (even with all the mounting SEO bullshit it was still better).
It isn't making tasks that humans don't wanna do easier or more possible; it's instead taking over the jobs of artists to create things that have no passion behind them.
It's being shoved everywhere it doesn't belong, whether it makes sense or not.
It consumes way too much energy for how utterly fucking useless it is.
It is literally making people who rely on it fucking dumber, which is scary because people were already pretty fucking stupid before.
And the people building and promoting AI and AI companies are all shit-for-brain nepo babies and right-wing grifters.
Because the future potential it has in transforming the world for the better is absolutely astonishing.
But our execution of it: the overhyped, barely usable projects; the instant enshittification of everything by capitalism empowered by it; the blind masses glorifying these experiments as an all-knowing, always-just entity because it feeds their ego.
I don't hate so-called AI per se. I see great use cases for people with disabilities. There are promising signs of it improving medical diagnoses (under properly tested conditions). I think even in my life I will learn to use some of the tools. Eventually. I try to avoid it right now as much as I can.
I hate the people peddling so-called AI as the solution to all problems, including already solved problems. I hate the mad rush on it because it risks negating all the positive greenhouse emission savings we have managed to get done. It will probably incur a greater water debt, i.e. more drinking water future generations will be forced to desalinate if they want to live. And it will make the next computing device you want to buy mad expensive because of the RAM shortage. I hate that this rush is a bubble that may not burst but drives prices up.
My take on the problems of AI:
- Destroying the environment / draining water faster (is that only ChatGPT?)
- Increasing RAM (and even GPU and mass storage) prices
- Costs jobs
- Training on everything without respecting the original licenses/creators
- Does not respect robots.txt (see the sketch after this list)
- Usually not a reliable source of information, since every LLM can hallucinate
- Destroying creativity / oversaturating the market
- Not reliable enough to use as your assistant, due to hallucination, so it's better to research the answers the AI gives you (I hate how sometimes I feel like it's easier, safer, less ethical and faster to ask AI than to look up the answer, or maybe that's just me)
- Censoring topics (is that only Palestine genocide topics, and is that also only ChatGPT?)
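On the robots.txt point, here's a minimal sketch (my own, with a made-up bot name and placeholder URLs) of what a polite crawler is supposed to do: check the site's published rules before fetching anything, which is exactly the step AI scrapers are accused of skipping.

```python
# Minimal sketch of "respecting robots.txt": ask the site's robots.txt
# whether this user agent may fetch a URL before actually crawling it.
# "ExampleBot" and the URLs are placeholders for the example.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # download and parse the site's robots.txt rules

url = "https://example.com/some/article"
if rp.can_fetch("ExampleBot", url):
    print("Allowed to crawl:", url)
else:
    print("Disallowed by robots.txt; a polite crawler stops here:", url)
```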
Of course, I'm not stopping anyone from using AI.
Mostly verbatim from here (not an ad).
I also want to add something (maybe a hot take):
I have also seen usage of AI that is actually "not too bad" by my standards, like being used only in a small part or a few parts.
And, rarely, the creator had a detailed explanation of why they used AI.
Obviously the elephant in the room is ethics as detailed above.
Dramatically accelerating the erosion of knowledge and the ability to seek information and develop critical thinking skills.
People no longer look for different sources of information (a trend started with social media but now expanded and accelerated by LLMs) but take the first thing they see as gospel truth.
Businesses and bad actors are realising this and are flooding "scraped spaces" with false, misleading or flattering information, which is rote-copied by ChatGPT et al. and given a veneer of credibility because it's labelled as "intelligent".
I like AI. You just need to think of it like a search engine. Not a person.
Might be an unpopular opinion: I don't hate AI as in the technologies themselves, like LLMs and ML. The possibilities are limited, but when used consciously, with the drawbacks and faults in mind, they can be useful. If you want to hate anything, hate the players, not the game...
- People who sell LLMs to customers under false pretenses
- People who force the use of LLMs for tasks they are objectively bad at
- People who build massive datacenters, ruining the environment for their dubious claims.
- People who feed the LLMs with a massive amount of stolen training data
- People who release those LLMs to customers who are not educated to deal with them (causing AI psychosis and general brainrot)
- People who sell that stuff as if it was magic instead of what it really is. A sophisticated autocomplete.
- People who sell that stuff as if it was close to being a superintelligence and therefore dangerous. Which is bullshit. The dangers lie in LLM chatbots being confidently wrong, persuading unsuspecting users to believe the hype
- People... I think there is a pattern here.
I generally agree with some asterisks.
It is not people; my neighbors are not trying to sell me AI. It's the capitalist class looking to make a buck at the expense of workers.
And another thing is that LLMs in their current form require massive data and data centers. So hating LLMs and hating the infrastructure is, I think, the same thing here.