this post was submitted on 29 Sep 2025
9 points (100.0% liked)

Fuck AI


This is my comprehensive case that yes, we’re in a bubble, one that will inevitably (and violently) collapse in the near future.

In 2022, a (kind-of) company called OpenAI surprised the world with ChatGPT, a website that could generate text that sort-of sounded like a person, powered by a technology called Large Language Models (LLMs), which can also be used to generate images, video and computer code.

Large Language Models require entire clusters of servers connected with high-speed networking, all containing GPUs (graphics processing units). These are different from the GPUs in your Xbox, or laptop, or gaming PC. They cost much, much more, and they're good at two processes: inference (the creation of any LLM's output) and training (feeding masses of training data to models, or feeding them information about what a good output might look like, so they can later identify a thing or replicate it).

These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem: because they’re probabilistic, these models can’t actually be relied upon to do the same thing every single time.

So, if you generated a picture of a person that you wanted to, for example, use in a story book, every time you created a new page, using the same prompt to describe the protagonist, that person would look different — and that difference could be minor (something that a reader should shrug off), or it could make that character look like a completely different person.

Moreover, the probabilistic nature of generative AI meant that whenever you asked it a question, it would guess at the answer, not because it knew the answer, but because it was predicting the most likely next word in a sentence based on its training data. As a result, these models would frequently make mistakes — something which we later referred to as “hallucinations.”
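To make that concrete, here's a toy Python sketch of what “guessing the right next word” means. The probability table, the example prompt and the numbers are all made up for illustration; a real model learns distributions like this over tens of thousands of tokens, conditioned on everything that came before:

```python
import random

# Made-up next-word probabilities, standing in for what a real
# model learns from its training data.
NEXT_WORD = {
    "the capital of australia is": {
        "canberra": 0.6,   # the right answer
        "sydney": 0.3,     # plausible-sounding, but wrong
        "melbourne": 0.1,  # also plausible-sounding, also wrong
    },
}

def sample_next_word(prompt: str) -> str:
    """Sample the next word in proportion to its probability."""
    dist = NEXT_WORD[prompt.lower()]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt, five runs: usually "canberra", but roughly
# four times in ten, a confidently stated wrong answer.
for _ in range(5):
    print(sample_next_word("The capital of Australia is"))
```

Nothing in that process “knows” the answer; the sampler is just as happy to emit the wrong city, it simply does so less often.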

And that’s not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that using material scraped from books and the web without the owners’ permission was (and remains) legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses.

These problems were overshadowed by something flashy and new, something that investors — and the tech media — believed would eventually automate the work that has proven most resistant to automation: knowledge work and the creative economy.

top 2 comments
[–] stabby_cicada@slrpnk.net 3 points 15 hours ago

This is the true nature of labor that executives fail to comprehend at scale: that the things we do are not units of work, but extrapolations of experience, emotion, and context that cannot be condensed into written meaning. Business Idiots see our labor as the result of a smart manager saying “do this,” rather than human ingenuity interpreting both a request and the shit the manager didn’t say.

What does a CEO do? Uhhh, um, well, a Harvard study says they spend 25% of their time on “people and relationships,” 25% on “functional and business unit reviews,” 16% on “organization and culture,” and 21% on “strategy,” with a few percent here and there for things like “professional development.” 

That’s who runs the vast majority of companies: people who describe their work predominantly as “looking at stuff,” “talking to people” and “thinking about what we do next.” The most highly-paid jobs in the world are impossible to describe, their labor summed up in a mish-mash of LinkedInspiration, yet everybody else’s labor is an output that can be automated.

As a result, Large Language Models seem like magic. When you see everything as an outcome — an outcome you may or may not understand, and definitely don’t understand the process behind, let alone care about — you kind of already see your workers as LLMs.

Emphasizing this because it's absolutely true. And it's why I've believed, for years, that the United States no longer has a class system. It has a caste system.

We have a management caste - a CEO caste - whose members are born and raised among CEO families, who are educated as CEOs, who are assigned to CEO-track positions from the beginnings of their careers, and who will never work at anything less than a high leadership position no matter how much they fail at leadership.

And we have a labor caste whose children dream of being successful influencers and podcasters and video game players instead of following their parents' trades. They know, if they enter the corporate world, they'll never be more than skilled labor, because they lack the family connections to go further.

The traditional American "working class businessman" who started young at the bottom of the company and worked his way up to CEO doesn't exist anymore. If you start with an entry-level job in America in the 21st century, you're not going to work your way up to management. Ever. You're going to get capped at some sort of senior worker position while your CEO hires the 20-year-old son of his golf partner as your manager.

We have no social mobility. We have no economic mobility.

And don't get me started on the billionaire caste.

LLMs aren't the cause of this. They're just a symptom. Like the author says elsewhere, LLMs can't actually do your job, but they can convince your boss to fire you and replace you with an LLM.

But they can only do that because the management caste and the labor caste are so isolated from one another that management doesn't understand and doesn't care how their workers actually do their work.

Because the management caste is taught from birth that all labor is unskilled labor and all workers are fungible, programmable NPCs - and they only communicate with lower caste workers in formal, ritual settings like "all hands broadcasts" and "team meetings" where the workers are heavily discouraged from doing anything a programmable NPC wouldn't.

So why wouldn't they believe an LLM, programmed to flatter them and agree with them, could do a worker's job? After all, that's the only interaction they ever have with their workers.

And that's the only silver lining of LLMs: that they are, ultimately, a grift, and the victims of that grift will ultimately include the CEOs and MBAs who so richly deserve it.

[–] fartographer@lemmy.world 1 points 18 hours ago

I love guessing abbreviations!

Ummmm....

"Would You Eat Ass?"