This is my comprehensive case that yes, we’re in a bubble, one that will inevitably (and violently) collapse in the near future.
In 2022, a (kind-of) company called OpenAI surprised the world with a website called ChatGPT, which used a technology called Large Language Models (LLMs) to generate text that sort-of sounded like a person. LLMs can also be used to generate images, video and computer code.
Large Language Models require entire clusters of servers connected with high-speed networking, each packed with GPUs (graphics processing units). These are different from the GPUs in your Xbox, or laptop, or gaming PC. They cost much, much more, and they’re specialized for inference (the creation of the output of any LLM) and training (feeding masses of training data to models, or feeding them examples of what a good output might look like, so they can later identify a thing or replicate it).
These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem: because they’re probabilistic, these models can’t actually be relied upon to do the same thing every single time.
So, if you generated a picture of a person that you wanted to, for example, use in a story book, every time you created a new page, using the same prompt to describe the protagonist, that person would look different — and that difference could be minor (something that a reader should shrug off), or it could make that character look like a completely different person.
Moreover, the probabilistic nature of generative AI meant that whenever you asked one of these models a question, it would guess at the answer, not because it knew the answer, but because it was predicting the most likely next word in a sentence based on its training data. As a result, these models would frequently make mistakes, something we later came to call “hallucinations.”
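The next-word guessing described above can be sketched in a few lines of Python. This is a toy model with made-up probabilities, not output from any real LLM; the prompt and the numbers are purely illustrative:

```python
import random

# Toy next-token distribution: the kind of probabilities a model might assign
# after the prompt "The capital of Australia is". (Invented numbers.)
next_token_probs = {
    "Canberra": 0.55,   # the correct answer
    "Sydney": 0.30,     # plausible-sounding but wrong -> a "hallucination"
    "Melbourne": 0.15,
}

def generate(probs: dict[str, float]) -> str:
    """Sample one token from the distribution, the way an LLM picks its next word."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt can yield different answers on different runs, and a wrong
# answer comes out looking exactly as confident as a right one:
answers = {generate(next_token_probs) for _ in range(50)}
print(answers)
```

The point of the sketch is that nothing in the sampling step distinguishes a true continuation from a false one; the model only ever sees relative likelihoods.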
And that’s not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that scraping material from books and the web without the owners’ permission was (and remains) legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses.
These problems were overshadowed by something flashy and new, something that investors and the tech media believed would eventually automate the work that’s proven most resistant to automation: knowledge work and the creative economy.
Emphasizing this because it's absolutely true. And it's why I've believed, for years, that the United States no longer has a class system. It has a caste system.
We have a management caste - a CEO caste - whose members are born and raised among CEO families, who are educated as CEOs, who are assigned to CEO-track positions from the beginnings of their careers, and who will never work at anything less than a high leadership position no matter how much they fail at leadership.
And we have a labor caste whose children dream of being successful influencers and podcasters and video game players instead of following their parents' trades. They know, if they enter the corporate world, they'll never be more than skilled labor, because they lack the family connections to go further.
The traditional American "working class businessman" who started young at the bottom of the company and worked his way up to CEO doesn't exist anymore. If you start with an entry-level job in America in the 21st century, you're not going to work your way up to management. Ever. You're going to get capped at some sort of senior worker position while your CEO hires the 20-year-old son of his golf partner as your manager.
We have no social mobility. We have no economic mobility.
And don't get me started on the billionaire caste.
LLMs aren't the cause of this. They're just a symptom. Like the author says elsewhere, LLMs can't actually do your job, but they can convince your boss to fire you and replace you with an LLM.
But they can only do that because the management caste and the labor caste are so isolated from one another that management doesn't understand and doesn't care how their workers actually do their work.
Because the management caste is taught from birth that all labor is unskilled labor and all workers are fungible, programmable NPCs - and they only communicate with lower caste workers in formal, ritual settings like "all hands broadcasts" and "team meetings" where the workers are heavily discouraged from doing anything a programmable NPC wouldn't.
So why wouldn't they believe an LLM, programmed to flatter them and agree with them, could do a worker's job? After all, that's the only interaction they ever have with their workers.
And that's the only silver lining of LLMs: that they are, ultimately, a grift, and the victims of that grift will ultimately include the CEOs and MBAs who so richly deserve it.