this post was submitted on 16 Aug 2025
569 points (98.5% liked)
Programmer Humor
Yes, it's not linear. The progress of GenAI over the past two years has been logarithmic at best if you compare it with the boom of 2019-2023 (GPT-2 to GPT-4 in text, DALL-E 1 to 3 in images). The big companies trained their networks on all of the internet and ran out of training data; compare GPT-4 to GPT-5 and it's pretty obvious. Unless there's a significant algorithmic breakthrough (which is looking less and less likely), text-based AI at least is not going to see another order-of-magnitude improvement for a long time. Sure, it can already replace maybe 10% of devs who are doing boring JS stuff, but replacing at least half of the dev workforce is a pipe dream of the C-suite for now.
Up until last week I worked for a stupidly big consumer-data company, and our in-house AI tools were not LLMs; they only used an LLM as a secondary interface. And let me tell you, none of you are ready for this.
The problem with current LLMs is confabulation, and it is not solvable; it's inherent in what an LLM is. The results I was generating were not from publicly available LLMs or LLM services, but from expert systems trained only on the pertinent datasets. These do not confabulate, because they are not word-guessing algorithms.
Think of it like Wolfram Alpha for human behavior.
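The distinction the comment draws can be sketched in a toy way: a word-guessing model always emits the statistically likely continuation, grounded or not, while a fact-lookup "expert system" only answers from its dataset and otherwise declines. This is a minimal illustration, not any real system; every name and data item here is hypothetical.

```python
import random

# Toy "word guesser": picks the statistically likely next token from
# co-occurrence counts. It always produces *something*, even for a
# question with no real answer -- this is how confabulation arises.
next_word_counts = {
    "capital_of_France": {"Paris": 9, "Lyon": 1},
    "capital_of_Atlantis": {"Paris": 1, "Atlantis_City": 1},  # no true answer exists
}

def guess_next(prompt: str) -> str:
    counts = next_word_counts[prompt]
    words = list(counts)
    weights = [counts[w] for w in words]
    # Weighted sampling: confidently answers regardless of grounding.
    return random.choices(words, weights=weights)[0]

# Toy "expert system"-style lookup: returns only facts present in its
# curated dataset, and admits ignorance instead of inventing an answer.
facts = {"capital_of_France": "Paris"}

def lookup(prompt: str) -> str:
    return facts.get(prompt, "unknown")

print(guess_next("capital_of_Atlantis"))  # emits a plausible-looking guess
print(lookup("capital_of_Atlantis"))      # admits "unknown"
```

The point of the contrast: the guesser's failure mode is built into its sampling step, while the lookup's refusal to answer is what makes it trustworthy on its own narrow dataset.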
People look at LLMs as the public face of AI, but they aren't even close to the most important part of it.