Because they are FUCKING TRASH.
Not for all use cases, but for most it is.
It'll right itself when the CEOs stop investing in it and forcing it on their own companies.
When they're not getting their returns, they'll sell their stocks and stop paying for it.
It'll eventually go back from slop generation to correction and light editing tools when venture capital stops paying for the hardware to run tokens and they have to pay to replace the cards.
And they will drop it altogether.
That's unfortunate, because I want an excuse not to be a corporate slave.
Tbh, better a corporate slave than a startup slave.
Because they suck.
Personal Anecdote
Last week I used the AI coding assistant within JetBrains DataGrip to build a fairly complex PostgreSQL function.
It put together a very well organized, easily readable function, complete with explanatory comments, that failed to execute because it was absolutely littered with errors.
I don't think it saved me any time but it did help remove my brain block by reorganizing my logic and forcing me to think through it from a different perspective. Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.
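A minimal sketch of one way to smoke-test that kind of generated SQL before trusting it, assuming psycopg 3 and a throwaway connection; the DSN and the generated DDL below are hypothetical stand-ins, not anything from the anecdote above:

```python
import psycopg  # psycopg 3; any Postgres driver works similarly

# Hypothetical stand-in for the assistant's output.
GENERATED_SQL = """
CREATE OR REPLACE FUNCTION demo() RETURNS int LANGUAGE sql AS $$ SELECT 1 $$;
"""

def dry_run(dsn: str, sql: str) -> None:
    """Run generated DDL inside a transaction that is always rolled back,
    so syntax and semantic errors surface without touching the schema."""
    with psycopg.connect(dsn) as conn:
        try:
            conn.execute(sql)   # parse + execute; raises on errors like those above
        finally:
            conn.rollback()     # never persist the experiment

dry_run("postgresql://localhost/test", GENERATED_SQL)  # placeholder DSN
```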
> Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.
Hey, look at the bright side, as long as you were chained to your desk instead, that's all that matters.
At one point I tried to use a local model to generate something for me. It was full of errors, but after searching online for a library or existing examples, I found a GitHub repo that was almost an exact copy of what it had generated. The comments were the same, and the code was mostly the same, except this version wasn't fucked up.
It turns out text prediction isn't that great at understanding the logic of code. It's only good at copying existing code, but it doesn't understand why the code works, so the predictive model fucks things up when it takes the less likely result. Maybe if you turned the temperature down so it only gives the highest-probability prediction it wouldn't be horrible, but you might as well just search online and copy the code it's going to generate anyway.
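A minimal sketch of the temperature mechanics described above, with made-up logits standing in for a real model's scores over three tokens:

```python
import numpy as np

logits = np.array([4.0, 3.5, 1.0])  # hypothetical scores: likely, plausible, unlikely

def next_token(logits: np.ndarray, temperature: float) -> int:
    if temperature == 0.0:
        return int(np.argmax(logits))      # greedy: always the top prediction
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

# Temperature 0 always reproduces the most likely (often memorized) continuation;
# higher temperatures surface "the less likely result" more and more often.
print(next_token(logits, 0.0), next_token(logits, 1.0))
```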
But.. how else do we sell our tool as a super intelligent sentient do-it-all?
The bigger problem is that your skills are weakened a bit every time you use an assistant to write code.
> The bigger problem is that your skills are weakened a bit every time you use an assistant to write code.
Not when you factor in that you are now doing code review for it and fixing all its mistakes.
Why is the Census Bureau tracking LLM adoption?
Kind of a weird title. Of course adoption would slow: the people who want it have adopted it, and the people who don't haven't.
Marx tapping the big sign marked "tendency of the rate of profit to fall", but then looking at the already unprofitable AI spin-offs and just throwing his hands up in disgust.
I think there's an argument to be made that the AI hype got a bunch of early adopters, but failed to entice more traditional mainstream clients. But the idea that we just ran out of new AI users in... barely two years? No. Nobody is really paying for this shit in a meaningful way. Not at the Enterprise Application scale of subscriptions. That's why Microsoft is consistently losing money (on the scale of billions) on its OpenAI investment.
If people were adopting AI like they'd adopted the latest Windows OS, these firms would be seeing a steady growth in the pool of users that would signal profitability soon (if not already). But the estimates they're throwing out - one billion AI adoptions in barely a year - are entirely predicated on how many people just kinda popped in, looked at the web interface, and lost interest.
We were initially excited by AI at my company, but after we used it a bit we didn't find any really meaningful use cases for it in our business model. And in most cases we spent a lot of time correcting its many errors, which would actually slow down our processes...
It would also slow if companies were told insane lies about the capability of "AI" ("it's like having a team of PhD-level experts at your disposal!") and then realized that many of these promises were total bullshit.
They dressed up a parrot and called it the golden goose and now they're chasing a wild goose.
Wild parrot surely
An undomesticated Psittaciformes.
IMO, AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.
Reminds me of ten other technologies where, if you didn't get in, the world was going to end, but which ended up more niche than you'd expect.
> AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.
I'm so sick of "AI demos" at work. Every demo goes like this.
- Generate text with an LLM.
- Don't fact check it.
- Don't verify it works.
- Oooh and aahhh at random numbers and charts.
- Higher ups all clap and say we could be 10x more productive if more people would just use AI more.
Meanwhile they ignore that zero AI projects have actually stuck around or get used in a meaningful way.
As someone who sometimes makes demos of our own AI products at work for internal use, you have no idea how much time I spend on finding demo cases where LLM output isn’t immediately recognizable as bad or wrong…
To be fair, it's pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tacked LLMs on because that's what's popular right now.
As someone who is excited about AI and thinks it's pretty neat, I agree we've needed a level-set around the expectations. Vibe coding isn't a thing. Replacing skilled humans isn't a thing. It's a niche technology that never should've been sold as making everything you do with it better.
We've got far too many companies who think adoption of AI is a key differentiator. It's not. The key differentiator is almost always the people, though that's not as sexy as cutting edge technology.
> The key differentiator is almost always the people, though that's not as sexy as cutting edge technology.
Evidently you haven't worked with me. I'm actually quite sexy.
The technology is fascinating and useful - for specific use cases and with an understanding of what it's doing and what you can get out of it.
From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn't at the point where it makes any fucking sense to have it plugged into fucking everything.
Leaving aside the questionable ethics many paid models' creators used to build their models, the backlash is understandable because the technology is being shoehorned into places it just doesn't belong.
I think eventually we may "get there" with models that don't make so many obvious errors in their output - in fact I think it's inevitable it will happen eventually - but we are far from that.
I do think that the "fuck ai" stance is shortsighted though, because of this. This is happening, it's advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I'm more inclined to believe it's when) we have functional models that are accurate in their output.
When it actually makes sense to replace virtually every profession with AI (it doesn't right now, not by a long shot), how are we going to deal with that as a society?
For the things AI is good at, like reading documentation, one should just get a local model and be done.
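A minimal sketch of that kind of local setup, assuming an Ollama server on its default port with a model already pulled; the model name, prompt, and docs snippet are placeholders:

```python
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to a locally hosted model via Ollama's HTTP API."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("Summarize the retry semantics in these docs: <paste docs here>"))
```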
I think pouring in as much money as big companies in the US have been doing is unwise. But when you have deep pockets, I guess you can afford to gamble.
Brace for the pop; this one's gonna be loud.
It is absolutely a bubble, but the applications that AI can be used for still remain while the models continue to get better and cheaper. Here's the actual graph:
This contradicts what I'm reading, which is that AI model costs grow with each generation rather than shrinking.
Also that is the cost to train them, not the cost to use them, which is different.
That was published a year ago, it's highly selective, and it doesn't include something like Llama 4 Maverick.
> 13.5%, slipping to about 12%
I know that 1.5% could mean hundreds of businesses, but this still seems like such a nothing burger.
The issue isn't the percentage, it's the reversal of growth. Most investors want growth so they can see returns on their upfront capital. If growth isn't occurring, that's a good sign to read the room and pull your funding.
Similar issues occurred with streaming services. Netflix is still profitable, but because the userbase isn't growing, investors and the financial world stopped seeing it as a valuable platform to invest in.
The AI companies haven't even found a viable business model yet; they're bleeding money while the user base is shrinking.
The AI data centers aren't cheap to cool down or power. Plus the "customers" are mostly other C-suites and CEOs anyway.
That is more than a 10% loss of that customer base in two months.
For any industry that is huge.
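For reference, the arithmetic behind that figure, using the 13.5% and 12% numbers quoted above:

```python
# Relative drop in the adopting share, from the figures quoted upthread.
before, after = 13.5, 12.0             # percent of businesses reporting AI use
relative_loss = (before - after) / before
print(f"{relative_loss:.1%}")          # 11.1% -- more than a 10% loss
```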
Let's not forget the US is pumping EVERYTHING into AI; 3-4% of GDP is just the AI economy. Here's hoping it comes crashing down on them.
Of course. Although AI, or more accurately LLMs, do have useful functions, they are not the Star Trek computer.
I use ChatGPT as a Grammer check all the time. It's great for stuff like that. But it's definitely not an end-all-be-all solution for productivity.
I think corporations got excited that LLMs could replace human labor... but they can't.
> Grammer
Grammar.
There's nothing AI can do that an internet pedant can't.
grammar
Mind your capitalization, fellow pedant.