this post was submitted on 20 Jan 2026
410 points (98.8% liked)

Technology

Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth

[–] worhui@lemmy.world 66 points 8 hours ago (1 children)

If he wanted people to like it then he should have made it do things people want it to do.

It is the new metaverse.

[–] CaptDust@sh.itjust.works 30 points 7 hours ago* (last edited 7 hours ago) (1 children)

Hell I'd almost settle for just "making it work". No disclaimers, no bullshitting. Computers should be optimized and accurate. AI is neither.

[–] worhui@lemmy.world 13 points 7 hours ago (4 children)

AI does work great at some stuff. The problem is pushing it into places it doesn't belong.

It's a good grammar and spell check. It helps me make a lot of my English look more natural.

It’s also great for troubleshooting consumer electronics.

It’s far better at search than google.

Even then it can only help, not replace folks or complete tasks.

[–] snooggums@piefed.world 33 points 6 hours ago (1 children)

It only looks good in comparison to Google search because they trashed Google search.

[–] AmbitiousProcess@piefed.social 22 points 6 hours ago

Which, of course, Google did just so you'd have to search more and see more ads.

[–] Wirlocke@lemmy.blahaj.zone 10 points 5 hours ago

Fundamentally, due to their design, LLMs are digital duct tape.

The entire history of computer science has been making compromises between efficient machine code and human readable language. LLMs solve this in a beautifully janky way, like duct tape.

But it's ultimately still a compromise: you'll never get machine accuracy from an LLM, because its sole purpose is to fulfill the "human readable" part of that deal. So its applications are revolutionary in a "how did you put together this car engine with only duct tape?" kind of way.

[–] AmbitiousProcess@piefed.social 14 points 6 hours ago (1 children)

AI does work great at some stuff. The problem is pushing it into places it doesn't belong.

I can generally agree with this, but I think a lot of people overestimate where it DOES belong.

For example, you'll see a lot of tech bros talking about how AI is great at replacing artists, but a bunch of artists who know their shit can show you every possible way it just isn't as good as human-made work. Those same artists might then say that AI is still incredibly good at programming... because they're not programmers.

It’s a good grammar and spell check.

Totally. After all, it's built on a similar foundation to existing spellcheck systems: predict the likely next word. It's good as a thesaurus too. (e.g. "what's that word for someone who's full of themselves, self-centered, and boastful?" and it'll spit out "egocentric")
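The "predict the likely next word" idea is easy to make concrete with a toy bigram model. This is a deliberately tiny sketch (nothing like an actual LLM, which uses a neural network over token context, not raw word counts), but it's the same "most likely continuation" principle:

```python
from collections import Counter, defaultdict

# Toy next-word predictor from bigram counts -- the same "most likely
# continuation" idea that spellcheck/autocomplete and LLMs build on.
# The corpus is a made-up string, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, so "cat" wins
```

Real models do this over vast corpora with learned weights rather than counts, but the output is still "the statistically plausible next thing," which is exactly why it's good at thesaurus-style lookups.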

It’s also great for troubleshooting consumer electronics.

Only for very basic, common, or broad issues. LLMs generally sound very confident and provide answers regardless of whether there's actually a strong source. Plus, they tend to ignore the context of where they source information from.

For example, if I ask it how to change X setting in a niche piece of software, it will often just make up an entire name for a setting or menu, because it just... has to say something that sounds right. Since the previous text was "Absolutely! You can fix X by...", it's just predicting the most likely term, which isn't going to be "wait, never mind, sorry, I don't think that setting even exists!" but a made-up name instead. (This is one of the reasons "thinking" versions of models perform better: the internal dialogue can reasonably include a correction, retraction, or self-questioning.)

It will pull from names and text of entirely different posts that happened to display on the page it scraped, make up words that never appeared on any page, or infer a meaning that doesn't actually exist.
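A toy sketch of that failure mode: with greedy decoding, the single most likely continuation always wins, so a rare-but-honest retraction never gets emitted. The "model" below is just a hard-coded table of next-token probabilities I invented for illustration, not a real LLM:

```python
# Greedy decoding over a made-up next-token probability table.
# Once the model has committed to a confident prefix, the plausible-
# sounding setting name dominates and the honest retraction never wins.
next_token_probs = {
    "Absolutely! You can fix X by opening": {
        "Settings > Display Options": 0.55,  # sounds right, may not exist
        "Settings > Advanced": 0.40,
        "wait, never mind, that setting doesn't exist": 0.05,  # honest, but rare
    }
}

def greedy_continue(prefix):
    """Pick the single most likely continuation -- no backtracking."""
    probs = next_token_probs[prefix]
    return max(probs, key=probs.get)

print(greedy_continue("Absolutely! You can fix X by opening"))
```

Sampling with a high temperature would occasionally surface the low-probability retraction, but by default the confident-sounding invention dominates.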

But if you have a more common question like "my computer is having x issues, what could this be?" it'll probably give you a good broad list, and if you narrow it down to RAM issues, it'll probably recommend you MemTest86.

It’s far better at search than google.

As someone else already mentioned, this is mostly just because Google deliberately made search worse. Other search engines that haven't enshittified, like the one I use (Kagi), tend to give much better results than Google, without you needing to use AI features at all.

On that note though, there is actually an interesting trend where AI models tend to pick lower-ranked, less SEO-optimized pages as sources, but still tend to pick ones with better information on average. It's quite interesting, though I'm no expert on that in particular and couldn't really tell you why other than "it can probably interpret the context of a page better than an algorithm made to do it as quickly as possible, at scale, returning 30 results in 0.3 seconds, given all the extra computing power and time."

Even then it can only help, not replace folks or complete tasks.

Agreed.

[–] bridgeenjoyer@sh.itjust.works 5 points 5 hours ago* (last edited 5 hours ago)

I find that people only think it's good when using it for something they don't already know, so then they believe everything it says. Catch-22. When they use it for something they already know, it's very easy to see how it lies and makes up shit, because it's a Markov chain on steroids and is not impressive in any way. Those billions could have housed and fed every human in a starving country, but instead we have the digital equivalent of funko pop minions.
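For what it's worth, the "Markov chain" half of that comparison is easy to demonstrate: sample the next word only from what followed the current word in the training text. Minimal sketch, with a made-up training string (real LLMs condition on far more context than one word, hence the "on steroids"):

```python
import random

# Minimal word-level Markov chain text generator.
# The training text is a throwaway string, purely for illustration.
text = "ai makes up shit and ai sounds confident and ai makes up answers"
words = text.split()

chain = {}
for prev, nxt in zip(words, words[1:]):
    chain.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Walk the chain: each word is drawn from the followers of the last."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the word never had a follower
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("ai", 6))
```

Every adjacent pair in the output is a transition that actually occurred in the training text; the chain can only remix what it has seen.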

I also find in daily life that those who use it and brag about it are, 95% of the time, the most unintelligent people I know.

Note this doesn't apply to machine learning.

[–] CaptDust@sh.itjust.works 9 points 6 hours ago* (last edited 6 hours ago)

We'll have to agree to disagree. To go through your points: spell check I don't find particularly impressive; that was solved previously without requiring the power demands of a small town. Grammar, maybe - but in my experience my "LLM powered" keyboard's suggestions are still worse than old T9 input.

I've had no luck troubleshooting anything with AI. It's often trained on old data, tries to instruct you to change settings that don't exist, or dreams up controls that might appear on "similar" hardware. Sure, you can perhaps infer a solution, but it's rarely correct on the first response. It'll happily run you through steps that are inconsequential to fixing the problem.

Finally, it might be better than indexed search NOW - but mostly because LLMs wrecked that too. I used to be able to use a couple of search operators and get directly to the information I needed - now search is reduced to sifting through slop SEO sites.

And it does all this half-assing while using enough power to justify dedicated nuclear reactors. I can't help but feel we've regressed on so many fronts.