this post was submitted on 20 Jan 2026
481 points (99.0% liked)

Technology

Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth

[–] AmbitiousProcess@piefed.social 15 points 8 hours ago (1 children)

> AI does work great, at some stuff. The problem is pushing it into places it doesn't belong.

I can generally agree with this, but I think a lot of people overestimate where it DOES belong.

For example, you'll see a lot of tech bros talking about how AI is great at replacing artists, while artists who know their shit can show you every way AI output falls short of human-made work. Yet those same artists might say that AI is still incredibly good at programming... because they're not programmers.

> It's a good grammar and spell check.

Totally. After all, it's built on a similar foundation to existing spellcheck systems: predicting the likely next word. It works well as a thesaurus too (e.g., ask "what's that word for someone who's full of themselves, self-centered, and boastful?" and it'll spit out "egocentric").
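To make the "predict the likely next word" idea concrete, here's a deliberately tiny sketch (my own illustration, not how any real LLM is built): a bigram model that just counts which word tends to follow which. Real models use neural networks over tokens and vastly more context, but the core statistical intuition is the same.

```python
# Toy illustration of "predict the most likely next word" via bigram counts.
# This is NOT how LLMs actually work internally; it's the simplest
# statistical ancestor of the idea.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent next word, or None if the word was never seen."""
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams("the cat sat on the mat and the cat ate the fish")
print(predict_next(model, "the"))  # "cat" (follows "the" twice; "mat"/"fish" once each)
```

Note that `predict_next` always answers with *something* plausible if it has seen the word before, whether or not the answer is right, which is also a decent intuition for why confident-sounding hallucinations happen.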

> It's also great for troubleshooting consumer electronics.

Only for very basic, common, or broad issues. LLMs generally sound very confident and provide answers regardless of whether there's actually a strong source behind them. Plus, they tend to ignore the context of wherever they sourced their information.

For example, if I ask it how to change X setting in a niche piece of software, it will often just invent the name of a setting or menu. It has to say something that sounds right: the previous text was "Absolutely! You can fix X by...", and the most likely continuation of that is a plausible-sounding term, not "wait, never mind, I don't think that setting even exists!" (This is one reason "thinking" versions of models perform better: the internal dialogue can reasonably include a correction, retraction, or self-questioning.)

It will pull from names and text of entirely different posts that happened to display on the page it scraped, make up words that never appeared on any page, or infer a meaning that doesn't actually exist.

But if you have a more common question like "my computer is having x issues, what could this be?" it'll probably give you a good broad list, and if you narrow it down to RAM issues, it'll probably recommend you MemTest86.

> It's far better at search than google.

As someone else already mentioned, this is mostly just because Google deliberately made search worse. Other search engines that haven't enshittified, like the one I use (Kagi), tend to give much better results than Google, without you needing to use AI features at all.

On that note, there is actually an interesting trend where AI models tend to pick lower-ranked, less SEO-optimized pages as sources, yet still tend to pick ones with better information on average. It's quite interesting, though I'm no expert on why, other than that a model can probably interpret the context of a page better than an algorithm built to rank at scale and return 30 results in 0.3 seconds, given all the extra computing power and time it gets.

> Even then it can only help, not replace folks or complete tasks.

Agreed.

[–] bridgeenjoyer@sh.itjust.works 6 points 7 hours ago* (last edited 7 hours ago)

I find that people only think it's good when using it for something they don't already know, so they believe everything it says. Catch-22. When they use it for something they already know, it's very easy to see how it lies and makes up shit, because it's a Markov chain on steroids and is not impressive in any way. Those billions could have housed and fed every human in a starving country, but instead we have the digital equivalent of Funko Pop minions.

I also find that, in daily life, those who use it and brag about it are 95% of the time the most unintelligent people I know.

Note this doesn't apply to machine learning.