this post was submitted on 15 Aug 2025
358 points (90.7% liked)

Technology

74073 readers
2959 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] krunklom@lemmy.zip 4 points 10 hours ago (2 children)

I really don't understand this perspective. I truly don't.

You see a new technology with flaws and just assume that those flaws will always be there and the technology will never progress.

Like. Do you honestly think this is the one technology that researchers are just going to say "it's fine as-is, let's just stop improving it"?

You don't understand the first thing about how it works, but people like you are SO certain that the way it is now is how it will always be, and that because there are flaws, developing it further is pointless.

I just don't get it.

[–] CubitOom@infosec.pub 20 points 9 hours ago* (last edited 7 hours ago) (1 children)

I've actually worked professionally in the field for a couple of years, since it was interesting to me originally. I've built RAG architecture backends for self-hosted FOSS LLMs, I've fine-tuned LLMs with new data, and I've even taken the opposite approach, embracing the hallucinations because I thought they could be useful for more creative tasks (I think this area still warrants research). I also enjoy TTS and STT use cases and have FOSS models for those on most of my devices.
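For anyone unfamiliar with what a RAG backend actually does: the core of it is just "embed your documents, retrieve the most similar ones to the query, and stuff them into the prompt." Here's a minimal sketch of that retrieval step. The hashed bag-of-words "embedding" is a toy stand-in for a real embedding model (which you'd normally serve next to the LLM); none of the names here come from any particular stack, it's purely illustrative.

```python
# Toy sketch of the retrieval half of a RAG pipeline.
# The embed() function is a stand-in for a real embedding model.
import math
from collections import Counter


def embed(text: str, dims: int = 256) -> list[float]:
    """Toy embedding: hash each token into a fixed-size, L2-normalized count vector."""
    vec = [0.0] * dims
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dims] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs whose embeddings have the highest dot product with the query's."""
    q = embed(query)
    scored = sorted(docs, key=lambda d: -sum(a * b for a, b in zip(q, embed(d))))
    return scored[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."
```

In a real deployment you'd swap the toy embedding for a proper model and the list scan for a vector index, but the shape of the pipeline is the same, and it's also where the failure modes live: if retrieval surfaces the wrong chunks, the LLM confidently answers from bad context.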

I'll admit that the term AI is extremely vague. It's like saying you study medicine; it's a big field. But I keep coming to the conclusion that LLMs, and predictive generative models in general, simply do not work for the use cases they're being marketed for to consumers, CEOs, and governments alike.

This "AI race" happened because DeepSeek was able to create a model that was more or less equivalent to OpenAI and Anthropic models. It should have been seen as a race between proprietary and open source, since DeepSeek is one of the more open models at that performance level. But instead it became this weird nationalist talking point in both countries.

There are a lot of things the US is actually in a race with China in, many of which would have immediate impact: renewable energy, international respect, healthcare advances, military sufficiency, human rights, food supplies, and affordable housing, just to name a few.

The promise of AI is that it can somehow eventually help in the above categories, and that's cool. But we don't need AI to make improvements in them right now.

I think AI is a giant distraction, and the talk of nationalistic races is just being used for investor buy-in.

[–] krunklom@lemmy.zip 2 points 5 hours ago

Appreciate you expanding on the earlier comment. All fair points.

[–] Randomgal@lemmy.ca 0 points 10 hours ago

Feelings don't care about logic. It's that easy.