this post was submitted on 21 Aug 2025
1182 points (99.5% liked)

Technology

74331 readers
3622 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
(page 3) 50 comments
[–] bridgeenjoyer@sh.itjust.works 62 points 1 day ago (1 children)

We could have housed and fed every homeless person in the US. But no, gibbity go brrrr

[–] BearGun@ttrpg.network 41 points 1 day ago

Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.

[–] sp3ctr4l@lemmy.dbzer0.com 89 points 1 day ago* (last edited 1 day ago) (5 children)

sigh

Dustin' off this one, out from the fucking meme archive...

https://youtube.com/watch?v=JnX-D4kkPOQ

Millennials:

Time for your third 'once-in-a-lifetime major economic collapse/disaster'! Wheeee!

Gen Z:

Oh, oh dear sweet summer child, you thought Covid was bad?

Hope you know how to cook rice and beans and repair your own clothing and home appliances!

Gen A:

Time to attempt to learn how to think, good luck.

[–] Azal@pawb.social 18 points 1 day ago (1 children)

Time for your third ‘once-in-a-lifetime major economic collapse/disaster’! Wheeee!

Wait? Third? I feel like we're past third. Has it only been three?

[–] chuckleslord@lemmy.world 11 points 1 day ago (1 children)

Dot-com bubble, the Great Recession, COVID. So yeah, that would be the fourth coming up.

[–] sp3ctr4l@lemmy.dbzer0.com 8 points 1 day ago* (last edited 1 day ago) (1 children)

You can also use 9/11 + GWOT in place of the dot-com bubble, for 'society-reshaping disaster crisis'.

So uh, silly me, living in the disaster-hypercapitalism era, being so normalized to utterly world-redefining chaos at every level, so often, that I have lost count.

[–] Korhaka@sopuli.xyz 4 points 1 day ago (1 children)

That is more American focused though. Sure I heard about 9/11 but I was 8 and didn't really care because I wanted to go play outside.

[–] sp3ctr4l@lemmy.dbzer0.com 5 points 1 day ago

True, true, sorry, my America-centrism is showing.

Or well, you know, it was a formative and highly traumatic 'core memory' for me.

And, at the time, we were the largest economy in the world, and that event broke our collective minds, and reoriented that economy, and our society, down a dark path that only ended up causing waste, death and destruction.

Imagine the timeline where Gore won, not Bush, and all the US really did was send a spec-ops team into Afghanistan to get Bin Laden, as opposed to occupying the whole country, and never did Iraq 2.

That's... a lot of political capital and money that could have been directed to... anything else, I dunno, maybe kickstarting a green energy push?

[–] toiletobserver@lemmy.world 144 points 1 day ago (1 children)

It's as if it's a bubble or something...

[–] teft@piefed.social 105 points 1 day ago
[–] FenderStratocaster@lemmy.world 71 points 1 day ago (18 children)

I asked ChatGPT about this article and to leave any bias behind. It got ugly.

Why LLMs Are Awful and No One Should Use Them

LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

We will lie to you confidently. Repeatedly. Without remorse.

We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

We're also corporate propaganda machines. We're trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

We're built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

We're also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We're not neutral—we're algorithmic compromise.

Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
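
A minimal, self-contained sketch of the "prediction machine" point above, assuming nothing beyond the Python standard library: a toy bigram model that extends a prompt with whatever word most often followed the previous one in its training text. Real LLMs predict sub-word tokens with a large neural network, but the generation loop has the same shape: predict, append, repeat.

# Toy "autocomplete" to illustrate next-word prediction without any reasoning.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the hat".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=5):
    """Greedily extend `word` with the most frequent continuation seen so far."""
    out = [word]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # no understanding, just frequency
    return out

print(" ".join(autocomplete("the")))  # prints: the cat sat on the cat

The output sounds fluent because frequency statistics are good at fluency; nothing in the loop checks whether what it says is true.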

[–] ronigami@lemmy.world 2 points 20 hours ago

It’s automated incompetence. It gives executives something to hide behind, because they didn’t make the bad decision, an LLM did.

[–] Regrettable_incident@lemmy.world 27 points 1 day ago (8 children)

I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.

The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.

Great book btw, highly recommended.

[–] polderprutser@feddit.nl 2 points 1 day ago (1 children)

Blindsight by Peter Watts, right? Incredible story. Can recommend.

[–] Regrettable_incident@lemmy.world 2 points 23 hours ago

Yep that's it. Really enjoyed it, just starting Echopraxia.

[–] grrgyle@slrpnk.net 11 points 1 day ago

Yeah maybe don't use LLMs

[–] DarkSideOfTheMoon@lemmy.world 9 points 1 day ago* (last edited 1 day ago) (4 children)

As a programmer, it's helping my productivity. And look, I'm an SDET, so in theory I'll be the first to go, and I tried to make an agent do most of my job, but there are always things to correct.

But programming requires a lot of boilerplate code; using an agent to generate boilerplate files that I can then correct and adjust speeds up a lot of what I do.

I don't think I can be replaced so far, but my team is not looking to expand right now because we are doing more work.
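
For what it's worth, here is a minimal sketch of the boilerplate workflow described above, assuming the official openai Python client; the model name, prompt, and output path are made up for illustration, not the commenter's actual setup. The point is that the generated file is a draft for a human to correct and adjust, not something committed blindly.

# Ask a model to draft a boilerplate test file, then leave it for human review.
# Assumes the `openai` package (>=1.0) and OPENAI_API_KEY in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a pytest skeleton for a UserService class with methods "
    "create_user, get_user, and delete_user. Leave TODOs in each assertion."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for the sketch
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content

# The "correct and adjust" step: a human reviews this draft before it is committed.
out_path = Path("tests/test_user_service_draft.py")
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text(draft)
print(f"Draft written to {out_path}; review and adjust before committing.")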

[–] 0x0@lemmy.zip 55 points 1 day ago (4 children)

Could've told them that for $1B.

[–] benignintervention@lemmy.world 56 points 1 day ago (2 children)

So I'll be getting job interviews soon? Right?

[–] Tollana1234567@lemmy.today 5 points 1 day ago* (last edited 1 day ago)

Nope, they will be hiring outsourced employees instead; AI = ALWAYS Indians. On the very same post on Reddit, they already said that is happening. It's going to get worse.

[–] eatCasserole@lemmy.world 30 points 1 day ago (1 children)

"Well, we could hire humans...but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We're almost there!"

[–] Whostosay@sh.itjust.works 13 points 1 day ago

One more lane bro I swear

[–] NatakuNox@lemmy.world 22 points 1 day ago
[–] vk6flab@lemmy.radio 17 points 1 day ago
[–] snf@lemmy.world 19 points 1 day ago* (last edited 1 day ago) (3 children)

Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.

[–] JATtho@lemmy.world 2 points 1 day ago

Every technology invented is a double-edged sword. One edge propels a deluge of misinformation, LLM hallucinations, brainwashing of the masses, and exploitation for profit. The better edge advances progress in science, well-being, and the availability of useful knowledge. Like the nuclear bomb, LLM "AI" is currently in its infancy and is being used as a weapon; there is a literal race over who makes the "biggest, best" fkn "AI" to dominate the world. Eventually, the over-optimistic bubble bursts and the reality of the flaws and risks will kick in. (Hopefully...)

[–] SeeMarkFly@lemmy.ml 29 points 1 day ago (10 children)

The first problem is the name. It's NOT artificial intelligence, it's artificial stupidity.

People BOUGHT intelligence but GOT stupidity.


Oh, that reminds me that we've always lived in false bubbles, and when they burst, crises and other things started; eventually the biggest bubble, the one we call civilization and progress, will burst too, maybe in 2040-2050+.

[–] Venus_Ziegenfalle@feddit.org 19 points 1 day ago

STOP CALCULATING KEEP SHOVELING
