Precisely. From Cory Doctorow's latest, very insightful essay on AI, where he talks about the promise of AI replacing 9 out of 10 radiologists:
I don't think it's fair to compare LLM code generation to machine vision in this way. These are very different "AI"s. Not necessarily disagreeing with Doctorow, but this is an important distinction.
How the machines work does not matter. The situation is using a machine to replace human expertise while ensuring a human still takes the blame for outcomes that human isn't actually responsible for. It is not the owning class who is at risk for their machines' mistakes; it is the owning class's wage slaves who are at risk.
My understanding is that tumor-detecting machine vision is generally considered useful in addition to the radiologist's expertise. It basically outputs "yes", "maybe", or "no" (rough sketch of what I mean below), which respects that expertise far more than generating approximately-right code that the coder now has to validate.
This is why I wouldn't equate these tools. LLM code generation is marketed to do much more than machine vision for tumor detection.
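To make the contrast concrete: a triage tool like that boils down to bucketing a classifier's confidence score into a recommendation the radiologist reviews. This is a minimal hypothetical sketch, with invented thresholds and names, not any real product's API:

```python
# Hypothetical illustration: a tumor classifier's confidence score
# bucketed into the "yes"/"maybe"/"no" output described above.
# Thresholds are invented for the sake of the example.

def triage(tumor_probability: float) -> str:
    """Map a model's probability estimate to a recommendation
    that a radiologist reviews alongside the scan itself."""
    if tumor_probability >= 0.85:
        return "yes"    # strong signal: flag for priority review
    if tumor_probability >= 0.30:
        return "maybe"  # ambiguous: radiologist judgment decides
    return "no"         # weak signal: still read by a human


for score in (0.92, 0.55, 0.07):
    print(f"score={score:.2f} -> {triage(score)}")
```

A three-way flag like that narrows the human's job without pretending to replace it; open-ended code generation hands the human an unbounded artifact to verify instead.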
Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:
In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision as something that helps humans build a better world for humans. But we've seen, time and time again, across countless fields, that it never works out that way.
That's because this isn't a problem with the technology of AI, but with the fucked up sociotechnical and economic systems that govern how this tech is used, who gets to use it, who it gets used on, whose consent is required for those uses, and most significant of all: who gets to profit?
The kind of AI doesn't matter in this situation. Hell, it could be a magic talking rock™ and it would change nothing about management using a person to avoid blaming their shiny and expensive new toy.
"this is an important distinction"
it really isn't