If you seriously think the doctor's notes about the patient's symptoms don't include the doctor's diagnostic instincts, then I can't help you.
The symptom questions ARE the diagnostic work. Your doctor doesn't ask you every possible question. You show up and say "my stomach hurts". The doctor asks questions to rule things out until only one likely diagnosis remains, then they stop and prescribe you a solution if one is available. They don't just ask a random set of questions. If you give the AI the notes JUST BEFORE the diagnosis and treatment, it's completely trivial to diagnose because the diagnostic work is already complete.
God, you AI people literally don't even understand what skill, craft, trade, and art are, and you think you can emulate them with a text predictor.
You're over-egging it a bit. A well-written SOAP note, HPI, etc. should distill to a handful of possibilities, that's true. That's the point of them.
The fact that the LLM can interpret those notes 95% as well as a medically trained individual (per the article) and come up with the correct diagnosis is being a little undersold.
That's not nothing. Actually, that's a big fucking deal (tm) if you think through the edge-case applications. And remember, these are just general LLMs - and pretty old ones at that (ChatGPT 4 era). We're not even talking about a medical domain-specific LLM.
Yeah, I think there's more here to think on.
If you think a word predictor is the same as a trained medical professional, I am so sorry for you...
Feel sorry for yourself. Your ignorance and biases are on full display.
Dude, I hate AI. I'm not an AI person. Don't fucking classify me as that. You're the one not reading the article and, subsequently, the study. It didn't say it included the doctor's diagnostic work. The study wasn't about whether LLMs are accurate for doctors; that's already been studied. The study this article talks about literally says that. Apparently LLMs are passing medical licensing exams almost 100% of the time, so it definitely has nothing to do with diagnostic notes. This study was about using LLMs to diagnose yourself. That's it. That's the study. Don't spread bullshit. It's tiring debunking stuff that is literally two sentences in.
https://www.nature.com/articles/s41591-025-04074-y