Not only is there a cure, it's already available: most models right now provide sources for their claims. Of course, this requires of the user the gargantuan effort of clicking on a link, so most don't and complain instead.
This is stupid. Fully reading and analyzing a source for accuracy and relevance can be extremely time-consuming. That's why physicians have databases like UpToDate and DynaMed that contain expert (i.e., physician and PhD) analyses and summaries of the studies in the relevant articles.
I'm a fourth-year medical student and I have literally never used an LLM. If I don't know something, I look it up in a reliable resource, and a huge part of my education is knowing what I need to look up. An LLM can't do that for me.
And why are you assuming that a model designed to be used by physicians would not include the very same expert analysis that goes into UpToDate or DynaMed? This is absolutely trivial to do; the only thing stopping it is copyright.
AI can not only look up reliable sources, it will probably do so much better and faster than you or me or anybody.
It was clear enough from your post, but thanks for confirming. Perhaps you should give it a try so you can understand its limitations and strengths first-hand, no? Grab one of the several generic LLMs available and ask something like:
Let me know how it did. And note that it's probably a general-purpose model trained on very generic data, not at all optimized for this usage, but it's impossible to dismiss the capabilities here...
Some of my classmates used ChatGPT to summarize reading assignments, and it garbled the information so badly that they got things wrong on in-class assessments. Aside from the hallucinations and jumbled garbage output, I refuse to use AI on ethical grounds, given its environmental and societal impacts, unless there is absolutely no alternative.
As far as I'm concerned, the only role for LLMs in medicine is to function as a scribe to reduce the burden of documentation, and that's only because everything the idiot machines vomit up has to be checked before being committed to the medical record anyway. Generative AI is a scourge on society and an absolute menace in medicine.
Except that your opinion, and mine, are irrelevant.
New AI models, and new studies evaluating them, will continue to be produced. If and when those studies show that using AI leads to better outcomes, it will become the gold standard. That's just how evidence-based medicine works.
I just don't see a world in which it becomes the gold standard prior to getting so horribly enshittified that no medical institution can justify the cost of the accursed thing.