[–] ByteJunk@lemmy.world -5 points 18 hours ago* (last edited 4 hours ago) (12 children)

OK, but my counterargument is that if they pass their exam with GPT, shouldn't they be allowed to practice medicine with GPT in hand?

Preferably using a model that's been specifically trained to support physicians.

I've seen doctors who are outright hazards to patients; hopefully this would limit the damage from the things they misremember...

EDIT: ITT a bunch of AI deniers who can't provide a single valid argument, but that doesn't matter because they have strong feelings. Be sure to slam the "this doesn't align with how I want my world to be" button!

[–] sukhmel@programming.dev 8 points 16 hours ago (4 children)

That might be okay if what said GPT produces were reliable and reproducible, not to mention backed by valid reasoning. It's just not there, far from it.

[–] gens@programming.dev 6 points 15 hours ago (1 children)

It's not just far off. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.

There are some (non-LLM, but still neural network) tools that can be somewhat useful, but a real doctor needs to do the job anyway, because all of them have some chance of being wrong.

[–] Tja@programming.dev 1 points 9 hours ago (1 children)

Not only is there a cure, it's already available: most models right now provide sources for their claims. Of course this requires of the user the gargantuan effort of clicking on a link, so most don't and complain instead.
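
To make that concrete, here's a rough sketch of one way to do the check (assuming the openai Python package v1+ and an API key in OPENAI_API_KEY; the model name and question are only illustrative, and this is prompted citations rather than any product's built-in web search): ask for source links, then pull them out so they can actually be clicked.

```python
# Rough sketch: prompt the model to cite source URLs, then extract the links
# from the reply so they can be opened and checked by the reader.
# Assumes the openai Python package (v1+) and an API key in OPENAI_API_KEY;
# the model name and the example question are only illustrative.
import re
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any general-purpose chat model
    messages=[
        {"role": "system", "content": "Cite a source URL for every factual claim."},
        {"role": "user", "content": "What is the first-line treatment for type 2 diabetes?"},
    ],
).choices[0].message.content

print(reply)

# The "gargantuan effort" part: the links still have to be opened and read.
for url in re.findall(r"https?://\S+", reply):
    print(url)
```

The model only supplies the links; actually reading and judging them is still on the user.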

[–] medgremlin@midwest.social 1 points 5 hours ago (1 children)

This is stupid. Fully reading and analyzing the source for accuracy and relevance can be extremely time-consuming. That's why physicians have databases like UpToDate and Dynamed that have expert (i.e. physician and PhD) analyses and summaries of the studies in the relevant articles.

I'm a 4th year medical student and I have literally never used an LLM. If I don't know something, I look it up in a reliable resource and a huge part of my education is knowing what I need to look up. An LLM can't do that for me.

[–] ByteJunk@lemmy.world 2 points 4 hours ago

And why are you assuming that a model designed to be used by physicians would not include the very same expert analysis that goes into UpToDate or Dynamed? This is something that is absolutely trivial to do; the only thing stopping it is copyright.

AI can not only look up reliable sources, it will probably be much better and faster at it than you or I or anybody.

"I'm a 4th year medical student and I have literally never used an LLM"

It was clear enough from your post, but thanks for confirming. Perhaps you should give it a try so you can understand its limitations and strengths first-hand, no? Grab one of the several generic LLMs available and ask something like:

Can you provide me with a small summary of the most up to date guidelines for the management of fibrodysplasia ossificans progressiva? Please be sure to include references, and only consider sources that are credible, reputable and peer reviewed whenever possible.

Let me know how it did. And note that it's probably a general-purpose model trained on very generic data, not at all optimized for this usage, but it's impossible to dismiss the capabilities here...
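
If you'd rather script it than paste into a chat window, here's a minimal sketch of the same experiment (assuming the openai Python package v1+ and an API key in OPENAI_API_KEY; the model name is only illustrative, not a recommendation):

```python
# Minimal sketch of running the prompt above against a generic chat model.
# Assumes the openai Python package (v1+) and an API key in OPENAI_API_KEY;
# the model name is only illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Can you provide me with a small summary of the most up to date guidelines "
    "for the management of fibrodysplasia ossificans progressiva? Please be sure "
    "to include references, and only consider sources that are credible, "
    "reputable and peer reviewed whenever possible."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; swap in whichever general-purpose model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```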
