Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

[–] SuspciousCarrot78@lemmy.world -1 points 17 hours ago* (last edited 15 hours ago) (1 children)

Agree.

I'm sorta kicking myself that I didn't sign up for Google's Med-PaLM 2 when I had the chance. Last I checked, it scored 96% on the USMLE exam and 88% on radiology interpretation / report writing.

I remember looking at the sign-up and seeing that it requested credit card details to verify identity (I didn't have a Google account at the time). I bounced... but gotta admit, it might have been fun to play with.

Oh well; one door closes, another opens.

In any case, I believe this article confirms GIGO (garbage in, garbage out). The LLMs appear to have been vastly more accurate when clinicians supplied the inputs than when lay people did.

[–] rumba@lemmy.zip 2 points 12 hours ago (1 children)

It's been a few years, but all this shit's still in its infancy. When the bubble pops and the venture capital disappears, medicine will be one of the fields that keeps using it, even though it's expensive, because it's one of the areas where it'll actually be good enough to make a difference.

[–] SuspciousCarrot78@lemmy.world 3 points 12 hours ago* (last edited 12 hours ago) (1 children)

Agreed!

I think (hope) the next application of this tech is in point-of-care testing. I recall a story of someone in Sudan(?) using a small, locally hosted LLM with vision abilities to scan handwritten doctor's notes and come up with an immunisation plan for their village. I might be misremembering the details, but the anecdote was along those lines, roughly like the sketch below.
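
For what it's worth, here's a minimal sketch of what that kind of offline workflow could look like, assuming a local Ollama install with a vision-capable model like llava. The model name, prompt, and file path are illustrative, not from the original anecdote:

```python
# Minimal sketch (my assumptions, not the original project): a locally
# hosted vision model served by Ollama reads a photographed handwritten
# note and drafts an immunisation summary, with no internet connection
# needed once the model has been pulled.
#
# pip install ollama   # plus `ollama pull llava` beforehand
import ollama

response = ollama.chat(
    model="llava",  # illustrative vision-capable model
    messages=[
        {
            "role": "user",
            "content": (
                "Transcribe this handwritten clinic note and list the "
                "vaccines mentioned, with any due dates you can read."
            ),
            "images": ["clinic_note.jpg"],  # hypothetical photo of the note
        }
    ],
)

print(response["message"]["content"])
```

Everything stays on the device, which is the whole point when there's no connectivity.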

We already have point-of-care testing for things like ultrasound... but some interpretation workflows rely on a strong net connection, IIRC. It'd be awesome to have something on-device that can be used for imaging interpretation where there's no other infrastructure.

Maybe someone can finally win that $10 million XPRIZE for the first viable tricorder (pretty sure that one wrapped up years ago? Too lazy to look)... one that isn't smoke and mirrors like Theranos.

[–] rumba@lemmy.zip 2 points 12 hours ago

For the price of ultrasound equipment, I bet someone could manage to integrate old-school satellite or... grr, Starlink... data.