this post was submitted on 09 Feb 2026
558 points (98.9% liked)

Technology


Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

[–] rumba@lemmy.zip 22 points 20 hours ago (24 children)

Chatbots make terrible everything.

But an LLM properly trained on sufficient patient data, metrics, and outcomes, in the hands of a decent doctor, can cut through bias, catch things that might otherwise fall through the cracks, and pack thousands of doctors' worth of up-to-date CME into a thing that can look at a case and go, you know, you might want to check for X. The right model can be fucking clutch at pointing out nearly invisible abnormalities on an X-ray.

You can't ask an LLM trained on general bullshit to help you diagnose anything. You'll end up with 32,000 Reddit posts worth of incompetence.

[–] XLE@piefed.social 11 points 16 hours ago* (last edited 16 hours ago) (14 children)

But an LLM properly trained on sufficient patient data metrics and outcomes in the hands of a decent doctor can cut through bias

  1. The belief that AI is unbiased is a common myth. In fact, it can easily and covertly import existing biases, like systemic racism in treatment recommendations.
  2. Even the AI engineers who developed the training process could not tell you where the bias in an existing model would be.
  3. AI has been shown to make doctors worse at their jobs, and those are the same doctors who need to provide its training data.
  4. Even if 1, 2, and 3 were all false, we all know AI would be used to replace doctors, not supplement them.
[–] thebazman@sh.itjust.works 0 points 13 hours ago (2 children)

I don't think it's fair to say that "AI has been shown to make doctors worse at their jobs" without further details. The source you provided says that after a few months of using AI to detect polyps, the doctors performed worse without the AI than they had originally.

It's not something we should handwave away and say it's not a potential problem, but it is a different problem. I bet people who use calculators perform worse when you take the calculators away; does that mean we should never use calculators? Or any tools, for that matter?

If I have a better chance of getting an accurate cancer screening because a doctor is using a machine learning tool, I'm going to take that option. Note that these screening tools are completely different from the technology most people mean when they say AI.

[–] XLE@piefed.social 3 points 13 hours ago (1 children)

Calculators are programmed to respond deterministically to math questions. You don't have to feed them a library of math questions and answers for them to function. You don't have to worry about wrong answers poisoning that data.

LLMs, by contrast, are simply word predictors, and as such you can poison them with bad data, whether accidental or intentional bias or errors. In other words, that study points to the first step in a vicious cycle that we don't want to start.

[–] thebazman@sh.itjust.works 2 points 12 hours ago* (last edited 12 hours ago) (1 children)

As I said in my comment, the technology these cancer screening tools use isn't an LLM; it's a completely different technology, specifically trained on scans to find cancer.

I don't think it would have the same feedback loop of bad training data, because you can easily verify the results. The AI tool sees cancer in a scan? Verify with the next test. It's a pretty easy binary check that won't be affected by poor doctor performance in reading the same scans.
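The "verify with the next test" loop amounts to scoring the screening model against independent follow-up results. A minimal sketch, with entirely made-up flags and outcomes (the function name and data below are illustrative, not from any real tool):

```python
def screening_metrics(flags, confirmed):
    """Score a screening model's flags against follow-up test results.

    flags:     model said "looks like cancer" for each scan
    confirmed: follow-up test actually found cancer (ground truth)
    Both are parallel lists of booleans; the data layout is hypothetical.
    """
    tp = sum(f and c for f, c in zip(flags, confirmed))      # caught cancers
    fp = sum(f and not c for f, c in zip(flags, confirmed))  # false alarms
    fn = sum(not f and c for f, c in zip(flags, confirmed))  # missed cancers
    tn = sum(not f and not c for f, c in zip(flags, confirmed))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # caught / all real cases
    specificity = tn / (tn + fp) if tn + fp else 0.0  # cleared / all healthy
    return sensitivity, specificity

# Toy data: six scans, model flags vs. follow-up confirmation.
flags     = [True, True, False, False, True, False]
confirmed = [True, False, False, False, True, True]
sens, spec = screening_metrics(flags, confirmed)  # 2/3 and 2/3 here
```

The point being made above is that `confirmed` comes from an independent follow-up test, so this score stays meaningful even if radiologists' own reading of the same scans slips.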

I'm not a medical professional, so I could be off on that chain of events, but this technology isn't an LLM. It suffers from the current marketing hype, where everyone calls everything AI, but it's a different technology with different pros and cons and different potential failure modes.

I do agree that the whole "AI doesn't have bias" claim is BS. It has the same bias its training data has.

[–] XLE@piefed.social 2 points 11 hours ago

You're definitely right that image-processing AI doesn't work in the linear manner that text processing does, but the training and inference are similarly fuzzy and prone to false positives and negatives. (An early AI model incorrectly labeled dogs as wolves because it keyed on the snowy white backgrounds it had learned to associate with wolves.) And unless the model starts perfect and stays perfect, you need well-trained doctors to correct it, which, apparently, the model discourages.

[–] pkjqpg1h@lemmy.zip 1 points 11 hours ago

Calculators are precise: you'll always get the same result for the same input, and you can trace and reproduce every step of the process.

Chatbots are black boxes: you may get different results for the same input, and you can't trace or reproduce the process.
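That contrast can be made concrete with a toy sketch. Nothing here reflects any real model: the "calculator" is a pure function, while the "chatbot" samples from weighted candidates (the prompt, answers, and weights are all invented, echoing the article's example):

```python
import random

def calculator(expr):
    # Deterministic: the same input always yields the same output.
    a, op, b = expr
    return a + b if op == "+" else a * b

def chatbot_style(prompt, rng):
    # Toy stand-in for sampling-based generation: the same prompt can
    # yield different outputs on different calls. Candidates and weights
    # are invented for illustration only.
    candidates = ["seek emergency care", "lie down in a dark room"]
    return rng.choices(candidates, weights=[0.7, 0.3])[0]

# Reproducible: call it a thousand times, same answer every time.
assert calculator((2, "+", 3)) == 5

# Not reproducible in general: repeated sampling over the *same*
# prompt can disagree with itself across calls.
rng = random.Random(0)
answers = {chatbot_style("sudden severe headache", rng) for _ in range(50)}
```

Seeding the generator makes a single run repeatable, but it doesn't change the underlying point: the mapping from prompt to answer is a probability distribution, not a function.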
