thebazman

joined 1 year ago
[–] thebazman@sh.itjust.works 2 points 12 hours ago* (last edited 12 hours ago) (1 children)

As I said in my comment, the technology they use for these cancer screening tools isn't an LLM; it's a completely different technology, specifically trained on scans to find cancer.

I don't think it would have the same feedback loop of bad training data, because you can easily verify the results. AI tool sees cancer in a scan? Verify with the next test. It's a pretty straightforward binary check that won't be affected by poor doctor performance in reading the same scans.

I'm not a medical professional, so I could be off on that chain of events, but this technology isn't an LLM. It suffers from the marketing hype right now, where everyone is calling everything AI, but it's a different technology with different pros and cons, and different potential failure modes.

I do agree that the whole "AI doesn't have bias" claim is BS. It has the same biases its training data has.

[–] thebazman@sh.itjust.works 0 points 13 hours ago (4 children)

I don't think it's fair to say that "AI has been shown to make doctors worse at their jobs" without further details. The source you provided says that after a few months of using the AI to detect polyps, the doctors performed worse when they couldn't use the AI than they did originally.

It's not something we should handwave away and say it's not a potential problem, but it is a different problem. I bet people who use calculators perform worse when you take the calculators away; does that mean we should never use calculators? Or any tools, for that matter?

If I have a better chance of getting an accurate cancer screening because a doctor is using a machine learning tool, I'm going to take that option. Note that these screening tools are completely different from the technology most people refer to when they say "AI."