this post was submitted on 13 May 2026
149 points (97.5% liked)
LinkedinLunatics
6826 readers
202 users here now
A place to post ridiculous posts from linkedIn.com
(Full transparency.. a mod for this sub happens to work there.. but that doesn't influence his moderation or laughter at a lot of posts.)
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
Which is why your doctor should use it as a tool and validate the results. You know, do their job.
Y'all are just fucking binary. How do you think medical community members work now? They use a shitty search engine or portal to look up material, and yes, some of it will be garbage they need to wade through.
But God forbid they have a tool that puts that information into a cited overview to supplement a tricky diagnosis. The prejudice and fake workflows that y'all invent are crazy. Looking for little edge cases everywhere, hoping to catch the AI in a mistake.
I have no problem with them using search engines. They can vet and choose answers from reliable sources. From an LLM, it's anybody's guess if anything it pulled up is correct, and a less experienced doctor could be misled into making a dangerous mistake.
Riiiiiiiiiight, LLMs don't cite sources and the portals written in the 90s for journals solve all of that.
It's so amazing to watch you all invent these crazy scenarios, where you've chosen the absolute lowest bar you can find. As if some layman who has no clue how to use this tool is working on some free Claude account because you read about one shitty doctor or lawyer fucking up. It's honestly sad seeing these hoops to jump through.
Professional tools, run by some of the most educated type-A professionals on the planet, minimize these risks by providing defaults and interfaces along with education.
FFS, they can (and will) kill you accidentally with far simpler shit that mostly can't be mitigated away. But yeah, because LLMs can be used poorly by morons, they're worthless 🙄.
When was the last time you used them? They can provide sources for pretty much everything they say and that source usually also contains said thing too.
But even if not, even two years ago it was already good because you had a second look, a different perspective. A medical professional can either know a little about everything or a lot about next to nothing. It should be really obvious how such a tool can help, even if it cannot reach expert level.
"Don't worry, when you ask it for sources it gives you some. Sometimes they are even real! And sometimes the real ones even say the thing they were supposed to have said from the AI!"
Fucking lunacy.
Using AI as a tool to find additional information? Sure, could be doable maybe.
Asking sycophantic "you're absolutely correct" machines for a second opinion? Absolutely not!
Hoffman is advocating for the latter.
Imagine believing that they'll use general-purpose free ChatGPT. Just amazing, these scenarios you all invent. I can't tell if it's just straight blind prejudice or you all really don't understand how it can integrate into tooling with very specific models.
Just wild what people have cooked up in their mental model.