this post was submitted on 13 May 2026
149 points (97.5% liked)
LinkedinLunatics
6826 readers
202 users here now
A place to post ridiculous posts from LinkedIn.com
(Full transparency: a mod for this sub happens to work there, but that doesn't influence his moderation or laughter at a lot of posts.)
founded 2 years ago
you are viewing a single comment's thread
Y'all will screech but having a giant ass search engine that is able to process patient data in context is incredibly useful.
Y'all are just prejudiced. Professionals will be using these tools, and they have already been providing excellent real-world results. Honestly, if you don't understand how the medical community is using AI/ML with real validated results, you should be keeping your mouth shut on the topic.
Yeah. Wouldn’t that be nice. That’s not what an LLM is, however.
Why did you assume AI = only LLM? LLMs are just one type of AI, and not the type most often used in medicine
Reid Hoffman is one of those lunatics who thinks LLMs are intelligent. His entire thing is peddling that shit.
LLMs being “search engines” is also a really popular misconception.
I didn't know the guy, I was just addressing the general point of using AI in medicine tbh 🤷♀️
You're completely correct that LLMs are not search engines, of course
~~So then I guess you shouldn't have led with:~~
~~> but having a giant ass search engine~~
~~Say what you mean and mean what you say or you're just spouting shit that no one will listen to.~~
that wasn't me?! I was just replying to the bit about conflating AI with LLMs, like I was saying 😭
Oh jeeze, my apologies, poor attention paying on my part. Please carry on.
👌👍
Hmmm a 1 day-old account rushing in to defend inappropriate LLM use 🤔
You can only get banned for an opinion so much before moving on to the next one 🤷♂️.
Nice, ban evasion as well 👍🏻
Yup. Account age is an utterly pointless and shitty evaluation tool. Unless you're just looking to attack the person rather than the idea 🤷♂️. Just creates little shitty echo chambers to attach yourself to.
And the Fediverse makes it literally impossible to stop. Tech bros really not thinking through how people think, act, and build trust.
Did it feel good reporting me? Do you feel effective?
Hi Reid
If LLMs didn't hallucinate I'd fully agree with you
Which is why your doctor should use it as a tool and validate the results. You know, do their job.
Y'all are just fucking binary. How do you think medical community members work now? They use a shitty search engine or portal to look up material, and yes, some of it will be garbage they need to wade through.
But God forbid they have a tool that puts that information into a cited overview to supplement a tricky diagnosis. The prejudice and fake workflows that y'all invent are crazy. Looking for little edge cases everywhere, hoping to catch the AI in a mistake.
I have no problem with them using search engines. They can vet and choose answers from reliable sources. From an LLM, it's anybody's guess if anything it pulled up is correct, and a less experienced doctor could be misled into making a dangerous mistake.
Riiiiiiiiiight, LLMs don't cite sources, and the portals written in the '90s for journals solve all of that.
It's so amazing to watch you all invent these crazy scenarios, where you've chosen the absolute lowest bar you can find. As if some layman who has no clue how to use this tool is working on some free Claude account, because you read about one shitty doctor or lawyer fucking up. It's honestly sad seeing these hoops to jump through.
Professional tools, run by some of the most educated type-A professionals on the planet, minimize these risks by providing defaults and interfaces along with education.
FFS, they can (and will) kill you accidentally with far simpler shit that mostly can't be mitigated away. But yeah, because LLMs can be used poorly by morons, they're worthless 🙄.
When was the last time you used them? They can provide sources for pretty much everything they say and that source usually also contains said thing too.
But even if not, even two years ago it was already good, because you had a second look, a different perspective. A medical professional can either know a little about everything or a lot about next to nothing. It should be really obvious how such a tool can help, even if it cannot reach expert level.
"Don't worry, when you ask it for sources it gives you some. Sometimes they are even real! And sometimes the real ones even say the thing they were supposed to have said from the AI!"
Fucking lunacy.
Using AI as a tool to find additional information? Sure, could be doable maybe.
Asking sycophantic ”you’re absolutely correct” machines for second opinion? Absolutely not!
Hoffman is advocating for the latter.
Imagine believing that they'll use general-purpose free ChatGPT. Just amazing, these scenarios you all invent. I can't tell if it's just straight blind prejudice, or if you all really don't understand how it can integrate into tooling with very specific models.
Just wild what people have cooked up in their mental model.