this post was submitted on 24 Aug 2025
111 points (87.2% liked)

Technology

[–] phutatorius@lemmy.zip 2 points 5 hours ago

What's the false positive rate? You can dial up the sensitivity of any test if you don't mind 10,000 people having unnecessary cancer surgery for every real case that's detected.
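To make that point concrete, here is a rough, purely illustrative sketch (hypothetical numbers, not from the article or the study) of how even a highly sensitive test can produce mostly false alarms when the condition is rare:

```python
# Illustrative only: hypothetical numbers, not taken from the article or study.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Chance that a positive result is a true case (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 99% sensitive, 90% specific, disease present in 1 of every 1000 people screened:
ppv = positive_predictive_value(sensitivity=0.99, specificity=0.90, prevalence=0.001)
print(f"PPV: {ppv:.1%}")  # ~1.0% -- roughly 100 false alarms per real case detected
```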

[–] handsoffmydata@lemmy.zip 1 points 9 hours ago

I thought AI had trouble spelling Pennsylvania

[–] Octavio@lemmy.world 7 points 21 hours ago (1 children)

Good, use it for that. It fucking sucks at art.

[–] TheHotze@lemmy.world 9 points 20 hours ago (1 children)

Different kind of AI. This is the very useful analytical kind.

[–] boonhet@sopuli.xyz 3 points 18 hours ago

Yes, this is one of the kinds of AI I love

The plagiarism machines are the kind most of us can't stand

[–] GreenKnight23@lemmy.world 3 points 21 hours ago (1 children)

this is bullshit.

the study was performed by Navinci Diagnostics, which has a vested interest in the use of technological diagnostic tools like AI.

the only way to truly identify cancer is through physical examination and testing. instead of wasting resources on AI we should improve early detection by making tests more efficient, so patients can test more often and more cheaply.

[–] napkin2020@sh.itjust.works 1 points 5 hours ago* (last edited 5 hours ago) (1 children)

Won't this sort of technology help people get regular tests more often?

[–] GreenKnight23@lemmy.world 2 points 5 hours ago* (last edited 5 hours ago)

it won't because it's an illusion of a test with unverifiable results.

Imagine this: you want to know if you have cancer. You can get results from a biopsy, blood tests, and an MRI, all validated by specialist review; it will take 3 months to collect and validate the results. OR, you can run all the same tests and have results in 24 hours, but they aren't validated by a specialist.

so the question is, why does it take 3 months and how can we make it shorter without decreasing quality, validity, or consistency?

AI is not the answer.

[–] kalkulat@lemmy.world 20 points 1 day ago (3 children)

From the article: " All 232 men in the study were assessed as healthy when their biopsies were examined by pathologists. After less than two-and-a-half years, half of the men in the study had developed aggressive prostate cancer...."

HALF? I'd suggest staying away from that study ... either they don't know what they're doing, or some AI made up that article...

[–] brendansimms@lemmy.world 14 points 23 hours ago (2 children)

From the peer-reviewed paper: "This study examined if artificial intelligence (AI) could detect these morphological clues in benign biopsies from men with elevated prostate-specific antigen (PSA) levels to predict subsequent diagnosis of clinically significant PCa within 30 months." So yes, these were men who all had high cancer risk.

[–] phutatorius@lemmy.zip 2 points 5 hours ago

And the risk of prostate cancer from age 60 on is quite high and increases with age, even if you're not in a high risk group (other than age).

[–] kalkulat@lemmy.world 2 points 6 hours ago

OK, thanks for that clarification. I was thrown off by 'assessed as healthy...'

[–] Hawk@lemmy.dbzer0.com 6 points 1 day ago (2 children)

Maybe they specifically picked men with increased risk?

Half sounds pretty nuts otherwise.

[–] absentbird@lemmy.world 3 points 22 hours ago

Yes they did. It says so in the article.

[–] callouscomic@lemmy.zip 3 points 1 day ago
[–] SugarCatDestroyer@lemmy.world 1 points 23 hours ago

Well, it's likely that AI is creating these articles. We're just like in 1984...

[–] PmMeFrogMemes@lemmy.world 93 points 2 days ago (3 children)

This is what machine learning is supposed to be. Specialized models that solve a specific problem. Refreshing to read about some real AI research

[–] phutatorius@lemmy.zip 1 points 5 hours ago

Yeah, there are some useful applications for ML. Less so for LLMs.

[–] phoenixz@lemmy.ca 6 points 23 hours ago

Yeah, this is a typical place for AI to actually shine, and we hear almost nothing about it because making fake porn videos of your daughter-in-law is somehow more important

[–] mintiefresh@piefed.ca 17 points 1 day ago (1 children)

I feel like in an ideal world, people would be using AI to improve the quality of their work, rather than being replaced by AI itself.

[–] SugarCatDestroyer@lemmy.world 5 points 23 hours ago* (last edited 23 hours ago)

We live in a world where the strong take the last food from the weak in order to live even more luxuriously, because luxury, so to speak, is built on stolen or outright slave labor.

In short, the rich are rich only because they exploit the poor, or outright slaves; otherwise they would be beggars or middle class.

[–] salty_chief@lemmy.world 29 points 2 days ago (5 children)

Who is downvoting progress in cancer identification?

[–] brendansimms@lemmy.world 8 points 23 hours ago

After reading the actual published paper referenced in the article, I would downvote the article because the title is clickbaity and does not reflect the conclusions of the paper. The title suggests that AI could replace pathologists, or that pathologists are inept. This is not the case. A better title would be "Pathologists use AI to determine if biopsied tissue samples contain markers for cancerous tissue that is outside the biopsied region."

[–] Reverendender@sh.itjust.works 34 points 1 day ago (2 children)

Lemmings with knee-jerk reactions to anything AI related

[–] jet@hackertalks.com 11 points 1 day ago (2 children)

Ohhh, this 100%

I just posted a plaque imaging study that used AI analysis, showing people on the carnivore diet reversing plaque buildup after more than a year of a strict ketogenic diet.

People I could have offended

  • AI
  • diet zealots
  • anti-keto reactionaries
  • CICO advocates

But instead I used a name without any of the trigger words, and they missed it.

We could rewrite this headline as:

Advanced identification techniques let doctors diagnose cancer earlier, saving lives!

[–] acosmichippo@lemmy.world 2 points 21 hours ago* (last edited 21 hours ago) (1 children)

where is this study? i did a brief look through your post history but you post so much keto/carnivore stuff it’s hard to spot. it's easy to jump on the downvote persecution bandwagon without linking to it.

[–] jet@hackertalks.com 1 points 21 hours ago* (last edited 21 hours ago) (2 children)

The original study: [Paper] - Plaque Begets Plaque, ApoB Does Not: Longitudinal Data From the KETO-CTA Trial - 2025

The update with new AI imaging: New KETO-CTA Data - Clarification and Update on Cleerly

These didn't really get downvoted because the trigger words were avoided and the communities are actively pruned of disinterested people. If you are looking for downvote brigading, I could dig up examples for you.

[–] acosmichippo@lemmy.world 2 points 6 hours ago (1 children)

I just posted a plaque imaging study that used AI analysis, showing people on the carnivore diet reversing plaque buildup after more than a year of a strict ketogenic diet.

where does it say that in the study you linked?

as far as i can tell it says Plaque progression occurred, just wasn’t linked to ApoB or LDL-C levels.

[–] jet@hackertalks.com 1 points 6 hours ago* (last edited 6 hours ago) (1 children)

Right, so the paper using the Cleerly model only showed one person reversing plaque, but the two new AI models, which don't have an artificial floor, do show 30% plaque reversal. That's the second reference, the YouTube talk.

The interesting thing here is that this group of 100 people following a strict ketogenic diet, mostly carnivore, had imaging done at the beginning and the end of a year, so we can apply any model we like to the data. In 2 of the 3 AI imaging models, 30% of the people show plaque regression.

The benefit of AI here is that it makes the analysis quantitative, assuming the model is stable. When humans do the scoring, there's always a question about consistency and bias in the outcomes.

As far as I'm aware, plaque regression is basically unheard of in any literature outside of case studies.

[–] acosmichippo@lemmy.world 2 points 6 hours ago (1 children)

are the ai models part of a peer reviewed update to the paper?

[–] jet@hackertalks.com 1 points 6 hours ago (1 children)

The paper hasn't been updated; the Cleerly AI is part of the original paper.

The updated model data is presented in preliminary form in the lecture; the papers are still pending.

[–] acosmichippo@lemmy.world 2 points 5 hours ago (1 children)

What does Dave Feldman have to do with the study and how did he get these preliminary results?

[–] jet@hackertalks.com 1 points 5 hours ago (1 children)

he funded the study, organized it, sourced the volunteers, etc.

[–] acosmichippo@lemmy.world 3 points 5 hours ago (1 children)

i see, the guy who is not a doctor but sells subscription services as “diet doctor” is continuing to fund the study until the results support his business.

[–] jet@hackertalks.com 0 points 5 hours ago* (last edited 5 hours ago)

DietDoctor is a group of doctors focused on metabolic health; it does not have a relationship with Feldman. https://www.dietdoctor.com/about/team-diet-doctor

David Feldman has never called himself a doctor.

Yes, people with agendas fund science. The results speak for themselves; that is the purpose of science: publish reproducible results for others to replicate.

[–] Reverendender@sh.itjust.works 5 points 1 day ago (1 children)

And someone immediately downvoted you

[–] jet@hackertalks.com 6 points 1 day ago* (last edited 1 day ago) (2 children)

Yeah, lemmy can be very emotional!

Trying to keep a community on topic without that level of gut reaction is a Sisyphean task https://discuss.online/modlog/696952?page=1&actionType=ModBanFromCommunity but I try anyway

[–] acosmichippo@lemmy.world 3 points 21 hours ago* (last edited 21 hours ago) (1 children)

13 of 20 bans for "Reason: Sockpuppet". looks like some good modding you do over there.

[–] jet@hackertalks.com 1 points 21 hours ago* (last edited 21 hours ago)

Thank you! Identifying sockpuppet accounts took some doing, but it has really been a fun adventure. Here is my moderation policy if you want the details: https://hackertalks.com/post/13655318

Thank you for your organic downvotes!

Actually it's 30 sockpuppet identifications so far.

[–] victorz@lemmy.world 3 points 1 day ago

I was about to post a comment: finally, a use for AI that feels like a justified way to spend energy.

[–] Devmapall@lemmy.zip 10 points 1 day ago (1 children)

There was also a study going around claiming that LLMs caused the accuracy of human cancer screenings to decrease. I'm not a scientist, but I'm pretty sure the sample size was super small and localized to one hospital?

Anyway, maybe they're remembering that in addition to the automatic AI-hating downvotes.

Not that I'm a fan of AI being shoved everywhere, but this isn't that.

[–] absentbird@lemmy.world 2 points 22 hours ago

Why would you use a large language model to examine a biopsy?

These should be specialized models trained off structured data sets, not the unbridled chaos of an LLM. They're both called "AI", but they're wildly different technologies.

It's like criticizing a doctor for relying on an air conditioner to keep samples cool when in fact they used a freezer, simply because the mechanism of refrigeration is similar.
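For anyone curious what "specialized model trained on structured data" looks like in practice, here is a minimal sketch; scikit-learn's bundled breast-cancer dataset is only a stand-in, and this is nothing like the actual model in the article, just the general shape of the approach.

```python
# Minimal sketch of a "specialized model on structured data" (illustrative only;
# not the model from the article). A small, task-specific, auditable classifier
# trained on labeled tabular features -- nothing like a general-purpose LLM.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)   # tabular diagnostic features + labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```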

[–] Perspectivist@feddit.uk 8 points 1 day ago* (last edited 1 day ago) (1 children)
[–] FauxLiving@lemmy.world 2 points 1 day ago (1 children)

A lot of people don’t realize that votes are public 🤓

[–] SugarCatDestroyer@lemmy.world 3 points 23 hours ago* (last edited 22 hours ago)

Well, it would be logical to say that anonymity is a threat. Plus, it makes it easier to block thought-criminals if they become a threat... :3

What anonymity? Don't joke with me here.

[–] pdxfed@lemmy.world 4 points 1 day ago

RFK

Cancer causes autism

[–] SugarCatDestroyer@lemmy.world -3 points 23 hours ago (1 children)

But I wouldn't count on miracles; these freaks at the top will use this AI for their own vile purposes.

[–] Taleya@aussie.zone 1 points 22 hours ago

This is pattern recognition, actual AI. Not LLM plagiarism code.

[–] GraniteM@lemmy.world 3 points 1 day ago (1 children)

I thought the article was telling an unmarried woman that AI can find the cancer pathologists she's been looking for. Not sure why they would be hiding.
