this post was submitted on 28 Feb 2026
437 points (96.2% liked)

Technology

[–] voldage@lemmy.world -1 points 5 hours ago (1 children)

A few days ago a friend linked me a Danish research paper and claimed it showed that higher wages for women lead to a decrease in children being born, while higher male wages have the opposite effect. I don't have the skills to parse this kind of paper quickly, nor an understanding of a lot of the terminology. I told ChatGPT to read it and contrast it with the arguments being made; it pointed out that the term "marginal net-of-tax wage" means something different from "wage", and that the paper suggests tax laws incentivizing longer working hours lowered fertility, rather than higher salaries for women doing so. My friend asked me to point out exactly where the paper said that, and again I had to lean on the LLM to get me the page numbers. I eventually convinced my friend that he'd been duped by right-wing talking points and got him to think a bit.

So, if I hadn't done that and had just read the paper's conclusion, I'd probably have agreed with him instead, since just googling it led to the right-wing trolls making those claims. Was this a good use case for an LLM, to get me some counterarguments, or would it have been better if I'd stayed true to my ideals and not used those tools? Was I rude to argue against a point about research that neither of us understood from the get-go by using genAI to parse through it? While I do agree that the companies developing these tools are evil and need to be stopped, there is a utility to them that I don't think is available elsewhere.

Would losing that argument and believing that women should have lower salaries to increase fertility (because I believe in science, and this paper seemed to be referenced a lot; also, if anything, capitalism would be to blame, so probably not as bad) be better than normalizing the use of the devil-tech but having myself and my friend better informed? I am legitimately not sure, but I think I did the right thing? What should I have done? I don't have the skills, nor the time, nor the will to read scientific papers that aren't related to my area of expertise, especially when the person linking them didn't do any research either.

I am also genuinely exhausted from defending my left-wing points of view against the constant barrage of underhanded and often completely baseless arguments some of my coworkers and friends make to convince me I'm wrong and the default consensus is right. Is it bad to use genAI to figure out some counterpoints? Or should I give up and admit I'm not good or committed enough to make them myself? Right-wing people often argue in bad faith and don't take counterpoints to heart, but sometimes they do, even if the original point was just made to rile me up. So, am I the asshole? Am I wrong? I seriously don't know.

[–] Bibip@programming.dev 2 points 3 hours ago

a layperson cannot be relied upon to draw meaningful conclusions from a scholarly article. i learned this when i tried to do it myself. have you ever tried to read a spanish book, without knowing spanish, with nothing but an english-spanish dictionary? it's very slow going, and it works alright until someone speaks in idiom or metaphor, but even then you can mostly still get it. that is not always the case with scholarly articles.

moreover, it's a waste of time. if it takes you 30 hours to look up every term and graph, but it would have taken your biologist friend 20 minutes to synthesize it for you, there's an obvious solution here. and if an LLM can save you those 30 hours, and your biologist friend those 20 minutes, it's a useful tool.