[–] magnue@lemmy.world 5 points 3 hours ago

If you supplied humans with the same image and asked for the same estimate, I'd be curious to know the difference in results.

[–] jj4211@lemmy.world 1 point 9 minutes ago

Mine would be: "I have no idea", an answer LLMs generally refuse to give by their nature (when one does decline to answer, it's usually because something in the context indicates that refusal is the proper text).

If you really pressed them, they'd probably Google each item and sum the results, so the estimates would be about as consistent as the top Google results.

LLMs have a tendency to emit a plausible answer without regard for the facts one way or the other. We try to steer them by stuffing the context with factual material drawn from traditionally 'fact'-based sources, but if the context doesn't contain factual data to steer the output, the output is driven purely by narrative consistency rather than data consistency. Sometimes it drifts that way even when the context does contain fact-based content.
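
To make that concrete, here's a minimal sketch of that kind of context stuffing. The `search_facts` and `llm_complete` functions are hypothetical stand-ins I'm inventing for illustration, not any real system's API; the point is just that the prompt only carries factual steering when retrieval actually returns something.

```python
# Minimal sketch of context stuffing (RAG-style prompt assembly).
# search_facts and llm_complete are hypothetical stand-ins, not a real API.

def search_facts(question: str) -> list[str]:
    """Return factual snippets relevant to the question (may be empty)."""
    return []  # placeholder: a real version would query a search index


def llm_complete(prompt: str) -> str:
    """Stand-in for a call to some LLM completion endpoint."""
    return "(model output would go here)"


def answer(question: str) -> str:
    facts = search_facts(question)
    if facts:
        # Facts in the context steer the output toward data consistency.
        bullet_list = "\n".join(f"- {fact}" for fact in facts)
        prompt = (
            "Answer using only the facts below. If they are insufficient, "
            "say 'I have no idea'.\n"
            f"Facts:\n{bullet_list}\n\nQuestion: {question}\nAnswer:"
        )
    else:
        # Nothing factual to steer with: the model still produces fluent
        # text, but it is shaped by narrative consistency alone.
        prompt = f"Question: {question}\nAnswer:"
    return llm_complete(prompt)


print(answer("What is the combined weight of everything in this photo?"))
```

In the empty-retrieval branch, nothing stops the model from generating a confident-sounding number; the "say 'I have no idea'" instruction only exists in the branch where facts were found, which mirrors the point above about refusals being rooted in the context.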