this post was submitted on 25 Aug 2025
471 points (98.8% liked)

People Twitter

8037 readers
1293 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
MODERATORS
[–] Tar_alcaran@sh.itjust.works 44 points 3 days ago (8 children)

LLMs are pretty shit at analysis, so the odds of this just being bullshit are high.

[–] logicbomb@lemmy.world 7 points 3 days ago (7 children)

Yeah, I was surprised when they said it could summarize the plot and talk about the characters. To my knowledge, an LLM's only memory is its context window (however long the prompt is), so it shouldn't be able to analyze an entire novel unless the whole text fits in that window. I'm guessing if an LLM could do something like this, it would only be because the plot was already summarized at the end of the novel.
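The "does a novel fit in the prompt" question is just arithmetic. Here's a rough back-of-the-envelope sketch; the 1.33 tokens-per-word ratio and the 128k context window are assumptions (ratios vary by tokenizer and language, and window sizes vary by model):

```python
def estimated_tokens(word_count, tokens_per_word=1.33):
    """Rough token estimate: English prose averages ~1.3 tokens/word."""
    return int(word_count * tokens_per_word)

def fits_in_context(word_count, context_window=128_000):
    """Check whether a text of `word_count` words fits in an assumed context window."""
    return estimated_tokens(word_count) <= context_window

# A typical novel runs ~90,000 words, i.e. roughly 120k tokens.
print(fits_in_context(90_000))                          # True: just squeezes into 128k
print(fits_in_context(90_000, context_window=32_000))   # False: far too big for 32k
```

So whether a model can "read" a whole novel in one go depends entirely on which model and how long the book is; older or smaller context windows rule it out immediately.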

[–] frezik@lemmy.blahaj.zone 14 points 3 days ago* (last edited 3 days ago) (2 children)

I once asked ChatGPT for an opinion on my blog and gave it the web address. It summarized some historical posts accurately enough. It was definitely making use of the content, and not just my prompt. It flattered me by saying "the author shows a curious mind". ChatGPT is good at flattery (in fact, it seems to be trained specifically to do it, and this is part of OpenAI's marketing strategy).

For the record, yes, this is a bit narcissistic, just like googling yourself. Except you do need to google yourself every once in a while to know what certain people, like employers, are going to see when they do it. Unfortunately, I think we're going to have to start doing the same with ChatGPT and other popular models. No, I don't like that, either.

[–] ruan@lemmy.eco.br 1 points 2 days ago

It was definitely making use of the content, and not just my prompt.

...

Ok, being simplistic about the actual workings: anything an LLM outputs is based only on the training data or the prompt; an LLM does not "create" anything.

I really doubt your blog is represented significantly enough in the training data, so I can only assume that yes, the blog URL you referenced was scraped by ChatGPT, along with any URLs linked from that main page that the scraper deemed relevant to the prompt, and all that text was in fact added to the full internal prompt that the actual LLM processed.
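The mechanism described above (scrape the URL, then stuff the text into the prompt the model actually sees) can be sketched like this. This is a hypothetical illustration, not OpenAI's actual pipeline; the function names and the prompt template are made up, and the character budget stands in for a real token budget:

```python
def build_augmented_prompt(user_prompt, fetched_pages, max_chars=8_000):
    """Join scraped page text and prepend it to the user's question,
    truncating so the combined prompt stays within a size budget."""
    context = "\n\n".join(fetched_pages)[:max_chars]
    return (
        "Use the following web content when answering.\n\n"
        f"--- WEB CONTENT ---\n{context}\n--- END WEB CONTENT ---\n\n"
        f"User: {user_prompt}"
    )

# The model never "remembers" the blog; it just receives this larger prompt.
prompt = build_augmented_prompt(
    "What is this blog about?",
    ["Post 1: thoughts on compilers...", "Post 2: home networking..."],
)
print(prompt)
```

From the LLM's point of view there is no difference between text the user typed and text a scraper fetched; both arrive as one big prompt, which is why the summary can be accurate without the blog ever appearing in the training data.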
