this post was submitted on 25 Aug 2025
388 points (99.0% liked)

People Twitter

8006 readers
1773 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
all 40 comments
[–] Soup@lemmy.world 1 points 5 minutes ago

“While she was reading it”? Like, as if it takes a computer long enough to read something that it will stop for a break and comment to others?

These people are weird.

[–] Jankatarch@lemmy.world 20 points 10 hours ago (1 children)

You reckon they used AI for assistance while writing too?

[–] Evotech@lemmy.world 4 points 7 hours ago
[–] sanpedropeddler@sh.itjust.works 61 points 14 hours ago (2 children)

This person has a very interesting relationship with their rep. I've never even met my congressman personally.

[–] P00ptart@lemmy.world 27 points 11 hours ago

Met mine once in Iraq. He gave us all medals that he was unauthorized to give, and the state Congress decided we shouldn't get them, so I have a memento of government stupidity in the form of something I'm not authorized to wear.

[–] garbagebagel@lemmy.world 9 points 12 hours ago (1 children)

I almost wish my rep was AI tbh. Might be less shitty.

[–] Test_Tickles@lemmy.world 2 points 10 hours ago (1 children)

Sounds like you are also from the south.

[–] garbagebagel@lemmy.world 2 points 8 hours ago

Nah, just a conservative part of Canada (not quite as bad as the south yet, but it would be if they had their way).

[–] Stillwater@sh.itjust.works 226 points 21 hours ago* (last edited 21 hours ago) (2 children)

"She seemed disappointed to learn there were sequels" got me, lol

[–] Tar_alcaran@sh.itjust.works 39 points 20 hours ago

Wtf, I love AI now!

Though not quite in the same way as OOP.

[–] TheBat@lemmy.world 29 points 21 hours ago (8 children)

Me when I finished Three Body Problem (I hated it).

[–] ieGod@lemmy.zip 3 points 9 hours ago

It's very poorly written.

[–] P00ptart@lemmy.world 2 points 11 hours ago (2 children)
[–] TheBat@lemmy.world 3 points 10 hours ago

First book only.

[–] P00ptart@lemmy.world 1 points 10 hours ago

I couldn't even do that. Nor could I finish the first season of the show.

[–] KazuyaDarklight@lemmy.world 18 points 20 hours ago

Yeah, I was interested at the start and then it started, and continued, to go downhill for me. I kept going in spite of growing concern because I hoped they'd tie it up well in the end, but no. It may partially be cultural and I can also see some argument for artistic tragedy, but it just didn't work for me.

[–] Valmond@lemmy.world 6 points 17 hours ago

History stuff: interesting but unnecessarily violent (for me)

The rest: gibberish

[–] clif@lemmy.world 4 points 15 hours ago

Uh oh. I've had it (the first one) sitting on my shelf for a few weeks but need to finish my current series first.

[–] zammy95@lemmy.world 10 points 20 hours ago* (last edited 20 hours ago) (2 children)

Wow guess I'm the outlier? I couldn't put 3BP down, and then I got to the Dark Forest and loved it even more.

Death's End fell apart terribly though.

It's not that I liked how he wrote; I'm not sure if the translation was to blame, but he did not seem like a good writer at all. I was very intrigued by his plot and ideas, though.

[–] I_Fart_Glitter@lemmy.world 4 points 18 hours ago (2 children)

That’s how I felt too. I was excited for Death's End, but I put it down about halfway through and never got back to it.

[–] zammy95@lemmy.world 1 points 54 minutes ago

I finished it, and it just got progressively more and more scrambled. I guess he was diagnosed with cancer while writing it and tried to cram as many of his ideas into this one novel as he could. And then it turned out it was a misdiagnosis or something? I can't remember exactly, but yeah, definitely not his best work lol

[–] P00ptart@lemmy.world 2 points 11 hours ago

That was me for the first book, and then I hoped the TV show would get me back into it but I quit that halfway through as well.

[–] clay_pidgin@sh.itjust.works 11 points 21 hours ago (1 children)

Same here. I feel like an outlier!

[–] Zirconium@lemmy.world 3 points 16 hours ago

I don't know why this happens to me, but sci-fi books that have "boring" characters, like Raft by Stephen Baxter, are books I actually really enjoy, and I find the characters to be realistic. Maybe I'm just a boring person irl

[–] Reverendender@sh.itjust.works 4 points 20 hours ago

I’ve tried like 6 times to keep slogging through. I was convinced it would be great if I just got to the point where it started being great. Now I feel validated.

[–] logicbomb@lemmy.world 44 points 20 hours ago (3 children)

A lot of writers just write for themselves, and don't really think or care about what other people might think when they read it. That's perfectly fine, by the way. Writing can be a worthwhile effort even if nobody ever reads it.

But if you want other people to enjoy it, then you have to keep them in mind. And honestly, this sort of feedback should be invaluable to authors, assuming it's not an AI hallucination.

[–] Tar_alcaran@sh.itjust.works 37 points 20 hours ago (1 children)

LLMs are pretty shit at analysis, so the odds of this just being bullshit are high.

[–] logicbomb@lemmy.world 6 points 20 hours ago (4 children)

Yeah, I was surprised when they said it could summarize the plot and talk about the characters. To my knowledge, an LLM's only memory is its prompt (the context window), so it shouldn't be able to analyze an entire novel. I'm guessing if an LLM could do something like this, it would only be because the plot was already summarized at the end of the novel.
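The context-window point above can be sketched in a few lines. This is plain Python with no real model; the whitespace "tokenizer" and the tiny window size are made-up stand-ins:

```python
# Toy illustration: an LLM only "sees" what fits in its context window.
# Splitting on whitespace stands in for a real tokenizer, and the window
# size here is invented for the demo.

CONTEXT_WINDOW = 8  # tokens our hypothetical model can attend to

def visible_text(document: str, window: int = CONTEXT_WINDOW) -> str:
    """Return only the last `window` tokens -- everything earlier is
    silently dropped, so the model cannot 'analyze' it."""
    tokens = document.split()
    return " ".join(tokens[-window:])

novel = "chapter one ... " * 5 + "the plot summary appears at the end"

# Only the tail survives -- which is exactly why a summary printed at the
# end of a book would be the one part the model can still quote.
print(visible_text(novel))
```

The point of the sketch: whatever falls outside the window might as well not exist, so any apparent "analysis" of a whole novel is really analysis of whatever slice (or pre-existing summary) fit in the prompt.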

[–] Tar_alcaran@sh.itjust.works 2 points 8 hours ago

Summarizing is entirely different from analyzing, though. It's a "skill" that's baked into LLMs, because that's how they manage all information. But any analysis would be based on a summary, which loses a massive amount of resolution.

[–] frezik@lemmy.blahaj.zone 10 points 18 hours ago* (last edited 18 hours ago) (1 children)

I once asked ChatGPT for an opinion on my blog and gave the web address. It summarized some historical posts accurately enough. It was definitely making use of the content, and not just my prompt. Flattered me with saying "the author shows a curious mind". ChatGPT is good at flattery (in fact, it seems to be trained specifically to do it, and this is part of OpenAI's marketing strategy).

For the record, yes, this is a bit narcissistic, just like googling yourself. Except you do need to google yourself every once in a while to know what certain people, like employers, are going to see when they do it. Unfortunately, I think we're going to have to start doing the same with ChatGPT and other popular models. No, I don't like that, either.

[–] oddlyqueer@lemmy.ml 1 points 8 hours ago

I just had a horrifying vision of AI SM tools that help you optimize your public presentation. Get AI critiques as well as tips for appearing more favorable. People do it because you need to be well-received by AI evaluators to get a job. Gradually social pressure evolves all public figures (famous or not) into polished cartoon figures. The real horror of the dead internet is that we'll do it to ourselves.

[–] baguettefish@discuss.tchncs.de 9 points 19 hours ago

chatbots also usually have a database of key facts to query, and modern context windows can get very very long (with the right chatbot). but yeah the author probably imagined a lot of complexity and nuance and understanding that isn't there

[–] L0rdMathias@sh.itjust.works 3 points 18 hours ago (1 children)

Yes, but actually no. LLMs can be set up in such a way that they remember previous prompts; most if not all of the AI web services do not enable this by default, if they even allow it as an option.

[–] logicbomb@lemmy.world 7 points 17 hours ago

LLMs can be setup in such a way where they remember previous prompts

All of that stuff is just added to their current prompt. That's how that function works.
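A minimal sketch of that mechanism, assuming an OpenAI-style message list (the role/content names are illustrative and no real API is called): each turn, the entire history is flattened back into one prompt, which is the whole of the "memory".

```python
# Toy chat loop: the "memory" of earlier turns is nothing more than the
# accumulated message list being re-sent as the prompt every turn.

def build_prompt(history: list[dict]) -> str:
    """Flatten the whole conversation into the single prompt string the
    model would actually receive this turn."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in history)

history = []
for user_msg in ["Hi", "What did I just say?"]:
    history.append({"role": "user", "content": user_msg})
    prompt = build_prompt(history)  # earlier turns ride along in here
    reply = f"(stand-in reply after seeing {len(history)} message(s))"
    history.append({"role": "assistant", "content": reply})

# The final prompt contains the first turn verbatim -- that's the "memory".
print(prompt)
```

So "remembering previous prompts" is just prompt construction: delete the history list and the model has never heard of you.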

[–] ch00f@lemmy.world 22 points 19 hours ago (1 children)

"She listed three characters"

AI does everything in threes. It likely "decided" not to like three characters not because exactly three characters were bad, but because it always produces three bullet points.

[–] ech@lemmy.ca 4 points 13 hours ago

It didn't "decide" to "not like" anything. It can't do either.

[–] ech@lemmy.ca 6 points 20 hours ago

assuming it’s not an AI hallucination.

All output from an LLM is a "hallucination". That's the core function of the algorithm.

[–] SkunkWorkz@lemmy.world 11 points 21 hours ago

lol they can’t even enthuse an AI