this post was submitted on 28 Feb 2026
388 points (96.2% liked)

Technology

82015 readers
4113 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] pixxelkick@lemmy.world 117 points 20 hours ago (4 children)

Something some coworkers have started doing that I find even more rude, as a new social etiquette, is AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and pasting the answer back to me

Not even an "I asked ChatGPT and it said"; they just dump it in the chat @ me

Sometimes I'll write up a 2-3 paragraph thought on something.

And then I'll get a ping 15 minutes later, go take a look at what someone responded with, annnd... it starts with "Here's a quick summary of what (pixxelkick) said!"

I find this horribly rude tbh, because:

  1. If I wanted to be AI summarized, I would do that myself damnit
  2. You just clogged up the chat with garbage
  3. like 70% of the time it misquotes me or gets my points wrong, which muddies the convo
  4. It's just kind of... dismissive? Instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent, and dump it straight into the chat as if that's now the talking point instead of my own post one comment up

I have had to very gently respond each time someone does this at work and state that I am perfectly able to AI-summarize myself on my own, and while I appreciate the attempt, it's... just coming across as wasting everyone's time.

[–] Vlyn@lemmy.zip 2 points 2 hours ago

Oof, I don't even get what they are trying to accomplish there. Maybe they had some kind of social training that told them to "summarize and repeat what you understood first to show that you listened and avoid miscommunication, then add your response," and their brain short-circuited into thinking a ChatGPT summary is the same thing.

I'd get pretty hostile at work if someone started doing that.

[–] doesit@sh.itjust.works 1 point 3 hours ago* (last edited 3 hours ago) (1 children)

I'd leave the "I appreciate the attempt" out. You don't.
More importantly, I would enquire whether they're using a corporate or a free AI. The latter is used for training and has little or no protection for (perhaps sensitive) corporate info/data.

[–] nickiwest@lemmy.world 2 points 2 hours ago

I think at some point it will come out that the corporate subscription is no different, and that the LLM companies have been scraping everything for training data anyway.

[–] MrKoyun@lemmy.world 10 points 8 hours ago

I hate people so fucking much

[–] XLE@piefed.social 30 points 15 hours ago (1 children)

This is sad, really. People are fed the lie that AI is objective, so apparently they think that if they run what you said through a chatbot, they'll get the objective summary of it.

And the more people interact with chatbots, the harder they find it to interact outside of the chatbots. So they might feel even more uncomfortable with asking you to summarize yourself. So they go back to the chatbot. It's a self-perpetuating cycle.

[–] ErmahgherdDavid@lemmy.dbzer0.com 6 points 3 hours ago* (last edited 3 hours ago)

Exactly. To your point, AI output is, probabilistically, the average opinion of everyone on the internet, so it shares the common biases of the general public, even with a bit of RLHF to "balance out" the models. It also probably doesn't help to anthropomorphise them. They don't have opinions; they just autocomplete based on prior input.

It seems pretty clear, after a few years of people getting AI psychosis, that LLMs are an addictive psychological hazard.