With OpenAI’s memory upgrade, ChatGPT can recall everything you’ve ever shared with it, indefinitely. Similarly, Google has opened the context window with “Infini-attention,” letting large language models (LLMs) reference infinite inputs with zero memory loss. And in consumer-facing tools like ChatGPT or Gemini, this means persistent, personalized memory across conversations, unless you manually intervene.

The sales pitch is seductively simple: less friction, more relevance. Conversations that feel like continuity: “Systems that get to know you over your life,” as Sam Altman writes on X. Technology, finally, that meets you where you are.

In the age of hyper-personalization — of the TikTok For You page, Spotify Wrapped, and Netflix Your Next Watch — a conversational AI product that remembers everything about you feels perfectly, perhaps dangerously, natural.

Forgetting, then, begins to look like a flaw. A failure to retain. A bug in the code. Especially in our own lives, we treat memory loss as a tragedy, clinging to photo albums and cloud backups to preserve what time tries to erase.

But what if human forgetting is not a bug, but a feature? And what happens when machines that never forget start helping shape the human minds that do?

Hexorg@beehaw.org:

It's an interesting perspective, except… that's not how AI works (even if it's advertised that way). Even ChatGPT's latest approach isn't perfect memory - it's glorified search. When you type a prompt, the system can choose to search your older chats for related information and pull it into context… and what makes that information "related" is the big question here - it uses an embedding model to index and compare your chats. You can imagine it as a fuzzy paragraph search - not exact matches, but paragraphs that roughly talk about the same topic.
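To make the "fuzzy paragraph search" idea concrete, here's a minimal sketch in Python. To be clear, none of this is OpenAI's actual pipeline - the embedding model, the chunking, and the threshold are all assumptions for illustration, with sentence-transformers standing in for whatever embedding model they really use:

```python
# Toy sketch of embedding-based retrieval over past chats.
# The real pipeline (model, chunking, threshold) is not public;
# sentence-transformers here is just a stand-in embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Past chat snippets, indexed once by embedding them.
past_chats = [
    "I really don't like sushi, the texture puts me off.",
    "Help me debug this Rust borrow checker error.",
    "What's a good beginner recipe for ramen at home?",
]
index = model.encode(past_chats, normalize_embeddings=True)

def search(prompt: str, threshold: float = 0.4):
    """Return past snippets whose cosine similarity to the prompt
    clears the threshold -- a fuzzy 'roughly the same topic' match."""
    q = model.encode([prompt], normalize_embeddings=True)[0]
    scores = index @ q  # cosine similarity, since vectors are normalized
    return [(past_chats[i], float(s))
            for i, s in enumerate(scores) if s >= threshold]

# A restaurant prompt may or may not clear the threshold for the
# sushi snippet -- retrieval is fuzzy, not guaranteed.
print(search("Recommend a restaurant for dinner tonight"))
```

The threshold is the whole game: set it high and the sushi chat never comes back for a restaurant prompt; set it low and unrelated chats flood the context.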

It's not a guarantee that mentioning you don't like sushi in one chat means a later chat about your restaurant of choice will pull in the sushi chat. And even if it does pull that in, the model may choose to ignore it. And even if the model doesn't ignore it - you can choose to ignore it. Of course, the article talks about healing, so I imagine instead of sushi we're talking about some trauma… Okay, so you can choose not to reveal details of your trauma to AI (that's an overall good idea right now anyway). Or you can choose to delete the chat - it won't index deleted chats.
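Deletion is easy to reason about in this picture: once a chat's vectors are gone from the index, no prompt can retrieve it. Continuing the hypothetical sketch above:

```python
# Continuing the toy sketch: dropping a chat from the index means
# no future prompt can pull it back into context.
def delete_chat(i: int) -> None:
    global past_chats, index
    past_chats = past_chats[:i] + past_chats[i + 1:]
    index = np.delete(index, i, axis=0)

delete_chat(0)  # remove the sushi chat
print(search("Recommend a restaurant for dinner tonight"))  # sushi can no longer match
```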

At the same time, there are just about as many benefits to the model remembering something you didn't. You can imagine a scenario where you mentioned your friend being mean to you, and later they're manipulating you again. Maybe having the model remind you of the last bad encounter is good here? Just remember: AI is a machine, and you control both its inputs and what you do with its outputs.