this post was submitted on 06 Dec 2025
43 points (92.2% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


With OpenAI’s memory upgrade, ChatGPT can recall everything you’ve ever shared with it, indefinitely. Similarly, Google has opened the context window with “Infini-attention,” letting large language models (LLMs) reference infinite inputs with zero memory loss. And in consumer-facing tools like ChatGPT or Gemini, this means persistent, personalized memory across conversations, unless you manually intervene.

The sales pitch is seductively simple: less friction, more relevance. Conversations that feel like continuity: “Systems that get to know you over your life,” as Sam Altman writes on X. Technology, finally, that meets you where you are.

In the age of hyper-personalization — of the TikTok For You page, Spotify Wrapped, and Netflix Your Next Watch — a conversational AI product that remembers everything about you feels perfectly, perhaps dangerously, natural.

Forgetting, then, begins to look like a flaw. A failure to retain. A bug in the code. Especially in our own lives, we treat memory loss as a tragedy, clinging to photo albums and cloud backups to preserve what time tries to erase.

But what if human forgetting is not a bug, but a feature? And what happens when we build machines that don't forget - machines now helping shape the human minds that do?

top 6 comments
[–] WalnutLum@lemmy.ml 5 points 1 hour ago

"Infini-attention" isn't perfect memory, it's highly compressed representations of the entire history. (https://arxiv.org/html/2404.07143v1).

Much like with quantization: the longer the context gets, the more the compression degrades recall.
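
For what it's worth, here's a rough numpy sketch of the compressive-memory idea from that paper. The update and retrieval rules follow the paper (M += σ(K)ᵀV, read as σ(q)M / σ(q)z), but everything else is a simplified toy, not Gemini's actual implementation:

```python
# Toy sketch of Infini-attention's compressive memory (arXiv:2404.07143):
# each segment folds its key/value pairs into a fixed-size matrix, so
# unbounded history fits in constant space - at the cost of lossy recall.
import numpy as np

d = 64                      # head dimension
M = np.zeros((d, d))        # compressive memory; its size never grows
z = np.zeros(d)             # normalization term

def elu1(x):
    """sigma(x) = ELU(x) + 1, the nonlinearity used in the paper."""
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0)))

def write_segment(K, V):
    """Fold one segment's keys/values into memory (linear-attention update)."""
    global M, z
    M += elu1(K).T @ V
    z += elu1(K).sum(axis=0)

def read(q):
    """Retrieve from memory: an approximation, never exact token recall."""
    s = elu1(q)
    return (s @ M) / (s @ z + 1e-6)

rng = np.random.default_rng(0)
for _ in range(1000):       # stream many segments into the same matrix
    K, V = rng.normal(size=(32, d)), rng.normal(size=(32, d))
    write_segment(K, V)

q = rng.normal(size=d)
print(read(q)[:4])          # approximate retrieval from 32,000 "tokens"
# Every segment shares the same d x d matrix, so older entries blur
# together - the "worse recall as context grows" effect described above.
```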

[–] Hexorg@beehaw.org 2 points 1 hour ago

It's an interesting perspective, except… that's not how AI works (even if it's advertised that way). Even the latest approach for ChatGPT is not perfect memory - it's glorified search functionality. When you type a prompt, the system can choose to search your older chats for related information and pull it into context… and what makes that information "related" is the big question here. It uses an embedding model to index and compare your chats. You can imagine it as a fuzzy paragraph search: not exact paragraphs, but paragraphs that roughly talk about the same topic…

It's not guaranteed that mentioning you don't like sushi in one chat means a later chat about picking a restaurant will pull the sushi chat in. And even if it does get pulled in, the model may choose to ignore it. And even if the model doesn't ignore it, you can choose to ignore it. Of course, the article talks about healing, so instead of sushi imagine we're talking about some trauma… OK, so you can choose not to reveal details of your trauma to AI (an overall good idea right now anyway), or you can choose to delete the chat - it won't index deleted chats.
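
To make that concrete, here's a toy sketch of that kind of embedding-based retrieval. The hashed bag-of-words embed() is a stand-in for a real embedding model, and none of this is OpenAI's actual pipeline - it just shows why retrieval is fuzzy rather than guaranteed:

```python
# Toy sketch of embedding-based chat retrieval: index past chats as
# vectors, then pull in the nearest ones when a new prompt arrives.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Stand-in embedding: hashed bag-of-words, normalized to unit length.
    A real system would use a learned embedding model instead."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

past_chats = [
    "I really don't like sushi, raw fish is not for me",
    "Planning a hiking trip in the Alps next summer",
    "Debugging a segfault in my C program",
]
index = np.stack([embed(c) for c in past_chats])

prompt = "Can you recommend a restaurant for dinner tonight?"
scores = index @ embed(prompt)     # cosine similarity (unit vectors)
best = int(np.argmax(scores))
# Retrieval is fuzzy: the sushi chat only gets pulled in if its vector
# happens to land near the prompt's, which is never guaranteed.
print(f"top match ({scores[best]:.2f}): {past_chats[best]}")
```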

At the same time, there are just about as many benefits to the model remembering something you didn't. Imagine a scenario where you mention a friend being mean to you, and later they're manipulating you again - maybe having the model remind you of the last bad encounter is good here? Just remember: AI is a machine, and you control both its inputs and what you do with its outputs.

[–] chicken@lemmy.dbzer0.com 8 points 11 hours ago* (last edited 11 hours ago)

I don't hate this article, but I'd rather have read a blog post grounded in the author's personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how these should work, but instead of being about that, the piece reaches for an air of objective certainty that falls flat because it never draws a strong connection to evidence.

Like this part:

Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation. Yet, infinite-memory LLM systems, much like personalization algorithms, are engineered explicitly for comfort. They wrap users in a cocoon of sameness by continuously repeating familiar conversational patterns, reinforcing existing user preferences and biases, and avoiding content or ideas that might challenge or discomfort the user.

While this engineered comfort may boost short-term satisfaction, its long-term effects are troubling. It replaces the discomfort necessary for cognitive growth with repetitive familiarity, effectively transforming your cognitive gym into a lazy river. Rather than stretching cognitive and emotional capacities, infinite-memory systems risk stagnating them, creating a psychological landscape devoid of intellectual curiosity and resilience.

So, how do we break free from this? If the risks of infinite memory are clear, the path forward must be just as intentional.

There's some hard evidence that stepping out of your comfort zone is good, but not really any that "infinite memory" features of personal AI assistants in practice prevent people from stepping out of theirs - just rhetorical speculation.

Which is a shame, because how that actually affects people is pretty interesting to me. The idea of using an LLM with these features always freaked me out a bit, and I quit using ChatGPT before they were implemented - but I want to know how it's going for the people who didn't, and who use it for stuff like the given example of picking a restaurant to eat at.

[–] TisButAScratch@piefed.zip 6 points 15 hours ago (1 children)

High-quality article. Thanks for sharing.

This reminds me of Black Mirror's "The Entire History of You". Definitely not a good idea.

[–] Quexotic@beehaw.org 1 points 5 hours ago

I think they were confused and saw Black Mirror as an instruction guide.

[–] Kirk@startrek.website 4 points 21 hours ago

Great read, I was unfamiliar with this publication. Thanks for sharing.