"Infini-attention" isn't perfect memory, it's highly compressed representations of the entire history. (https://arxiv.org/html/2404.07143v1).
Much like quantization. The longer the context gets the worse the compression makes recall.
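For anyone curious what "highly compressed" means here: the paper's compressive memory is essentially a linear-attention update, where each segment's keys and values get folded into one fixed-size matrix and later queries read back a lossy reconstruction of the past. Here's a toy NumPy sketch of that idea - my own illustration, with made-up dimensions and function names, not the paper's code:

```python
import numpy as np

def elu_plus_one(x):
    # Nonlinearity used for the associative memory (ELU + 1); keeps values positive
    return np.where(x > 0, x + 1.0, np.exp(x))

def update_memory(M, z, K, V):
    # Fold a new segment's keys/values into the fixed-size memory.
    # M: (d_k, d_v) memory matrix, z: (d_k,) normalization term
    sK = elu_plus_one(K)            # (n, d_k)
    M = M + sK.T @ V                # accumulate key-value associations
    z = z + sK.sum(axis=0)          # accumulate key mass for normalization
    return M, z

def retrieve(M, z, Q, eps=1e-6):
    # Read old context back out of the compressed memory for new queries.
    sQ = elu_plus_one(Q)            # (m, d_k)
    return (sQ @ M) / (sQ @ z[:, None] + eps)   # (m, d_v)

# Toy usage: two "segments" of history squeezed into one (d_k x d_v) matrix
d_k, d_v = 8, 8
M, z = np.zeros((d_k, d_v)), np.zeros(d_k)
for _ in range(2):
    K, V = np.random.randn(16, d_k), np.random.randn(16, d_v)
    M, z = update_memory(M, z, K, V)
Q = np.random.randn(4, d_k)
print(retrieve(M, z, Q).shape)      # (4, 8) -- lossy recall of everything so far
```

The memory never grows, no matter how many segments you fold in, which is exactly why recall gets fuzzier as the history piles up.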
"Infini-attention" isn't perfect memory, it's highly compressed representations of the entire history. (https://arxiv.org/html/2404.07143v1).
Much like quantization. The longer the context gets the worse the compression makes recall.
It's an interesting perspective, except… that's not how AI works (even if it's advertised that way). Even the latest approach for ChatGPT isn't perfect memory. It's a glorified search feature. When you type a prompt, the system can choose to search your older chats for related information and pull it into context… what makes that information "related" is the big question here - it uses an embedding model to index and compare your chats. You can imagine it as a fuzzy paragraph search - not exact paragraphs, but paragraphs that roughly talk about the same topic (there's a rough sketch of this at the end of this comment)…
It's not a guarantee that if you mention not liking sushi in one chat, a later chat about picking a restaurant will pull in the sushi chat. And even if it does pull that in, the model may choose to ignore it. And even if the model doesn't ignore it - you can choose to ignore it. Of course the article talks about healing, so I imagine instead of sushi we're talking about some trauma… OK, so you can choose not to reveal details of your trauma to AI (that's an overall good idea right now anyway). Or you can choose to delete the chat - it won't index deleted chats.
At the same time - there are just about as many benefits to the model remembering something you didn't. You can imagine a scenario where you mentioned your friend being mean to you and later they're manipulating you again. Maybe having the model remind you of that last bad encounter is good here? Just remember - AI is a machine, and you control both its inputs and what you do with its outputs.
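To make the "fuzzy paragraph search" concrete, here's a minimal sketch of embedding-based retrieval over past chats. It uses the open-source sentence-transformers library as a stand-in - whatever OpenAI actually runs isn't public, and the chats and model name here are purely illustrative:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

past_chats = [
    "I tried the new ramen place downtown, but honestly I don't like sushi or most Japanese food.",
    "Debugging a race condition in my Go service all weekend.",
    "Planning a birthday dinner, looking for somewhere with good vegetarian options.",
]
# Index past chats as vectors once, up front
chat_vecs = model.encode(past_chats, normalize_embeddings=True)

prompt = "Can you recommend a restaurant for Friday night?"
prompt_vec = model.encode([prompt], normalize_embeddings=True)[0]

# Cosine similarity (vectors are normalized, so a dot product suffices)
scores = chat_vecs @ prompt_vec
for chat, score in sorted(zip(past_chats, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {chat}")
```

Whichever chats score highest get pulled into context - and nothing guarantees the sushi remark scores high enough to make the cut.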
I don't hate this article, but I'd rather have read a blog post grounded in the author's personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how these systems should work, but instead of leaning on that experience, the piece tries to project a lot of objective certainty, and it falls flat because it never draws a strong connection between the evidence and the claims.
Like this part:
Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation. Yet, infinite-memory LLM systems, much like personalization algorithms, are engineered explicitly for comfort. They wrap users in a cocoon of sameness by continuously repeating familiar conversational patterns, reinforcing existing user preferences and biases, and avoiding content or ideas that might challenge or discomfort the user.
While this engineered comfort may boost short-term satisfaction, its long-term effects are troubling. It replaces the discomfort necessary for cognitive growth with repetitive familiarity, effectively transforming your cognitive gym into a lazy river. Rather than stretching cognitive and emotional capacities, infinite-memory systems risk stagnating them, creating a psychological landscape devoid of intellectual curiosity and resilience.
So, how do we break free from this? If the risks of infinite memory are clear, the path forward must be just as intentional.
There's some hard evidence that stepping out of your comfort zone is good, but not really any that the "infinite memory" features of personal AI assistants actually keep people from doing so in practice - just rhetorical speculation.
Which is a shame, because how that affects people is pretty interesting to me. The idea of using an LLM with these features always freaked me out a bit, and I quit using ChatGPT before they were implemented, but I want to know how it's going for the people who didn't, and who use it for things like the given example of picking a restaurant to eat at.
High quality article. Thanks for sharing.
This reminds me of Black Mirror's "The Entire History of You". Definitely not a good idea.
I think they were confused and saw Black Mirror as an instruction guide.
Great read, I was unfamiliar with this publication. Thanks for sharing.