

Poster in Workshop: New Frontiers in Associative Memories

Re:Frame - Retrieving Experience From Associative Memory

Daniil Zelezetsky · Egor Cherepanov · Aleksey Kovalev · Aleksandr Panov


Abstract:

Transformers have demonstrated strong performance in offline reinforcement learning (RL) on Markovian tasks, thanks to their ability to process historical information efficiently. However, in partially observable environments, where agents must rely on past experience to make decisions in the present, transformers are limited by their fixed context window and struggle to capture long-term dependencies. Extending this window indefinitely is infeasible because of the quadratic complexity of the attention mechanism, which motivated us to look for alternative ways of handling memory. In neurobiology, associative memory allows the brain to link different stimuli: neurons that activate simultaneously form associations between experiences that occurred around the same time. Motivated by this biological mechanism, we introduce Re:Frame (Retrieving Experience From Associative Memory), a novel RL algorithm that enables agents to make better use of their past experience. Re:Frame incorporates a long-term memory mechanism that improves decision-making in complex tasks by seamlessly integrating past and present information.
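The abstract describes the memory mechanism only at a high level, so the sketch below is our illustration, not the authors' released code. It shows one plausible reading of the idea: embeddings of past experience are written to an external slot memory, and the agent's current transformer context queries that memory via cross-attention, fusing retrieved long-term information with the present context. The slot count, the FIFO write rule, and fusion by residual addition are all assumptions made for the example.

```python
# Minimal sketch (assumptions noted above) of retrieval from an external
# associative memory via cross-attention; not the Re:Frame implementation.
import torch
import torch.nn as nn


class AssociativeMemory(nn.Module):
    """Fixed-size slot memory queried by cross-attention."""

    def __init__(self, num_slots: int = 128, dim: int = 64):
        super().__init__()
        self.register_buffer("slots", torch.zeros(num_slots, dim))
        self.ptr = 0  # next slot to overwrite (FIFO write rule, an assumption)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    @torch.no_grad()
    def write(self, embedding: torch.Tensor) -> None:
        # Store one past-experience embedding of shape (dim,) in the next slot.
        self.slots[self.ptr] = embedding
        self.ptr = (self.ptr + 1) % self.slots.size(0)

    def retrieve(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, seq, dim) tokens of the agent's current context.
        mem = self.slots.unsqueeze(0).expand(query.size(0), -1, -1)
        retrieved, _ = self.attn(query, mem, mem)
        # Fuse retrieved long-term information with the present context.
        return query + retrieved


if __name__ == "__main__":
    memory = AssociativeMemory()
    for _ in range(10):              # write some past-experience embeddings
        memory.write(torch.randn(64))
    context = torch.randn(2, 8, 64)  # current transformer context window
    fused = memory.retrieve(context)
    print(fused.shape)               # torch.Size([2, 8, 64])
```

Because the memory sits outside the transformer's context window, retrieval cost grows with the number of slots rather than with episode length, which is the property the abstract's argument about quadratic attention complexity relies on.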
