omar-florez / memory-augmented-neural-network

Given an annotated utterance (x, y), we encode x with an encoder (e.g., an LSTM or a Transformer) and cache similar latent representations generated during training in an external memory. Storage and retrieval are differentiable, implemented as attention over memory entries, and extend the encoder's capacity.
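The attention-based storage and retrieval described above can be sketched as follows. This is a minimal illustration, not the repository's actual API: the `Memory` class, its `read`/`write` methods, and the soft-write update rule are all assumptions for exposition.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class Memory:
    """Hypothetical external memory: keys cache latent representations,
    values hold associated payloads (e.g., label embeddings)."""

    def __init__(self, num_slots, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = rng.normal(size=(num_slots, dim))
        self.values = rng.normal(size=(num_slots, dim))

    def read(self, query):
        # Attention over memory entries: dot-product similarity -> softmax weights.
        weights = softmax(self.keys @ query)
        # Retrieval is a convex combination of stored values, so the
        # operation is differentiable and gradients can flow through it.
        return weights @ self.values, weights

    def write(self, key, value, lr=0.5):
        # Soft write (an assumed update rule): move the most-attended slot
        # toward the new (key, value) pair.
        weights = softmax(self.keys @ key)
        i = int(weights.argmax())
        self.keys[i] = (1 - lr) * self.keys[i] + lr * key
        self.values[i] = (1 - lr) * self.values[i] + lr * value

mem = Memory(num_slots=8, dim=4)
query = np.ones(4)                    # stand-in for an encoded utterance
retrieved, weights = mem.read(query)
print(retrieved.shape)                # (4,)
print(float(weights.sum()))           # attention weights sum to 1
```

In a full training loop the encoder output would serve as the query, and the softmax-weighted retrieval keeps the whole read path differentiable end to end.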
