deepaksuresh / MemN2N

Implementation of End-To-End Memory Networks in Flux

Pure Julia implementation of End-To-End Memory Networks (MemN2N), introduced by Sainbayar Sukhbaatar et al. in 2015: a form of Memory Network that incorporates a recurrent attention model over a potentially large external memory. Key features include:

  • Recurrent Attention Model: The network re-attends to different parts of the memory on each computational step, or "hop."
  • End-to-End Training: Unlike earlier Memory Networks, End-To-End Memory Networks can be trained with much less supervision, making them more practical for real-world applications (see the training sketch below).
  • Multiple Computational Hops: The model performs several steps of computation over the memory before producing an output, which improves its reasoning ability (a minimal sketch follows this list).
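
For concreteness, here is a minimal, hypothetical sketch of the forward pass in Flux. The names, dimensions, and toy data are assumptions, not this repository's actual API: it bundles the paper's embedding matrices A, B, C and output matrix W into a named tuple, ties A and C across hops, and treats each memory slot as a single token id for simplicity (the paper sums bag-of-words embeddings per sentence).

```julia
using Flux  # softmax and Embedding are exported by Flux

V, d = 50, 20   # hypothetical vocabulary size and embedding dimension

# Embedding matrices from the paper: A and C embed the memory, B embeds the
# question, W maps the final state to answer scores. Tied across hops here.
m = (A = Flux.Embedding(V => d),
     B = Flux.Embedding(V => d),
     C = Flux.Embedding(V => d),
     W = Dense(d => V))

# Forward pass: the query u attends over the memory once per hop,
# reads out a weighted sum of output embeddings, and updates itself.
function forward(m, story, question; hops = 3)
    u = vec(sum(m.B(question), dims = 2))  # query vector: summed word embeddings
    for _ in 1:hops
        mem_in  = m.A(story)         # d × n memory (input) embeddings
        mem_out = m.C(story)         # d × n memory (output) embeddings
        p = softmax(mem_in' * u)     # attention weights over the n memory slots
        u = u + mem_out * p          # read from memory, update the query
    end
    return softmax(m.W(u))           # distribution over the answer vocabulary
end

story    = [1, 4, 9, 2]   # toy memory: one token id per slot
question = [3, 7]         # toy question token ids
ŷ = forward(m, story, question)
```

Each hop is one pass of soft attention: the dot products `mem_in' * u` score every memory slot against the current query, and the softmax-weighted readout `mem_out * p` is added back into the query before the next hop.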

End-To-End Memory Networks (MemN2N) were predecessors to the Transformer architecture.
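
Because every operation in the sketch above is differentiable, the "end-to-end" training mentioned earlier reduces to ordinary gradient descent on a cross-entropy loss. A hedged sketch continuing the same toy setup (the target answer id and hyperparameters are made up):

```julia
using Flux: crossentropy, onehot

answer = 9                          # hypothetical target token id
opt = Flux.setup(Adam(1e-2), m)     # optimiser state for all four layers

loss(m) = crossentropy(forward(m, story, question), onehot(answer, 1:V))

for epoch in 1:100
    g = Flux.gradient(loss, m)[1]   # gradients w.r.t. A, B, C, and W jointly
    Flux.update!(opt, m, g)         # in-place parameter update
end
```

No per-hop supervision is needed: the loss at the final answer distribution propagates back through all hops and all embedding matrices at once.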



Languages

Language: Julia 100.0%