labmlai / annotated_deep_learning_paper_implementations

🧑‍🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

Home Page: https://nn.labml.ai

Request for Mnemosyne Paper Implementation

TruongNhanNguyen opened this issue

Title

Request for Implementation of Mnemosyne: Learning to Train Transformers with Transformers in PyTorch

Description

I would like to request an implementation of the "Mnemosyne: Learning to Train Transformers with Transformers" paper in PyTorch. The paper proposes training Transformers with a learnable optimizer that is itself a Transformer, and reports promising results on several benchmarks. The authors and a link to the paper are listed below:

  • Paper name: Mnemosyne: Learning to Train Transformers with Transformers.
  • Authors: Deepali Jain, Krzysztof Marcin Choromanski, Sumeet Singh, Vikas Sindhwani,
    Tingnan Zhang, Jie Tan, Avinava Dubey.
  • Paper link: https://arxiv.org/abs/2302.01128.
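
To make the request concrete, here is a minimal sketch of the learned-optimizer idea the paper builds on: a small network reads per-coordinate gradient features of a trainee model and emits its parameter updates. This is an illustrative sketch under stated assumptions, not the paper's architecture (Mnemosyne uses a compact Performer-based attention model); the class names, feature choice, and hyper-parameters below are all hypothetical.

```python
# Illustrative sketch of a learnable optimizer (NOT the Mnemosyne
# architecture): an MLP maps per-coordinate [gradient, momentum]
# features to coordinate-wise parameter updates for a trainee model.
# All names and hyper-parameters here are assumptions for exposition.

import torch
import torch.nn as nn


class LearnedOptimizer(nn.Module):
    """Maps per-coordinate gradient features to parameter updates."""

    def __init__(self, feature_dim: int = 2, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, grad: torch.Tensor, momentum: torch.Tensor) -> torch.Tensor:
        # Stack per-coordinate features: shape (num_params, 2).
        feats = torch.stack([grad.flatten(), momentum.flatten()], dim=-1)
        update = self.net(feats).squeeze(-1)
        return update.view_as(grad)


def apply_learned_step(trainee: nn.Module, opt_net: LearnedOptimizer,
                       momenta: dict, beta: float = 0.9, lr: float = 1e-3):
    """One inner optimization step of the trainee, driven by the learned optimizer."""
    with torch.no_grad():
        for name, p in trainee.named_parameters():
            if p.grad is None:
                continue
            # Maintain an exponential moving average of gradients per parameter.
            m = momenta.setdefault(name, torch.zeros_like(p))
            m.mul_(beta).add_(p.grad, alpha=1 - beta)
            # Apply the update proposed by the optimizer network.
            p.add_(opt_net(p.grad, m), alpha=-lr)
```

In meta-training one would unroll several such inner steps and backpropagate a trainee-loss meta-objective through the optimizer network; the sketch above shows only the inner update.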

I believe that a PyTorch implementation of this paper would be highly beneficial for the community: it would allow researchers and practitioners to experiment with the method and potentially improve the training of their Transformer models. It would also help this repository stay up-to-date with the latest research in the field.

I have linked the paper above, along with some additional resources that could be helpful for the implementation. Thank you for your consideration; I look forward to your response.

Best regards,
Nguyen Truong Nhan
