
MAT

The official implementation of the Molecule Attention Transformer (see the arXiv paper).

Figure: MAT architecture.

Code

  • EXAMPLE.ipynb, a Jupyter notebook showing how to load pretrained weights into MAT (see the sketch after this list),
  • transformer.py, the file with the MAT class implementation,
  • utils.py, the file with utility functions.
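
As a quick orientation, below is a minimal sketch of constructing a MAT model from transformer.py. The `make_model` factory name and all hyperparameter names and values here are assumptions for illustration; EXAMPLE.ipynb shows the exact call matching the released weights.

```python
from transformer import make_model  # model factory assumed to live in transformer.py

# All hyperparameter names and values below are illustrative assumptions;
# see EXAMPLE.ipynb for the configuration matching the released checkpoint.
model = make_model(
    d_atom=28,              # atom feature vector size (assumed)
    d_model=1024,           # hidden dimension (assumed)
    N=8,                    # number of encoder layers (assumed)
    h=16,                   # number of attention heads (assumed)
    lambda_attention=0.33,  # mixing weight for self-attention (assumed)
    lambda_distance=0.33,   # mixing weight for the distance matrix (assumed)
)
print(model)
```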

More functionality will be available soon!

Pretrained weights

Pretrained weights are available here.
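
Below is a hedged sketch of loading the downloaded checkpoint into a freshly built model. The file name is a placeholder, and the strict=False key handling is an assumption (the checkpoint may carry keys, such as a pretraining head, that a fresh model lacks); EXAMPLE.ipynb documents the exact procedure.

```python
import torch
from transformer import make_model  # assumed factory, as in the sketch above

model = make_model(d_atom=28)  # d_atom=28 is an assumption; match EXAMPLE.ipynb

# 'pretrained_weights.pt' is a placeholder for the downloaded checkpoint.
state_dict = torch.load('pretrained_weights.pt', map_location='cpu')

# strict=False tolerates keys that differ between the checkpoint and a
# freshly built model (e.g. a pretraining output head).
model.load_state_dict(state_dict, strict=False)
model.eval()  # inference mode
```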

Results

In this section we present the average rank across the 7 datasets from our benchmark.

  • Results for a hyperparameter search budget of 500 combinations.

  • Results for a hyperparameter search budget of 150 combinations.

  • Results for the pretrained model.

Requirements

  • PyTorch 1.4

Acknowledgments

The Transformer implementation is inspired by The Annotated Transformer.


License

MIT License

