dohlee / antiberty-pytorch

An unofficial re-implementation of AntiBERTy, an antibody-specific protein language model, in PyTorch.

(Figure: antiberty_model)

Installation

$ pip install antiberty-pytorch

Reproduction status

Number of parameters

(Figure: total parameter count)

This implementation of AntiBERTy has 25,759,769 parameters in total, which matches well with the approximately 26M parameters reported in the paper (see the figure above).
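
As a rough cross-check, the sketch below builds a BERT-style masked language model with the hyperparameters reported in the AntiBERTy paper (8 layers, hidden size 512, 8 attention heads, feed-forward size 2048) using Hugging Face Transformers. This is an independent sketch rather than the model class from this repository, and the vocabulary size and maximum sequence length are assumptions, so the exact count may differ slightly.

```python
from transformers import BertConfig, BertForMaskedLM

# Hyperparameters from the AntiBERTy paper; vocab_size and
# max_position_embeddings are assumptions and may differ from this repo.
config = BertConfig(
    vocab_size=25,
    hidden_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=2048,
    max_position_embeddings=512,
)

model = BertForMaskedLM(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # expect roughly 26M
```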

Training with 1% of the entire OAS data

I reproduced AntiBERTy training with roughly 1% of the entire OAS data (batch_size=16, mask_prob=0.15) and observed a reasonable decrease in training loss, though this was not evaluated on a validation set. The training log can be found here.

(Figure: training log)
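
For context, the snippet below is a minimal sketch of the BERT-style masked-language-modeling objective used here, with the same mask_prob=0.15. The special-token ID, vocabulary size, and sequence length are placeholder assumptions, and the full BERT recipe (which also keeps or randomizes some selected tokens) is simplified to masking only.

```python
import torch

MASK_PROB = 0.15
MASK_TOKEN_ID = 1  # placeholder; the actual special-token IDs are an assumption
VOCAB_SIZE = 25    # placeholder vocabulary size

def mask_tokens(tokens: torch.Tensor, mask_prob: float = MASK_PROB):
    """Randomly mask a fraction of tokens and return (inputs, labels)."""
    labels = tokens.clone()
    masked = torch.rand(tokens.shape) < mask_prob
    # Only masked positions contribute to the loss; -100 is ignored by
    # torch.nn.CrossEntropyLoss.
    labels[~masked] = -100
    inputs = tokens.clone()
    inputs[masked] = MASK_TOKEN_ID
    return inputs, labels

# Dummy batch matching the training setup above (batch_size=16).
tokens = torch.randint(2, VOCAB_SIZE, (16, 128))
inputs, labels = mask_tokens(tokens)
```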

Observed Antibody Sequences (OAS) dataset preparation pipeline

I wrote a Snakemake pipeline in the data directory to automate the dataset preparation process. It downloads metadata from OAS and extracts the lists of sequences. The pipeline can be run as follows:

$ cd data
$ snakemake -s download.smk -j1

NOTE: Only about 3% of the entire OAS sequences have been downloaded so far due to storage and computational costs (83M sequences, 31GB).
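
For illustration, here is a hedged sketch of the kind of extraction step the pipeline automates: reading a downloaded OAS data unit (a gzipped CSV with a metadata line followed by AIRR-style columns) and pulling out the amino-acid sequences. The file name and the sequence_alignment_aa column name are assumptions about the OAS format, not code taken from the data directory.

```python
import pandas as pd

# File name is a placeholder for a downloaded OAS data unit.
path = "oas_data_unit.csv.gz"

# Skip the metadata header line and read the sequence table.
df = pd.read_csv(path, skiprows=1, compression="gzip")

# Extract amino-acid sequences (column name is an assumption).
sequences = df["sequence_alignment_aa"].dropna().tolist()

with open("sequences.txt", "w") as f:
    f.write("\n".join(sequences))
```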

Citation

@article{ruffolo2021deciphering,
    title = {Deciphering antibody affinity maturation with language models and weakly supervised learning},
    author = {Ruffolo, Jeffrey A and Gray, Jeffrey J and Sulam, Jeremias},
    journal = {arXiv},
    year = {2021}
}


License

MIT License

