
ProtTrans



ProtTrans provides state-of-the-art pre-trained language models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using various Transformer models.

Have a look at our paper ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing for more information about our work.


ProtTrans Attention Visualization


This repository will be updated regularly with new pre-trained models for proteins as part of supporting the bioinformatics community in general, and Covid-19 research specifically, through our Accelerate SARS-CoV-2 research with transfer learning using pre-trained language models project.

Table of Contents

  • ⌛️  Models Availability
  • ⌛️  Datasets Availability
  • 🚀  Usage
  • 📊  Expected Results
  • ❤️  Community and Contributions
  • 📫  Have a question?
  • 🤝  Found a bug?
  • ✅  Requirements
  • 🤵  Team
  • 💰  Sponsors
  • 📘  License
  • ✏️  Citation

⌛️  Models Availability

| Model | Hugging Face | Zenodo |
| --- | --- | --- |
| ProtT5-XL-UniRef50 | Download | Download |
| ProtT5-XL-BFD | Download | Download |
| ProtT5-XXL-UniRef50 | Download | Download |
| ProtT5-XXL-BFD | Download | Download |
| ProtBert-BFD | Download | Download |
| ProtBert | Download | Download |
| ProtAlbert | Download | Download |
| ProtXLNet | Download | Download |
| ProtElectra-Generator-BFD | Download | Download |
| ProtElectra-Discriminator-BFD | Download | Download |
| ProtElectra-Generator | coming soon | coming soon |
| ProtElectra-Discriminator | coming soon | coming soon |
| ProtTXL | coming soon | coming soon |
| ProtTXL-BFD | coming soon | coming soon |

⌛️  Datasets Availability

| Dataset | Dropbox |
| --- | --- |
| NEW364 | Download |
| Netsurfp2 | Download |
| CASP12 | Download |
| CB513 | Download |
| TS115 | Download |
| DeepLoc Train | Download |
| DeepLoc Test | Download |

🚀  Usage

How to use ProtTrans:

  • 🧬  Feature Extraction (FE):
    Please check: Embedding Section (see the sketch after this list). More information coming soon.

  • ⚗️  Protein Sequences Generation:
    Please check: Generate Section. More information coming soon.
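
As an illustration of feature extraction, the minimal sketch below extracts per-residue embeddings with ProtBert through the Hugging Face Transformers library. The model name Rostlab/prot_bert and the toy sequence are examples only; the Embedding Section notebooks contain the full pipeline.

```python
# Minimal sketch: per-residue embeddings with ProtBert via Hugging Face Transformers.
# The model name and toy sequence are illustrative; see the Embedding Section notebooks
# for the complete pipeline.
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")
model.eval()

# ProtBert expects space-separated amino acids, with rare residues (U, Z, O, B) mapped to X.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
sequence = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (1, seq_len + 2 special tokens, 1024)

print(embeddings.shape)
```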

📊  Expected Results

  • 🧬  Secondary Structure Prediction (Q3):
| Model | CASP12 | TS115 | CB513 |
| --- | --- | --- | --- |
| ProtT5-XL-UniRef50 | 81 | 87 | 86 |
| ProtT5-XL-BFD | 77 | 85 | 84 |
| ProtBert-BFD | 76 | 84 | 83 |
| ProtBert | 75 | 83 | 81 |
| ProtAlbert | 74 | 82 | 79 |
| ProtXLNet | 73 | 81 | 78 |
| ProtElectra-Generator | 73 | 78 | 76 |
| ProtElectra-Discriminator | 74 | 81 | 79 |
| ProtTXL | 71 | 76 | 74 |
| ProtTXL-BFD | 72 | 75 | 77 |

  • 🧬  Secondary Structure Prediction (Q8):
| Model | CASP12 | TS115 | CB513 |
| --- | --- | --- | --- |
| ProtT5-XL-UniRef50 | 70 | 77 | 74 |
| ProtT5-XL-BFD | 66 | 74 | 71 |
| ProtBert-BFD | 65 | 73 | 70 |
| ProtBert | 63 | 72 | 66 |
| ProtAlbert | 62 | 70 | 65 |
| ProtXLNet | 62 | 69 | 63 |
| ProtElectra-Generator | 60 | 66 | 61 |
| ProtElectra-Discriminator | 62 | 69 | 65 |
| ProtTXL | 59 | 64 | 59 |
| ProtTXL-BFD | 60 | 65 | 60 |

  • 🧬  Membrane-bound vs Water-soluble (Q2):
| Model | DeepLoc |
| --- | --- |
| ProtT5-XL-UniRef50 | 91 |
| ProtT5-XL-BFD | 91 |
| ProtBert-BFD | 89 |
| ProtBert | 89 |
| ProtAlbert | 88 |
| ProtXLNet | 87 |
| ProtElectra-Generator | 85 |
| ProtElectra-Discriminator | 86 |
| ProtTXL | 85 |
| ProtTXL-BFD | 86 |

  • 🧬  Subcellular Localization (Q10):
| Model | DeepLoc |
| --- | --- |
| ProtT5-XL-UniRef50 | 81 |
| ProtT5-XL-BFD | 77 |
| ProtBert-BFD | 74 |
| ProtBert | 74 |
| ProtAlbert | 74 |
| ProtXLNet | 68 |
| ProtElectra-Generator | 59 |
| ProtElectra-Discriminator | 70 |
| ProtTXL | 66 |
| ProtTXL-BFD | 65 |

❤️  Community and Contributions

The ProtTrans project is an open-source project supported by various partner companies and research institutions. We are committed to sharing all our pre-trained models and knowledge. We would be more than happy if you could help us by sharing new pre-trained models, fixing bugs, proposing new features, improving our documentation, spreading the word, or supporting our project.

📫  Have a question?

We are happy to hear your questions on the ProtTrans issues page! If you have a private question or want to cooperate with us, you can always reach out to us directly via our RostLab email.

🤝  Found a bug?

Feel free to file a new issue with a descriptive title and description on the ProtTrans repository. If you have already found a solution to your problem, we would love to review your pull request!

✅  Requirements

For protein feature extraction or fine-tuning of our pre-trained models, PyTorch and the Transformers library from Hugging Face are needed. For model visualization, you need to install the BertViz library.
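
As a quick sanity check of your environment (a minimal sketch; the printed versions depend on your installation), you can verify that the required libraries import correctly:

```python
# Verify that PyTorch and Transformers are importable; versions depend on your environment.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```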

🤵  Team

  • Technical University of Munich:
Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rehawi, Burkhard Rost
  • Med AI Technology:
Yu Wang
  • Google:
Llion Jones
  • Nvidia:
Tom Gibbs, Tamas Feher, Christoph Angerer
  • Seoul National University:
Martin Steinegger
  • ORNL:
Debsindhu Bhowmik

💰  Sponsors

Nvidia, Google, ORNL, Software Campus

📘  License

The ProtTrans pre-trained models are released under the terms of the Academic Free License v3.0.

✏️  Citation

If you use this code or our pretrained models for your publication, please cite the original paper:

@article {Elnaggar2020.07.12.199554,
	author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and Bhowmik, Debsindhu and Rost, Burkhard},
	title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
	elocation-id = {2020.07.12.199554},
	year = {2021},
	doi = {10.1101/2020.07.12.199554},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models taken from NLP. These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, Albert, Electra, T5) on data from UniRef and BFD containing up to 393 billion amino acids. The LMs were trained on the Summit supercomputer using 5616 GPUs and TPU Pod up-to 1024 cores. Dimensionality reduction revealed that the raw protein LM-embeddings from unlabeled data captured some biophysical features of protein sequences. We validated the advantage of using the embeddings as exclusive input for several subsequent tasks. The first was a per-residue prediction of protein secondary structure (3-state accuracy Q3=81\%-87\%); the second were per-protein predictions of protein sub-cellular localization (ten-state accuracy: Q10=81\%) and membrane vs. water-soluble (2-state accuracy Q2=91\%). For the per-residue predictions the transfer of the most informative embeddings (ProtT5) for the first time outperformed the state-of-the-art without using evolutionary information thereby bypassing expensive database searches. Taken together, the results implied that protein LMs learned some of the grammar of the language of life. To facilitate future work, we released our models at \<a href="https://github.com/agemagician/ProtTrans"\>https://github.com/agemagician/ProtTrans\</a\>.Competing Interest StatementThe authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2021/05/04/2020.07.12.199554},
	eprint = {https://www.biorxiv.org/content/early/2021/05/04/2020.07.12.199554.full.pdf},
	journal = {bioRxiv}
}
