
factored-attention

This repository contains code for reproducing results in our paper Interpreting Potts and Transformer Protein Models Through the Lens of Simplified Attention (https://psb.stanford.edu/psb-online/proceedings/psb22/bhattacharya.pdf); we also host the paper's Appendix in this repo. The code is built entirely on Mogwai, a small library for MRF models of protein families. If you wish to use our Potts or attention implementations for your own exploration, it is easier to use Mogwai directly. If you have questions, feel free to contact us or open an issue!
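For orientation, the sketch below illustrates the factored attention parameterization the paper studies: pairwise couplings between positions are factored through attention heads, with a position-position attention map per head and a per-head amino-acid interaction matrix. This is a minimal PyTorch illustration, not the repository's implementation (that lives in Mogwai); the class name, initialization scale, and omission of details such as coupling symmetrization are all simplifications.

# Minimal sketch of factored attention couplings; NOT the repo's implementation.
# Each head h has an attention map A_h = softmax(Q_h K_h^T / sqrt(d)) over
# positions, shared across amino acids, plus an amino-acid interaction matrix
# V_h of shape (vocab, vocab).
import torch
import torch.nn as nn

class FactoredAttentionCouplings(nn.Module):
    def __init__(self, seq_len, vocab_size=20, num_heads=256, head_size=32):
        super().__init__()
        self.head_size = head_size
        # Illustrative initialization scale; not taken from the codebase.
        self.query = nn.Parameter(0.01 * torch.randn(num_heads, seq_len, head_size))
        self.key = nn.Parameter(0.01 * torch.randn(num_heads, seq_len, head_size))
        self.value = nn.Parameter(0.01 * torch.randn(num_heads, vocab_size, vocab_size))

    def forward(self):
        # attn[h, i, j]: weight with which position i attends to position j in head h.
        logits = torch.einsum("hid,hjd->hij", self.query, self.key)
        attn = torch.softmax(logits / self.head_size ** 0.5, dim=-1)
        # couplings[i, j, a, b]: Potts-style coupling between amino acid a at
        # position i and amino acid b at position j, summed over heads.
        return torch.einsum("hij,hab->ijab", attn, self.value)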

Installing

After cloning, install Mogwai and the necessary dependencies with:

$ make build
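To confirm the build succeeded, a quick import check like the following can help. This assumes the installed package is importable as mogwai (matching the repository name), which is an assumption on our part rather than something stated here.

# Sanity check (assumption: the package import name is `mogwai`, matching
# the repository name). Prints where the package was installed from.
import mogwai
print(mogwai.__file__)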

Updating Mogwai Submodule

Any time you pull, be sure to update the Mogwai submodule as well:

$ git pull
$ make

Launching a training run

Once you have set up your environment, run:

python train.py --model=factored_attention --attention_head_size=32 --batch_size=128 --l2_coeff=0.001 --learning_rate=0.005 --max_steps=5000 --num_attention_heads=256 --optimizer=adam --pdb=3er7_1_A
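To reproduce results across several protein families, a small wrapper like the sketch below can repeat the invocation above once per PDB identifier. The flag values mirror the example command; the wrapper itself and the FAMILIES list are illustrative, not part of the repository.

# Hypothetical convenience wrapper: repeat the command above over families.
import subprocess

FAMILIES = ["3er7_1_A"]  # extend with additional PDB identifiers

for pdb in FAMILIES:
    subprocess.run(
        [
            "python", "train.py",
            "--model=factored_attention",
            "--attention_head_size=32",
            "--batch_size=128",
            "--l2_coeff=0.001",
            "--learning_rate=0.005",
            "--max_steps=5000",
            "--num_attention_heads=256",
            "--optimizer=adam",
            f"--pdb={pdb}",
        ],
        check=True,  # stop if any run fails
    )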


License

BSD 3-Clause "New" or "Revised" License

