amorehead/DIPS

Database of Interacting Protein Structures (DIPS) - Python 3 Version

Home Page: https://arxiv.org/abs/1807.01297

Database of Interacting Protein Structures (DIPS)

Released with End-to-End Learning on 3D Protein Structure for Interface Prediction (NeurIPS 2019) by Raphael J.L. Townshend, Rishi Bedi, Patricia A. Suriana, and Ron O. Dror. The SASNet training and testing code, as well as a cleaned-up version of Docking Benchmark 5 (DB5), can be downloaded here.

This repository contains processing methods for converting the raw PDB data into tfrecords containing the positive and negative interactions between all binary protein complexes in the DIPS dataset. A total of 42,826 binary protein complexes will be generated. We also generate a couple of intermediate representations that may be useful. Specifically, make_dataset outputs:

  • parsed files: pickled pandas dataframes representing each PDB
  • pair files: dill files containing interacting pairs of parsed PDBs

We also include tfrecord parsing functionality as described below.
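
If you want to inspect these intermediate files directly, they can be opened with pandas and dill. This is only a rough sketch: the paths and filenames below are hypothetical, and the exact structure of a loaded pair object is defined by the atom3 library rather than shown here.

import dill
import pandas as pd

# Hypothetical example paths; actual filenames mirror the divided PDB directory layout.
parsed_path = './data/DIPS/interim/parsed/10gs.pdb1.pkl'
pair_path = './data/DIPS/interim/pairs/10gs.pdb1.dill'

parsed_df = pd.read_pickle(parsed_path)  # one pickled pandas dataframe per parsed PDB
print(parsed_df.head())

with open(pair_path, 'rb') as f:
    pair = dill.load(f)  # an interacting pair of parsed PDBs
print(type(pair))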

Installation

To use the processing and parsing code, you can run make requirements to obtain most dependencies. To obtain the TensorFlow dependency, if you do not already have it, you can use make tensorflow (CPU version) or make tensorflow-gpu (GPU version). We recommend doing so within a virtualenv or conda environment.
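
As a quick, optional sanity check that the environment is set up (a minimal sketch, assuming the dependencies installed by the make targets above), the two main packages should import cleanly:

# Verify that atom3 and TensorFlow are importable in the active environment.
import atom3
import tensorflow as tf

print('atom3 and TensorFlow imported successfully; TF version:', tf.__version__)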

Creating the DIPS dataset

Download the raw PDB files:

rsync -rlpt -v -z --delete --port=33444 \
rsync.rcsb.org::ftp_data/biounit/coordinates/divided/ ./data/DIPS/raw/pdb

Extract the raw PDB files:

python3 data_builder/extract_raw_pdb_gz_archives.py ./data/DIPS/raw/pdb

Process the raw PDB data into associated pair files:

python3 data_builder/make_dataset.py ./data/DIPS/raw/pdb ./data/DIPS/interim

Apply the additional filtering criteria:

python3 data_builder/prune_pairs.py ./data/DIPS/interim/pairs ./data/DIPS/filters/ ./data/DIPS/interim/pairs-pruned

Apply secondary structure and carbon-alpha (CA) atom type postprocessing to the filtered pairs:

python3 data_builder/postprocess_pruned_pairs.py ./data/DIPS/raw/pdb ./data/DIPS/interim/pairs-pruned ./data/DIPS/interim/pairs-postprocessed

Process the pair files into tfrecords:

python3 src/tfrecord.py ./data/DIPS/interim/pairs-pruned ./data/DIPS/processed/tfrecords-pruned -c 8

Reprocessing DB5 dataset

The DB5 dataset is provided as a fully processed set here. If, however, you wish to regenerate it, you can apply the above steps, with some additional flags and no pruning (as DB5 is already a gold-standard set):

python3 src/make_dataset.py ./data/DB5/raw/ ./data/DB5/interim --type=db5 --unbound
python3 src/tfrecord.py ./data/DB5/interim/pairs ./data/DB5/processed/tfrecords -c 8

Using tfrecord files with a TF dataset

You will want to use the parse_tf_example function in src/tfrecord.py:

import atom3.database as db
import tensorflow as tf

from src.tfrecord import parse_tf_example

# Gather all tfrecord shards, interleave reads across them, and parse each example.
filenames = db.get_structures_filenames('./data/DIPS/processed', extension='.tfrecord')
tf_files = tf.data.Dataset.from_tensor_slices(
    tf.convert_to_tensor(filenames))
dataset = tf_files.interleave(tf.data.TFRecordDataset, 4)
dataset = dataset.map(parse_tf_example)
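
From there, the dataset behaves like any other tf.data pipeline. For example, to peek at a couple of parsed examples (their exact structure is whatever parse_tf_example yields, which is defined in src/tfrecord.py):

# Inspect a couple of parsed examples from the pipeline.
for example in dataset.take(2):
    print(example)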

License: MIT License

