ocp-models is the modeling codebase for the Open Catalyst Project. It provides implementations of state-of-the-art ML algorithms for catalysis that take arbitrary chemical structures as input to predict energies, forces, and positions.
The easiest way to install prerequisites is via conda. After installing conda, run the following commands to create a new environment named `ocp-models` and install dependencies.
Install `conda-merge`:

```
pip install conda-merge
```

If you're using the system `pip`, you may want to add the `--user` flag to avoid using `sudo`. Check that you can invoke `conda-merge` by running `conda-merge -h`.
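Conceptually, `conda-merge` combines the `channels` and `dependencies` lists of multiple environment YAML files into one. A minimal sketch of that idea, with plain dicts standing in for parsed YAML (this is an illustration, not the tool's actual implementation):

```python
def merge_envs(*envs):
    """Merge conda environment dicts: the name comes from the last file
    that sets one; channels and dependencies are concatenated, deduplicated."""
    merged = {"name": None, "channels": [], "dependencies": []}
    for env in envs:
        if env.get("name"):
            merged["name"] = env["name"]
        for key in ("channels", "dependencies"):
            for item in env.get(key, []):
                if item not in merged[key]:
                    merged[key].append(item)
    return merged

# Hypothetical contents of env.common.yml and env.gpu.yml after YAML parsing.
common = {"name": "ocp-models",
          "channels": ["pytorch", "conda-forge"],
          "dependencies": ["python=3.8", "pyyaml"]}
gpu = {"channels": ["pytorch"],
       "dependencies": ["cudatoolkit=10.2"]}

merged = merge_envs(common, gpu)
```

The merged dict corresponds to the `env.yml` that `conda env create -f env.yml` consumes in the steps below.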
Instructions are for PyTorch 1.8.1 and CUDA 10.2 specifically.

First, check that CUDA is in your `PATH` and `LD_LIBRARY_PATH`, e.g.:

```
$ echo $PATH | tr ':' '\n' | grep cuda
/public/apps/cuda/10.2/bin
$ echo $LD_LIBRARY_PATH | tr ':' '\n' | grep cuda
/public/apps/cuda/10.2/lib64
```

The exact paths may differ on your system.

Then install the dependencies:

```
conda-merge env.common.yml env.gpu.yml > env.yml
conda env create -f env.yml
```
Activate the conda environment with `conda activate ocp-models`.

Install this package with `pip install -e .`.

Finally, install the pre-commit hooks:

```
pre-commit install
```
Please skip the following if you completed the with-GPU installation above.

```
conda-merge env.common.yml env.cpu.yml > env.yml
conda env create -f env.yml
conda activate ocp-models
pip install -e .
pre-commit install
```
Only run the following if installing on a CPU-only machine running Mac OS X.

```
conda env create -f env.common.yml
conda activate ocp-models
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ pip install torch-cluster torch-scatter torch-sparse torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.8.0+cpu.html
pip install -e .
pre-commit install
```
Dataset download links for all tasks can be found at DATASET.md.
IS2* datasets are stored as LMDB files and are ready to be used upon download. S2EF train+val datasets require an additional preprocessing step.
For convenience, a self-contained script can be found here to download, preprocess, and organize the data directories to be readily usable by the existing configs.
For IS2*, run the script as:

```
python scripts/download_data.py --task is2re
```
For S2EF train/val, run the script as:

```
python scripts/download_data.py --task s2ef --split SPLIT_SIZE --get-edges --num-workers WORKERS --ref-energy
```
- `--split`: split size to download: `"200k"`, `"2M"`, `"20M"`, `"all"`, `"val_id"`, `"val_ood_ads"`, `"val_ood_cat"`, or `"val_ood_both"`.
- `--get-edges`: include edge information in the LMDBs (~10x storage requirement, ~3-5x slowdown); otherwise, edges are computed on the fly (larger GPU memory requirement).
- `--num-workers`: number of workers to parallelize preprocessing across.
- `--ref-energy`: use referenced energies instead of raw energies.
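These flags follow a standard `argparse`-style interface. A hypothetical sketch of how such a command line could be parsed (flag names mirror the ones above, but this is not the actual code of `scripts/download_data.py`):

```python
import argparse

def build_parser():
    # Sketch of a parser matching the documented flags; defaults are assumptions.
    parser = argparse.ArgumentParser(
        description="Download and preprocess OC20 data (illustrative sketch).")
    parser.add_argument("--task", choices=["is2re", "s2ef"], required=True)
    parser.add_argument("--split", default=None,
                        help='e.g. "200k", "2M", "20M", "all", "val_id", or "test"')
    parser.add_argument("--get-edges", action="store_true",
                        help="precompute edges into the LMDBs (more storage)")
    parser.add_argument("--num-workers", type=int, default=1,
                        help="workers to parallelize preprocessing across")
    parser.add_argument("--ref-energy", action="store_true",
                        help="store referenced energies instead of raw energies")
    return parser

# Parse a sample S2EF train/val invocation.
args = build_parser().parse_args(
    ["--task", "s2ef", "--split", "200k",
     "--get-edges", "--num-workers", "4", "--ref-energy"]
)
```

Note that `argparse` maps `--get-edges` and `--ref-energy` to the attributes `args.get_edges` and `args.ref_energy`.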
For S2EF test, run the script as:

```
python scripts/download_data.py --task s2ef --split test
```
To download and process the dataset in a directory other than your local `ocp/data` folder, add the command-line argument `--data-path`. NOTE: the baseline configs expect the data to be found in `ocp/data`; make sure you symlink your directory or modify the paths in the configs accordingly.
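The symlink approach can be scripted, e.g. from Python with `os.symlink` (the paths below are illustrative and wrapped in a temporary directory so the snippet is self-contained):

```python
import os
import tempfile

# Illustrative layout: a custom data directory elsewhere on disk, and the
# ocp/data location that the baseline configs expect.
with tempfile.TemporaryDirectory() as root:
    custom_data = os.path.join(root, "big_disk", "oc20_data")
    repo_data = os.path.join(root, "ocp", "data")
    os.makedirs(custom_data)                    # where the data actually lives
    os.makedirs(os.path.dirname(repo_data))     # the ocp/ checkout
    os.symlink(custom_data, repo_data)          # ocp/data -> custom location
    linked = (os.path.islink(repo_data)
              and os.path.realpath(repo_data) == os.path.realpath(custom_data))
```

The shell equivalent is a single `ln -s` from the custom directory to `ocp/data`.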
A detailed description of how to train and evaluate models, run ML-based relaxations, and generate EvalAI submission files can be found here.
Our evaluation server is hosted on EvalAI. Numbers (in papers, etc.) should be reported from the evaluation server.
Pretrained model weights accompanying our paper are available here.
Interactive tutorial notebooks can be found here to help familiarize oneself with various components of the repo:
- Data visualization - understanding the raw data and its contents.
- Data preprocessing - preprocessing raw ASE atoms objects to OCP graph Data objects.
- LMDB dataset creation - creating your own OCP-compatible LMDB datasets from ASE-compatible Atoms objects.
- S2EF training example - training a SchNet S2EF model, loading a trained model, and making predictions.
For all non-codebase-related questions and to keep up to date with the latest OCP announcements, please join the discussion board. All codebase-related questions and issues should be posted directly on our issues page.
- This codebase was initially forked from CGCNN by Tian Xie, but has undergone significant changes since.
- A lot of engineering ideas have been borrowed from github.com/facebookresearch/mmf.
- The DimeNet++ implementation is based on the authors' TensorFlow implementation and the DimeNet implementation in PyTorch Geometric.
If you use this codebase in your work, consider citing:

```bibtex
@article{ocp_dataset,
    author = {Chanussot*, Lowik and Das*, Abhishek and Goyal*, Siddharth and Lavril*, Thibaut and Shuaibi*, Muhammed and Riviere, Morgane and Tran, Kevin and Heras-Domingo, Javier and Ho, Caleb and Hu, Weihua and Palizhati, Aini and Sriram, Anuroop and Wood, Brandon and Yoon, Junwoong and Parikh, Devi and Zitnick, C. Lawrence and Ulissi, Zachary},
    title = {Open Catalyst 2020 (OC20) Dataset and Community Challenges},
    journal = {ACS Catalysis},
    year = {2021},
    doi = {10.1021/acscatal.0c04525},
}
```