Cortex is a minimal deep training/evaluation engine built on PyTorch. It follows a simple deep learning coding paradigm, yet is flexible enough to reproduce algorithms from a variety of areas.
(Currently we have released a metric learning benchmark written using `cortex`. We're going to release more later after thorough tests.)
- It supports distributed and mixed-precision GPU training. Simply set `distributed=True` and/or `mixed_precision=True` to enable them.
- It simplifies the development of deep learning models. `cortex` handles most routine work, such as logging, checkpointing, training/evaluation looping, device allocation, optimizing, and scheduling, while you only need to implement 5 (sometimes 6) functions to define a model:
    - `build_dataloader`
    - `build_training`
    - `train_step`
    - `val_step`
    - `test_step`
    - `optimize_step` (optional; e.g., for GANs and meta learners)
- It reproduces a large number of deep learning algorithms using unified interfaces. Currently we have released the metric learning benchmark. We'll release more benchmarks on detection, tracking, GANs, etc. later after thorough tests.
- It is lightweight. The (training/evaluation) engine contains only two major functions: one calls one or more processes defined in the other.
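As a rough sketch of the five-function interface listed above (this is not the actual `cortex` API; the `ToyModel` class, its internals, and the driving loop below are hypothetical stand-ins, using plain Python in place of real PyTorch modules and DataLoaders), a model and the engine loop that drives it might look like:

```python
class ToyModel:
    """Hypothetical model illustrating the five-function interface.

    Real models would build networks, optimizers, and DataLoaders here;
    this sketch fits a single weight w to the line y = 2x with SGD.
    """

    def build_dataloader(self):
        # Return (train_loader, val_loader); plain lists of (x, y)
        # pairs stand in for PyTorch DataLoaders.
        train = [(x, 2 * x) for x in range(4)]
        val = [(x, 2 * x) for x in range(4, 6)]
        return train, val

    def build_training(self):
        # Initialize parameters and hyperparameters (stand-in for
        # constructing networks, optimizers, and schedulers).
        self.w = 0.0
        self.lr = 0.1

    def train_step(self, batch):
        # One SGD step on the squared error of a single (x, y) pair.
        x, y = batch
        pred = self.w * x
        grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
        self.w -= self.lr * grad
        return (pred - y) ** 2

    def val_step(self, batch):
        x, y = batch
        return (self.w * x - y) ** 2

    def test_step(self, batch):
        return self.val_step(batch)


# A minimal engine-style loop driving the interface.
model = ToyModel()
train_loader, val_loader = model.build_dataloader()
model.build_training()
for epoch in range(50):
    for batch in train_loader:
        model.train_step(batch)
val_loss = sum(model.val_step(b) for b in val_loader) / len(val_loader)
# After training, model.w converges to 2.0 and val_loss is near zero.
```

In an engine like this, the looping, logging, and checkpointing live in the engine, so each model only declares its data, parameters, and per-batch steps.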
Check `cortex/apps` for a set of examples showing how to use `cortex` in your research.
First install PyTorch, faiss, scipy, and apex (optional, if you want to use mixed-precision training), then run:
```
git clone https://github.com/huanglianghua/cortex.git
cd cortex
pip install -e .
```