💡 Let's learn how to use lightning-hydra-template!
-
PyTorch Lightning: a lightweight PyTorch wrapper for high-performance AI research. Think of it as a framework for organizing your PyTorch code.
-
Hydra: a framework for elegantly configuring complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line.
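A minimal Hydra entry point might look like the sketch below (this is not the template's actual train.py; the config_path, config_name, and the printed keys are assumptions):
# sketch of a Hydra entry point: composes configs/train.yaml with any command-line overrides
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base="1.3", config_path="configs", config_name="train.yaml")
def main(cfg: DictConfig) -> None:
    # cfg is the composed hierarchical config, e.g. cfg.trainer, cfg.model, ...
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
-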
├── configs                  <- Hydra configuration files
│   ├── callbacks               <- Callbacks configs
│   ├── datamodule              <- Datamodule configs
│   ├── debug                   <- Debugging configs
│   ├── experiment              <- Experiment configs
│   ├── extras                  <- Extra utilities configs
│   ├── hparams_search          <- Hyperparameter search configs
│   ├── hydra                   <- Hydra configs
│   ├── local                   <- Local configs
│   ├── logger                  <- Logger configs
│   ├── model                   <- Model configs
│   ├── paths                   <- Project paths configs
│   ├── trainer                 <- Trainer configs
│   │
│   ├── eval.yaml               <- Main config for evaluation
│   └── train.yaml              <- Main config for training
│
├── data                     <- Project data
│
├── logs                     <- Logs generated by hydra and lightning loggers
│
├── notebooks                <- Jupyter notebooks. Naming convention is a number (for ordering),
│                               the creator's initials, and a short `-` delimited description,
│                               e.g. `1.0-jqp-initial-data-exploration.ipynb`.
│
├── scripts                  <- Shell scripts
│
├── src                      <- Source code
│   ├── datamodules             <- Lightning datamodules
│   ├── models                  <- Lightning models
│   ├── utils                   <- Utility scripts
│   │
│   ├── eval.py                 <- Run evaluation
│   └── train.py                <- Run training
│
├── tests                    <- Tests of any kind
│
├── .env.example             <- Example of file for storing private environment variables
├── .gitignore               <- List of files ignored by git
├── .pre-commit-config.yaml  <- Configuration of pre-commit hooks for code formatting
├── Makefile                 <- Makefile with commands like `make train` or `make test`
├── pyproject.toml           <- Configuration options for testing and linting
├── requirements.txt         <- File for installing python dependencies
├── setup.py                 <- File for installing project as a package
└── README.md
-
Override any config parameter from the command line
python train.py trainer.max_epochs=20 model.optimizer.lr=1e-4
-
Add new parameters with the `+` prefix
python train.py +model.new_param="owo"
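The appended key then shows up in the composed config. A small sketch using Hydra's compose API (the config_path and the existing "model" group are assumptions):
# sketch: inspecting an appended parameter with Hydra's compose API
from hydra import compose, initialize

with initialize(version_base="1.3", config_path="configs"):
    cfg = compose(config_name="train.yaml", overrides=["+model.new_param=owo"])
print(cfg.model.new_param)  # -> "owo"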
-
Training device
python train.py trainer=gpu
-
Train with mixed precision
python train.py trainer=gpu +trainer.precision=16
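Roughly, these overrides map onto the Lightning Trainer as in the sketch below (the template actually builds the Trainer from configs/trainer):
# rough plain-Lightning equivalent of the command above (sketch only)
import pytorch_lightning as pl

trainer = pl.Trainer(accelerator="gpu", devices=1, precision=16)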
-
Train model with configs/experiment/example.yaml
python train.py experiment=example
-
Resume training from a checkpoint
python train.py ckpt_path="/path/to/ckpt/name.ckpt"
-
Evaluate a checkpoint on the test dataset
python eval.py ckpt_path="/path/to/ckpt/name.ckpt"
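Roughly, both checkpoint paths end up as the ckpt_path argument of the Lightning Trainer, as in this sketch (model and datamodule are placeholders, not the template's objects):
# sketch: resuming and evaluating from a checkpoint with the plain Trainer API
import pytorch_lightning as pl

def resume_and_evaluate(model: pl.LightningModule, datamodule: pl.LightningDataModule):
    trainer = pl.Trainer(max_epochs=20)
    # resume training from a saved checkpoint
    trainer.fit(model, datamodule=datamodule, ckpt_path="/path/to/ckpt/name.ckpt")
    # evaluate a checkpoint on the test set
    trainer.test(model, datamodule=datamodule, ckpt_path="/path/to/ckpt/name.ckpt")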
-
Create a sweep over hyperparameters
python train.py -m datamodule.batch_size=32,64,128 model.lr=0.001,0.0005
🔥 results in 6 different combinations (3 batch sizes × 2 learning rates)
-
❤️ HPO with Optuna:
python train.py -m hparams_search=mnist_optuna experiment=example
We can define everything in a single config file.
-
Execute all experiments from the configs/experiment/ folder through
python train.py -m 'experiment=glob(*)'
-
Execute with multiple seeds
python train.py -m seed=1,2,3,4,5 trainer.deterministic=True logger=csv tags=["benchmark"]
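In plain Lightning, the seed and determinism flags correspond roughly to this sketch (the template sets them from the seed and trainer configs):
# sketch: seeding and determinism in plain Lightning
import pytorch_lightning as pl

pl.seed_everything(1, workers=True)       # seed=1
trainer = pl.Trainer(deterministic=True)  # trainer.deterministic=True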
-
Write a PyTorch Lightning module as
src/models/mnist_module.py
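A heavily trimmed sketch of such a module (not the template's actual mnist_module.py; the architecture and names are made up for illustration):
# trimmed LightningModule sketch: network, training_step, optimizer
import torch
from torch import nn
import pytorch_lightning as pl

class MNISTLitModule(pl.LightningModule):
    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.criterion(self(x), y)
        self.log("train/loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)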
-
Write a PyTorch Lightning datamodule as
src/datamodules/mnist_datamodule.py
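A matching trimmed sketch of a datamodule (not the template's actual mnist_datamodule.py; splits and defaults are assumptions):
# trimmed LightningDataModule sketch: download, split, dataloaders
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor

class MNISTDataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str = "data/", batch_size: int = 64):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size

    def prepare_data(self):
        MNIST(self.data_dir, train=True, download=True)

    def setup(self, stage=None):
        dataset = MNIST(self.data_dir, train=True, transform=ToTensor())
        self.train_set, self.val_set = random_split(dataset, [55_000, 5_000])

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)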
-
Write an experiment config.
-
Run training from the command line as
python src/train.py experiment=experiment_name.yaml
-
Accuracy vs. batch size
python train.py -m logger=csv datamodule.batch_size=16,32,64,128 tags=["batch_size_exp"]
-
Logs
Logger configurations are in configs/logger.
Run training with a chosen logger:
python train.py logger=logger_name
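For example, logger=csv corresponds roughly to handing a CSVLogger to the Trainer (a sketch; the template instantiates loggers from configs/logger):
# sketch: plain-Lightning equivalent of logger=csv
import pytorch_lightning as pl
from pytorch_lightning.loggers import CSVLogger

trainer = pl.Trainer(logger=CSVLogger(save_dir="logs/"))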
-
Tests
Run all tests:
pytest
Run a specific test file:
pytest tests/test_train.py
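A minimal, hypothetical test to illustrate the style (the template's own tests exercise the training pipeline instead):
# hypothetical minimal test, e.g. tests/test_example.py (not one of the template's tests)
import torch

def test_forward_shape():
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.randn(8, 1, 28, 28)
    assert model(x).shape == (8, 10)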
-
The hyperparameter search config file is in
configs/hparams_search
-
Command line:
python train.py -m hparams_search=mnist_optuna
-
Supported frameworks: Optuna, Ax, and Nevergrad
-
The optimization_results.yaml file will be available under the logs/task_name/multirun folder.