- TensorBoardX / wandb support
- Background generator is used for data loading (it prefetches batches in a background thread; a wrapper sketch follows this list)
  - On Windows, the background generator is not supported, so if an error occurs, set `use_background_generator` to `false` in the config
- Training state and network checkpoint saving and loading
  - The training state includes not only the network weights but also the optimizer state, step, and epoch
  - A checkpoint includes only the network weights; this can be used for inference (a saving sketch follows this list)
- Distributed Learning using Distributed Data Parallel is supported
- Config with yaml file / easy dot-style access to config
- Code lint / CI
- Code testing with pytest
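
Below is a minimal sketch of how a background generator is typically hooked into a `DataLoader`, assuming the `prefetch_generator` package; the `DataLoaderX` wrapper name is illustrative, not necessarily the template's own code:

```python
from torch.utils.data import DataLoader
from prefetch_generator import BackgroundGenerator  # assumed dependency

class DataLoaderX(DataLoader):
    """DataLoader whose iterator prefetches batches in a background thread."""

    def __iter__(self):
        # Wrap the normal iterator so the next batch loads while the GPU computes
        return BackgroundGenerator(super().__iter__())
```

Setting `use_background_generator` to `false` would fall back to the plain `DataLoader` iterator, which is the safe path on Windows.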
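
The training state / checkpoint split could be implemented as in the sketch below; the dictionary keys are illustrative assumptions, not the template's actual schema:

```python
import torch

def save_training_state(path, model, optimizer, step, epoch):
    # Training state: everything needed to resume training where it stopped
    torch.save(
        {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "step": step,
            "epoch": epoch,
        },
        path,
    )

def save_checkpoint(path, model):
    # Checkpoint: network weights only; sufficient for inference
    torch.save(model.state_dict(), path)
```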
- `config` dir: folder for config files
- `dataset` dir: dataloader and dataset code live here. Also, put datasets in the `meta` dir
- `model` dir:
  - `model.py` is for wrapping the network architecture
  - `model_arch.py` is for coding the network architecture
- `tests` dir: folder for `pytest` test code. You can check the flow of tensors through your network by adapting `tests/model/net_arch_test.py`: just copy & paste the `Net_arch.forward` method into `net_arch_test.py` and add `assert` statements to check the tensors (a sketch follows this list)
- `utils` dir:
  - `train_model.py` and `test_model.py` are for training and testing the model once
  - `utils.py` is for utilities: random seed setting, dot-access hyperparameters, getting the commit hash, etc. live here
  - `writer.py` is for writing logs to TensorBoard / wandb
- `trainer.py` file: this sets everything up and iterates over epochs
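
A self-contained sketch of that kind of shape test; the two-layer network below is a placeholder standing in for your own `Net_arch`, not the template's code:

```python
import torch
import torch.nn as nn

class Net_arch(nn.Module):  # placeholder architecture for illustration
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(8, 1, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.conv1(x)  # (B, 3, H, W) -> (B, 8, H, W)
        x = self.conv2(x)  # (B, 8, H, W) -> (B, 1, H, W)
        return x

def test_net_arch():
    net = Net_arch()
    x = torch.rand(2, 3, 64, 64)
    # Mirror the body of Net_arch.forward and assert the shape after each stage
    x = net.conv1(x)
    assert x.shape == (2, 8, 64, 64)
    x = net.conv2(x)
    assert x.shape == (2, 1, 64, 64)
```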
- Python 3 (3.6, 3.7, and 3.8 are tested)
- Write the PyTorch version you want into `requirements.txt` (see https://pytorch.org/get-started/), then run `pip install -r requirements.txt`
- Config is written in a YAML file (default: `config/default.yaml`). A sketch of dot-style access to it follows this list.
- `data` field
  - Configs for the Dataloader.
  - `train_dir` / `test_dir` are globbed with `file_format` for the Dataloader.
  - If `divide_dataset_per_gpu` is true, the original dataset is divided into sub-datasets, one for each GPU; this means the size of the original dataset should be a multiple of the number of GPUs in use (e.g., 1000 samples across 4 GPUs gives 250 samples per GPU). If this option is false, the dataset is not divided, but the epoch count increases in multiples of the number of GPUs.
- `train` / `test` field
  - Configs for training options.
  - `random_seed` sets the Python, NumPy, and PyTorch random seeds.
  - `num_epoch` is the epoch at which training ends.
  - `optimizer` selects the optimizer. Only the Adam optimizer is supported for now.
  - `dist` configures Distributed Data Parallel: DDP is not used when `gpus` is 0, and all GPUs are used when `gpus` is -1.
- `model` field
  - Configs for the network architecture and model options such as device.
  - You can add configs in YAML format to configure your network.
- `log` field
  - Configs for logging, including TensorBoard / wandb logging.
  - `name` is the name of the training run.
  - `summary_interval` and `checkpoint_interval` are the intervals, in steps and epochs respectively, between training log writes and checkpoint saves.
  - Checkpoints and logs are saved under `chkpt_dir/name` and `log_dir/name`. TensorBoard logs are saved under `log_dir/name/tensorboard`.
- `load` field
  - Loading from the wandb server is supported.
  - `wandb_load_path` is the `Run path` shown in the overview of the run. If you don't want to use wandb loading, this field should be `~`.
  - `network_chkpt_path` is the path to the network checkpoint file. If using wandb loading, this field should be the checkpoint file name of the wandb run.
  - `resume_state_path` is the path to the training state file. If using wandb loading, this field should be the training state file name of the wandb run.
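
A minimal sketch of the dot-style config access mentioned above, assuming a plain PyYAML load; the template's own helper in `utils.py` may differ:

```python
import yaml  # pip install pyyaml

class DotDict(dict):
    """dict subclass that allows recursive attribute-style (dot) access."""

    def __getattr__(self, key):
        try:
            value = self[key]
        except KeyError:
            raise AttributeError(key)
        # Wrap nested dicts so chained access like cfg.log.name also works
        return DotDict(value) if isinstance(value, dict) else value

with open("config/default.yaml") as f:
    cfg = DotDict(yaml.safe_load(f))

# The fields described above become dot-accessible, e.g.:
# cfg.train.random_seed, cfg.log.summary_interval, cfg.log.name
```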
- `pip install -r requirements-dev.txt` to install the development dependencies (this requires Python 3.6 or above because of black)
- `pre-commit install` to add pre-commit to the git hooks
```shell
python trainer.py -c config/path/to/file -n training_name
```

- If the training name is specified in the config, you can omit the training name from the command-line arguments