newremagine
New experiences, replay, and imagination, titrated during training.
introduction
In this library we are given a budget of num_episodes
with which to learn a model. Each episode can be spent on one of three options:
- Sample new data
- Replay past data
- Imagine new data
We assume that:
- We have a finite amount of training data.
- The test data comes from the same distribution as the training data.
- We want the model to perform well on unseen (test) data.
So, what is the best way to divide up our episodes? Should we only sample new data? Should we replay past data often? Should we often use imagination as augmentation? What is the best mix? Answering these questions is our goal here.
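As a minimal sketch of the idea (not the newremagine API; the function name and mixing probabilities here are hypothetical), the episode budget can be divided among the three options by drawing each episode's action from a fixed mix:

```python
import random

# Hypothetical illustration, NOT the library's implementation:
# spend a budget of episodes on sampling, replay, or imagination,
# drawing each episode's action from fixed mixing probabilities.
def allocate_episodes(num_episodes, p_sample=0.5, p_replay=0.3,
                      p_imagine=0.2, seed=0):
    """Return a per-episode schedule of actions drawn from a fixed mix."""
    assert abs(p_sample + p_replay + p_imagine - 1.0) < 1e-9
    rng = random.Random(seed)  # seeded for reproducibility
    actions = ["sample", "replay", "imagine"]
    weights = [p_sample, p_replay, p_imagine]
    return [rng.choices(actions, weights=weights)[0]
            for _ in range(num_episodes)]

schedule = allocate_episodes(10)
print(schedule)
```

Varying the mixing probabilities (or making them change over training) is one way to frame the "what is the best mix?" question as an experiment.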
install
git clone https://github.com/CoAxLab/newremagine
pip install -e newremagine
dependencies
- python >= 3.6
- torch >= 1.5
- standard anaconda
usage
See usage.ipynb