This is a package to test the performance of different experiment tracking tools: TensorBoard, Weights and Biases, Neptune, Comet, and ClearML.
Make sure you have Python installed:
1. Install the package with pip in your terminal:
```
pip install git+https://github.com/AlexandreSajus/Experiment-Tracking-Benchmark.git
```
2. Run the benchmark with the following command:
```
python -m trackbench <tracking library> <training steps>
```
Currently, the only supported tracking library is tensorboard.
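For example, the following runs the benchmark with TensorBoard (the step count of 10000 is just an illustration):

```
python -m trackbench tensorboard 10000
```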
This will train DQN and PPO on CartPole and print the URL of the experiment tracking website in the terminal.
For example, for tensorboard, the output will be:
```
Training... (this might take some time)
Training DQN on CartPole...
Training PPO on CartPole...
TensorBoard launched at http://localhost:6006/
Press Ctrl+C to stop TensorBoard
```
You will then be able to access the experiment tracking website at http://localhost:6006/, which will show the result curves.
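As an illustration of what a training run like this can look like, here is a minimal sketch of training DQN on CartPole with TensorBoard logging using stable-baselines3. This is not the package's actual code, and the log directory name is an assumption:

```python
from stable_baselines3 import DQN

# Illustrative sketch only; the benchmark's real implementation may differ.
# tensorboard_log writes TensorBoard event files to the given directory.
model = DQN("MlpPolicy", "CartPole-v1", tensorboard_log="./tb_logs", verbose=0)
model.learn(total_timesteps=10_000)
```

Pointing TensorBoard at the log directory (`tensorboard --logdir ./tb_logs`) then shows the training curves.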
TensorBoard creates a local webpage with curves of the training process. It is supported on many platforms and is easy to get started with, but it is not very customizable.
✅ Advantages:
- Easy to start
- Supported on many platforms
❌ Disadvantages:
- Difficult to customize
- Limited feature set
- Basic interface
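To illustrate the low barrier to entry, here is a minimal sketch of logging a custom scalar to TensorBoard with PyTorch's SummaryWriter (the run name and logged values are made up for illustration):

```python
from torch.utils.tensorboard import SummaryWriter

# Hypothetical run name; any directory under "runs/" works
writer = SummaryWriter("runs/cartpole_demo")

for step in range(100):
    fake_reward = step * 0.5  # placeholder value for illustration
    writer.add_scalar("reward", fake_reward, step)

writer.close()
# View with: tensorboard --logdir runs
```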
Weights and Biases records a lot of data about the training process and creates a webpage to visualize it. This allows you to build customizable dashboards to visualize and analyze training, as well as hosted reports that you can share with colleagues.
✅ Advantages:
- Many features (dashboards, reports, video, audio, ...)
- Very customizable
- Very ergonomic interface
- Supported everywhere with detailed documentation
❌ Disadvantages:
- Requires an account and an internet connection
- Paid when working as a team
- Setup is more complicated and takes time to learn
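For comparison with the TensorBoard sketch above, here is a minimal sketch of logging the same scalar to Weights and Biases (the project name is hypothetical, and an account plus `wandb login` are required first):

```python
import wandb

# Hypothetical project name; requires an account and `wandb login`
run = wandb.init(project="trackbench-demo", config={"algo": "DQN", "env": "CartPole"})

for step in range(100):
    fake_reward = step * 0.5  # placeholder value for illustration
    wandb.log({"reward": fake_reward}, step=step)

run.finish()
```

The logged run then appears in the hosted wandb dashboard, where it can be added to reports and shared.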