documentation request: benchmark of baselines
mschneider opened this issue
Maximilian Schneider commented
Currently there are no baseline results published. Publishing them would really help me as a developer, since they would provide a head start for extending your code base:
- I wouldn't need to spend a few days training and evaluating your code across multiple sets of hyperparameters just to understand what quality the provided training setups yield.
- It would make it easier to understand the options in this toolbox, and what the expected result of using each one is while developing a new model.
- It would help me understand the scope of this project better: which datasets/challenges it is useful for, and which ones it is not.
Are you considering providing those after an external event in the near future, e.g. after a conference submission passes review?
mariolucic commented
Hi mschneider. Since this framework supports many combinations of losses/penalties/architectures/etc., we don't provide baselines for every combination. Instead, we provide a set of working configs that we tested on GPUs and TPUs, which can serve as good starting points for further research. We are happy to add additional configs that users find work well in a specific setting.
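For context on what starting from a "working config" can look like in practice, here is a minimal, self-contained sketch of the config-driven pattern such frameworks commonly use. It assumes a gin-config setup, which is an assumption on my part, not something confirmed in this thread; the function name, binding names, and values below are hypothetical placeholders, not identifiers from this repository.

```python
# A minimal sketch, assuming the framework's configs are gin-config files.
# All names below (train, architecture, loss, penalty, the example values)
# are hypothetical placeholders, not confirmed names from this repository.
import gin


@gin.configurable
def train(architecture="dcgan", loss="non_saturating", penalty="none"):
    """Stand-in for the framework's training entry point."""
    print(f"training {architecture} with {loss} loss and {penalty} penalty")


# A tested baseline config would normally be loaded from a file, e.g.
# gin.parse_config_file("example_configs/some_baseline.gin") -- path is
# illustrative. Here the equivalent bindings are inlined so the sketch
# runs on its own:
gin.parse_config("""
train.architecture = "resnet_cifar"
train.loss = "non_saturating"
train.penalty = "wgan_gp"
""")

train()  # picks up the bindings above
```

The point of this pattern is that a researcher can take a config that is known to train well and override a single binding (say, the penalty) for a new experiment, rather than re-tuning every hyperparameter from scratch.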