uber / bayesmark

Benchmark framework to easily compare Bayesian optimization methods on real machine learning tasks

Explanation of 'visible_to_opt' and 'generalization'

Alaya-in-Matrix opened this issue

I found no explanation of `visible_to_opt` and `generalization` in the documentation. Are they some kind of linear transformation of the normalized mean score?

`visible_to_opt` is the score that the optimizer sees (e.g., CV error in an ML hyperparameter tuning context), whereas `generalization` is a related metric the optimizer does not get to see (e.g., error on a held-out test set in the same context).
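
For concreteness, here is a minimal sketch of how the two scores relate in one evaluation of a tuning loop. This is not bayesmark's internal code; the dataset, model, and `evaluate_config` helper are purely illustrative.

```python
# Illustrative only: how visible_to_opt and generalization can differ
# for a single hyperparameter configuration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def evaluate_config(C):
    """Hypothetical evaluation of one configuration (C is an SVM parameter)."""
    model = SVC(C=C)
    # visible_to_opt: CV error on the training split -- the feedback
    # the optimizer receives after suggesting this configuration.
    visible_to_opt = 1.0 - cross_val_score(model, X_train, y_train, cv=5).mean()
    # generalization: error on the held-out test set -- never shown to
    # the optimizer, only recorded for analyzing the benchmark afterwards.
    generalization = 1.0 - model.fit(X_train, y_train).score(X_test, y_test)
    return visible_to_opt, generalization

print(evaluate_config(C=1.0))
```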

Does that make sense? If so, a note can be added to the docs.

Thanks for your explanation! I think it would be nice if they were also documented.