stanfordmlgroup / ngboost

Natural Gradient Boosting for Probabilistic Prediction

Relation to mean-field variational inference.

trivialfis opened this issue

Hi, this is a question about the relationship between natural gradient boosting and variational inference. In the most general sense, any optimization method that approximates a density can be considered variational inference (rather than strictly the approximation of a posterior). In practice, most VI methods minimize the KL divergence indirectly, by maximizing a surrogate objective, the evidence lower bound (ELBO). NGBoost looks quite similar to mean-field VI, which assumes the latent variables are mutually independent, but I'm struggling to link the two methods formally. It would be great if anyone here has looked into this before and could share some insights. Thank you in advance!
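
For concreteness, here is the standard setup I have in mind (the notation is mine, not taken from the NGBoost paper): mean-field VI maximizes the ELBO over a factorized family, which is equivalent to minimizing the KL divergence to the posterior, while NGBoost, as I understand it, takes natural-gradient steps on a proper scoring rule $\mathcal{S}$, i.e. ordinary gradients preconditioned by the inverse of the induced metric (the Fisher information in the case of the log score):

$$
\log p(x) \;=\; \underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x,z) - \log q(z)\right]}_{\mathrm{ELBO}(q)} \;+\; \mathrm{KL}\!\left(q(z)\,\|\,p(z \mid x)\right),
\qquad q(z) = \prod_{j} q_j(z_j),
$$

$$
\tilde{\nabla}_\theta\, \mathcal{S}(\theta, y) \;=\; \mathcal{I}_{\mathcal{S}}(\theta)^{-1}\, \nabla_\theta\, \mathcal{S}(\theta, y).
$$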