stanfordmlgroup / ngboost

Natural Gradient Boosting for Probabilistic Prediction


Does NGBoost work? Evaluating NGBoost against key criteria for good probabilistic prediction

StatMixedML opened this issue · comments

I found this article interesting: Does NGBoost work? Evaluating NGBoost against key criteria for good probabilistic prediction, in which the author compares the performance of NGBoost to conformal prediction.

I have already replied to one of the tweets: https://twitter.com/predict_addict/status/1588603934666805248

@alejandroschuler I would be interested in hearing your opinion on this.

It's a little more complicated than "does ngboost work", but overall I think he makes a very important point (which I've brought up as well in #298 (comment)): if calibrated prediction intervals are all you need, then conformal inference is a simple and perfectly good approach.

The complications are as follows:

  1. conformal inference gives prediction intervals that are calibrated marginally, but not necessarily conditionally (i.e. calibration might be bad for any stratum of the covariates). There are good methods that help with this, and you can get prediction intervals that are of different widths for different people, but as far as I know there are still some relatively ad-hoc choices you have to make to use conformal for that purpose as of today. There's lots of great work in this area, so maybe things will settle/standardize soon. I'm not claiming ngboost gives intervals that are perfectly conditionally calibrated (or even marginally) in every situation, but it certainly should if the conditional density is close enough to well-specified and if there's enough data.
  2. building prediction intervals is not the same thing as doing conditional density estimation, which is what ngboost does. If you have the latter you get the former "for free", but not the other way around. If you need a conditional density for other purposes (e.g. as a generalized propensity score for inference of causal effects) then conformal can't help you there. Again, ngboost may or may not suffice for this purpose, depending on whether or not the density is well-specified, on what conditions need to be met for unbiased inference, etc. But it does tackle the right problem, and in a way that is accessible, fast, and intelligible.
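To make the interval-vs-density distinction concrete, here's a minimal split-conformal sketch in plain numpy. Everything here is made up for illustration (toy data, an identity function standing in for a fitted regressor): it produces marginally calibrated intervals, but no density, and the interval width is the same for every test point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with heteroscedastic noise (purely illustrative).
x = rng.uniform(0, 10, size=2000)
y = x + rng.normal(0, 0.5 + 0.1 * x)

x_cal, y_cal = x[:1000], y[:1000]    # calibration split
x_test, y_test = x[1000:], y[1000:]  # test split

def predict(x):
    # Stand-in for any fitted point predictor.
    return x

# Absolute-residual conformity scores on the calibration split.
scores = np.abs(y_cal - predict(x_cal))

# Score quantile giving ~90% marginal coverage (with finite-sample correction).
n = len(scores)
q = np.quantile(scores, np.ceil(0.9 * (n + 1)) / n, method="higher")

# The same half-width q everywhere: marginal coverage, no conditional adaptivity.
lo, hi = predict(x_test) - q, predict(x_test) + q
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(round(coverage, 2))  # close to the 0.90 target
```

Note that the coverage guarantee holds on average over all test points, even though the noise scale grows with x; the intervals are simply too wide for small x and too narrow for large x, which is exactly the marginal-vs-conditional point above.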

So overall takeaways:

  • If all you need are marginally calibrated prediction intervals, by all means use conformal prediction, it's awesome, it's easy, it works.
  • If you need conditional calibration of some kind, I'd still go for some flavor of conformal prediction, but it will be a little more complicated. ngboost is one quick and easy alternative, but your mileage may vary.
  • If you want conditional density estimation you have many choices: ngboost strikes a nice balance between being flexible/semiparametric and fast/easy-to-use.
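As a sketch of why a conditional density hands you intervals "for free": if a model predicts a per-observation mean and scale for a Normal (as NGBoost does with its default distribution), the PDF and quantile-based intervals both fall out of the same object. The fitted parameters below are made up; only the mechanics are the point.

```python
import numpy as np
from scipy import stats

# Hypothetical per-observation parameters, standing in for the output of a
# fitted conditional-density model (e.g. NGBoost with a Normal distribution).
mu = np.array([1.0, 2.0, 3.0])
sigma = np.array([0.5, 1.0, 2.0])
dist = stats.norm(loc=mu, scale=sigma)

# The density itself: evaluate the conditional PDF at any point...
pdf_at_zero = dist.pdf(0.0)

# ...and 90% prediction intervals come "for free" as quantiles, with widths
# that adapt to each observation's predicted scale.
lo, hi = dist.ppf(0.05), dist.ppf(0.95)
print(np.round(hi - lo, 2))  # -> [1.64 3.29 6.58]
```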

@alejandroschuler Thanks for your fast and detailed reply.

  1. building prediction intervals is not the same thing as doing conditional density estimation, which is what ngboost does. If you have the latter you get the former "for free", but not the other way around.

I fully agree with that statement. Also, since models like NGBoost, XGBoostLSS and LightGBMLSS model all moments of a distribution (mean, variance, ...) as functions of covariates, you gain a more detailed view of what actually drives the variance etc., i.e., a better understanding of the Data Generating Process.

Conformal Prediction does not only output prediction intervals. Conformal Predictive Distributions output the whole CDF for predictions, and the whole CDF is calibrated by default, with mathematical guarantees for any underlying model, any data distribution, and any dataset size.
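In its simplest split form, the idea can be sketched as follows (ignoring the tie-breaking randomization of the full construction; the residuals are simulated): the predictive CDF at a candidate outcome y is roughly the fraction of calibration residuals at or below y minus the point prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sorted calibration residuals y_i - f(x_i) from some fitted point
# predictor (simulated here).
residuals = np.sort(rng.normal(0, 1, size=999))

def predictive_cdf(y_grid, point_pred):
    """Split-conformal predictive CDF at candidate outcome values y_grid:
    the fraction of calibration residuals <= y - f(x), out of n + 1."""
    ranks = np.searchsorted(residuals, y_grid - point_pred, side="right")
    return ranks / (len(residuals) + 1)

grid = np.linspace(-4, 4, 201)
cdf = predictive_cdf(grid, point_pred=0.0)
print(bool(np.all(np.diff(cdf) >= 0)))  # -> True: monotone, as a CDF must be
```

Evaluating this CDF at the observed outcomes yields (approximately) uniform values regardless of the residual distribution, which is the calibration guarantee in its distributional form.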

Sharing some links in case they might be of interest to NGBoost devs.

https://proceedings.mlr.press/v91/vovk18a.html

https://www.youtube.com/watch?v=FUi5jklGvvo&t=3s

https://github.com/henrikbostrom/crepes

Thanks for the resources @valeman!

I'd say that if someone wants predicted CDFs something like mondrian conformal predictive systems is a clear choice for a first pass. That said, I think there are a few conceptual/practical wrinkles that can make a less rigorous approach (like ngboost) more attractive to users:

  1. CDF is not PDF. There are cases where the latter is necessary, for example when you need a conditional outcome density to estimate a quantile treatment effect. There are ways to get from what looks like an ECDF to a continuous PDF (smoothing, basically) but there are some choices to be made and added complexity.
  2. CDF calibration is not conditional. Again, this is a much broader challenge in the conformal world and it's not possible to have one ultimately satisfactory solution. There are approaches, but still no consensus in practice as far as I can tell (and maybe not enough automation/scalability?), and this may be confusing to people. To be clear, semiparametric approaches like ngboost don't solve the problem either: at best I'd bet they give asymptotic conditional calibration if the parametric parts are well-specified (e.g. if all conditionals are normal), which imo is a very silly assumption to make.
  3. In the most vanilla case, predicted conformal CDFs are only location-shifted versions of one another. As with comment 2, I know there's a lot of work addressing this, e.g. normalizing, mondrian bins, etc., but again no consensus in practice. Once there is a clear go-to, scalable approach I think you'll see much more adoption.
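On point 1, one way to make the smoothing choices concrete (one option among several; the Gaussian kernel and scipy's default bandwidth rule are arbitrary picks here, and the residuals are simulated): kernel-smooth the calibration residuals and shift by the point prediction to get a continuous density.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Calibration residuals from some fitted point predictor (simulated here).
residuals = rng.normal(0, 1, size=500)

# Kernel-smooth the residual distribution. The bandwidth rule is one of the
# "choices to be made"; scipy's default (Scott's rule) is used here.
kde = stats.gaussian_kde(residuals)

def conditional_pdf(y, point_pred):
    # Density of y given x, assuming residuals look the same at every x --
    # which runs straight into the conditional-calibration wrinkle of point 2.
    return kde(y - point_pred)

grid = np.linspace(-5, 5, 1001)
density = conditional_pdf(grid, point_pred=0.0)
mass = density.sum() * (grid[1] - grid[0])
print(round(mass, 2))  # integrates to ~1.0
```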

These are mostly practical issues rather than theoretical. I hope to see continuing progress and consolidation along those fronts. I wouldn't be opposed to retiring ngboost entirely once it's not offering any theoretical, computational, or ease-of-use benefit!

Something I forgot to mention: when a PDF is what you need, you often want some guarantee that your estimate converges (pointwise) to the true PDF. It's not immediately clear to me that the marginal coverage guarantee you get from conformal prediction of the CDF gives you pointwise convergence of the PDF, even if it were clear how to get a PDF from an eCDF-like object. However, with a silly parametric model you of course get this convergence, and thus this should be the case for something semiparametric like NGBoost as well (if you believe the assumed shape of the conditional distributions). So that's one case where ngboost at least gives you a guarantee in an unrealistic setting, whereas a conformal approach leaves you hanging.

@alejandroschuler quick question, I see NGBoost paper was published on ArXiv, was it published in some peer reviewed journal as well? I can't seem to find peer reviewed journal publication for NGBoost.

Yeah, it came out in ICML 2020: http://proceedings.mlr.press/v119/duan20a/duan20a.pdf