stanfordmlgroup / ngboost

Natural Gradient Boosting for Probabilistic Prediction

Monotonicity of some parameters in distribution

thomasfrederikhoeck opened this issue · comments

For some datasets (typically when modeling physical properties) one knows a priori that a monotone constraint holds between a feature and the prediction. Enforcing it can help bring down noise and ensure meaningful relative predictions.

In the point-prediction world this can be done with a model like HistGradientBoostingRegressor via its monotonic_cst parameter (see link).
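
For reference, a minimal scikit-learn example of that setting (the toy data and variable names here are just illustrative):

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)  # monotone in feature 0

# monotonic_cst gives a per-feature constraint:
# 1 = non-decreasing, 0 = unconstrained, -1 = non-increasing
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0, -1]).fit(X, y)
```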

When modeling a parameterized distribution (as ngboost does), one would probably want to apply this constraint to only some of the distribution's parameters, i.e. to the loc of a Normal while leaving the scale unconstrained. How would one go about using base learners with different settings for different parameters? Something like the sketch below is what I have in mind.
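
A hypothetical API along these lines (NGBRegressor's Base currently accepts a single learner, so passing a list as below is not supported today; this only illustrates the desired configuration):

```python
from ngboost import NGBRegressor
from ngboost.distns import Normal
from sklearn.ensemble import HistGradientBoostingRegressor

# loc: monotone non-decreasing in feature 0; scale: unconstrained.
loc_learner = HistGradientBoostingRegressor(max_depth=3, monotonic_cst=[1, 0, 0])
scale_learner = HistGradientBoostingRegressor(max_depth=3)

# Hypothetical: one base learner per distribution parameter
# (not supported by ngboost as of this issue).
ngb = NGBRegressor(Dist=Normal, Base=[loc_learner, scale_learner])
```

One subtlety worth flagging: NGBoost fits each boosting stage to gradients and accumulates scaled stage outputs, so a per-stage constraint would constrain the increments, and the sign convention of the update would determine which constraint direction yields a monotone final loc.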

Oh, that's an interesting idea. Right now I don't think it can be done, but it wouldn't be very hard to modify the code to allow it. Tbh something ChatGPT could probably tackle! Feel free to put in a PR.
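
For anyone picking this up: the change would presumably live where ngboost fits one base learner per gradient column. A minimal sketch of a per-parameter version, assuming the internals currently fit a clone of a single Base for each parameter (fit_base_per_param is a hypothetical helper, paraphrased rather than the actual source):

```python
import numpy as np
from sklearn.base import clone

def fit_base_per_param(bases, X, grads, sample_weight=None):
    """Fit one (possibly differently configured) base learner per
    distribution parameter. `bases` is a list of sklearn regressors,
    one per column of `grads` (one column per parameter)."""
    models = [
        clone(base).fit(X, g, sample_weight=sample_weight)
        for base, g in zip(bases, grads.T)
    ]
    fitted = np.array([m.predict(X) for m in models]).T
    return models, fitted
```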

@alejandroschuler just for my understanding, then: the distribution parameters do not need to share a base learner; they just do right now because there was no use case for them to be different?