Calibrating a classifier
lucazav opened this issue
In this comment there was the idea of adding a helper to calibrate a model.
Has this helper been implemented? If not, could you provide an example of using the model_wrapper() to override the predict_proba() function of an existing model?
Hi @lucazav, no, the calibration method has not been implemented. This seems related to issue:
#458
Perhaps you can send a sample notebook to reproduce the issue, and I can take a look.
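For reference, calibration can also be done directly with scikit-learn, without any helper from this library. A minimal sketch using sklearn's CalibratedClassifierCV (the base estimator and dataset here are placeholders for illustration):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

x, y = make_classification(n_samples=1000, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# Fit the base classifier and calibrate its probabilities with Platt
# scaling ('sigmoid'); 'isotonic' is the non-parametric alternative.
base_model = RandomForestClassifier(random_state=0)
calibrated_model = CalibratedClassifierCV(base_model, method='sigmoid', cv=5)
calibrated_model.fit(x_train, y_train)

# The calibrated model follows the scikit-learn specification,
# including predict_proba.
calibrated_probs = calibrated_model.predict_proba(x_test)
```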
" If not could you provide an example of using the model_wrapper() in order to override the predict_proba() function of an existing model?"
It should look very similar to one of these. For example:
class MyModelWrapper(object):
    """A class for wrapping my model in the scikit-learn specification."""

    def __init__(self, model):
        """Initialize MyModelWrapper with the model to wrap."""
        self._model = model

    def predict(self, dataset):
        """Predict the output using the wrapped model.

        :param dataset: The dataset to predict on.
        :type dataset: interpret_community.dataset.dataset_wrapper.DatasetWrapper
        """
        # Add any extra logic here to call the model and convert the
        # returned values to the scikit-learn specification.
        return self._model.my_predict_function(dataset)

    def predict_proba(self, dataset):
        """Predict the output probability using the wrapped model.

        :param dataset: The dataset to run predict_proba on.
        :type dataset: interpret_community.dataset.dataset_wrapper.DatasetWrapper
        """
        # Add any extra logic here to call the model and convert the
        # returned probabilities to the scikit-learn specification.
        return self._model.my_predict_proba_function(dataset)
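Once wrapped, the model can be passed anywhere a scikit-learn style classifier is expected. A hypothetical usage sketch, assuming interpret-community's TabularExplainer (my_custom_model, x_train, and x_test are placeholders):

```python
from interpret_community import TabularExplainer

# Wrap the custom model so it exposes predict and predict_proba
# under the scikit-learn naming convention.
wrapped_model = MyModelWrapper(my_custom_model)

# The wrapper can now be used like any scikit-learn classifier, e.g.
# to compute explanations on the wrapped (calibrated) probabilities.
explainer = TabularExplainer(wrapped_model, x_train)
global_explanation = explainer.explain_global(x_test)
```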
Thank you for the example, @imatiach-msft. I can now use calibrated predictions in the explainability dashboard thanks to the wrapper.