TorchEnsemble-Community / Ensemble-Pytorch

A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.

Home Page: https://ensemble-pytorch.readthedocs.io

Allow Continuation of Training

jtpdowns opened this issue

It appears that the fit method for ensembles is also where the estimators are instantiated. It would be convenient (for example, for fine-tuning pretrained ensembles) if instantiation and training happened in separate steps. Would it be possible to decouple the instantiation and training steps to allow training to be continued? Or is functionality for continuing training already available in some other way?

It seems like this might be straightforward to implement for any class where all estimators are initialized at once (i.e., I believe adversarial, bagging, fusion, gradient boosting, soft gradient boosting, and voting):

    # Instantiate a pool of base estimators, optimizers, and schedulers.
    estimators = []
    for _ in range(self.n_estimators):
        estimators.append(self._make_estimator())
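
One hypothetical way to decouple the two steps is to guard the instantiation so that a repeated fit() call finds the existing pool and continues training it. Below is a minimal self-contained sketch of that pattern; the class, method names, and fit signature are illustrative only, not Ensemble-PyTorch's actual API:

    import torch
    import torch.nn as nn


    class ContinuableEnsemble(nn.Module):
        """Toy voting-style ensemble illustrating the decoupling idea."""

        def __init__(self, make_estimator, n_estimators):
            super().__init__()
            self._make_estimator = make_estimator
            self.n_estimators = n_estimators
            self.estimators_ = nn.ModuleList()  # stays empty until fit()

        def _prepare_estimators(self):
            # Instantiate the pool only on the first fit() call; later
            # calls find a non-empty list and reuse the trained models.
            if len(self.estimators_) == 0:
                for _ in range(self.n_estimators):
                    self.estimators_.append(self._make_estimator())

        def fit(self, train_loader, epochs=1, lr=1e-3):
            self._prepare_estimators()
            criterion = nn.MSELoss()
            for estimator in self.estimators_:
                optimizer = torch.optim.Adam(estimator.parameters(), lr=lr)
                for _ in range(epochs):
                    for data, target in train_loader:
                        optimizer.zero_grad()
                        loss = criterion(estimator(data), target)
                        loss.backward()
                        optimizer.step()

        def forward(self, x):
            # Average the base estimators' outputs.
            return torch.stack([est(x) for est in self.estimators_]).mean(dim=0)

Calling fit() a second time (for example, on new data) would then fine-tune the same estimators rather than re-initializing them.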

For fast geometric and snapshot ensembles, it seems like you could still maintain a list and continue training from its last element (instantiating the first estimator into an otherwise empty list for a new ensemble), as sketched below.
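
A sketch of that list-based continuation logic, assuming the snapshots live in a list like the library's estimators_ attribute (the function name and signature here are hypothetical):

    import copy


    def continue_from_last(estimators_, make_estimator):
        # For snapshot-style ensembles: seed an empty list with one
        # fresh estimator for a new ensemble; otherwise resume from a
        # copy of the most recent snapshot so the stored snapshot
        # itself is left untouched by further training.
        if len(estimators_) == 0:
            estimators_.append(make_estimator())
        return copy.deepcopy(estimators_[-1])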

Hi @jtpdowns, I think you are right. It would be convenient to decouple the training step from the model initialization step. A pull request would be very much appreciated.