johnantonn / cash-for-unsupervised-ad

Systematic Evaluation of CASH Search Strategies for Unsupervised Anomaly Detection

Add BOSH and BOHB search via callbacks

johnantonn opened this issue · comments

Using BOSH and BOHB callbacks in combination with the PyOD algorithms and the PredefinedSplit resampling strategy causes automl.fit() to crash with the following traceback:

    Traceback (most recent call last):
      File "/home/johneegr/mai-thesis/auto-sklearn/autosklearn/evaluation/__init__.py", line 42, in fit_predict_try_except_decorator
        return ta(queue=queue, **kwargs)
      File "/home/johneegr/mai-thesis/auto-sklearn/autosklearn/evaluation/train_evaluator.py", line 1210, in eval_holdout
        evaluator.fit_predict_and_loss(iterative=iterative)
      File "/home/johneegr/mai-thesis/auto-sklearn/autosklearn/evaluation/train_evaluator.py", line 638, in fit_predict_and_loss
        budget_factor = self.model.get_max_iter()
      File "/home/johneegr/mai-thesis/auto-sklearn/autosklearn/pipeline/base.py", line 137, in get_max_iter
        raise NotImplementedError()
    NotImplementedError

    error: NotImplementedError()
    configuration_origin: Default
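The trace suggests the failure comes from auto-sklearn's budget handling: when the intensifier allocates an iterations-based budget (as BOSH and BOHB do), the evaluator asks the pipeline for get_max_iter(), and the base pipeline raises NotImplementedError for models that train in a single shot. A minimal, self-contained sketch of that dispatch (class and function names here are illustrative stand-ins, not auto-sklearn's actual code):

```python
class BasePipeline:
    """Mimics autosklearn.pipeline.base.BasePipeline: models that
    cannot train incrementally do not override get_max_iter()."""
    def get_max_iter(self):
        raise NotImplementedError()


class SingleIterationDetector(BasePipeline):
    """Stand-in for a PyOD-style estimator that fits in one shot."""
    def fit(self, X=None):
        return self


def fit_with_budget(model, budget=None):
    """Simplified evaluator logic: an iterations-type budget requires
    the model to report its maximum iteration count up front."""
    if budget is not None:
        budget_factor = model.get_max_iter()  # raises for one-shot models
        return ("iterative", budget_factor)
    model.fit()
    return ("one-shot", None)


# A plain fit succeeds, but any multi-fidelity budget raises:
print(fit_with_budget(SingleIterationDetector()))  # ('one-shot', None)
try:
    fit_with_budget(SingleIterationDetector(), budget=0.5)
except NotImplementedError:
    print("NotImplementedError, as in the trace above")
```

This mirrors why the crash only appears once a multi-fidelity intensifier is plugged in: the default (full-budget) path never calls get_max_iter().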

After a first, unsuccessful attempt to incorporate BOSH and BOHB into the setting of the thesis, some research indicates that multi-fidelity methods like BOSH and BOHB are actually unlikely to work well in this setting:

The main issue is that BOSH and BOHB require models that can be checkpointed during training, i.e. paused and resumed, which is not the case for our single-iteration PyOD models. The approach could still be viable for iterative models such as ensembles or deep learning models.
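For completeness, here is a hedged sketch of the pause/resume contract that multi-fidelity methods assume (a toy model, not auto-sklearn's actual component interface, though the method names are modeled on its iterative components):

```python
class IterativeModel:
    """Toy model that can be paused after any number of iterations
    and resumed later -- the property BOSH/BOHB rely on."""

    def __init__(self, max_iter=100):
        self.max_iter = max_iter
        self.n_iter_ = 0  # training progress, i.e. the checkpoint state

    def get_max_iter(self):
        return self.max_iter

    def iterative_fit(self, n_iter):
        """Advance training by up to n_iter iterations, then pause."""
        self.n_iter_ = min(self.n_iter_ + n_iter, self.max_iter)
        return self

    def configuration_fully_fitted(self):
        return self.n_iter_ >= self.max_iter


# Successive-halving style: evaluate on a small budget first, and
# resume training only if the configuration survives the rung.
m = IterativeModel(max_iter=100)
m.iterative_fit(25)   # low-fidelity evaluation
m.iterative_fit(75)   # resumed with the remaining budget
print(m.configuration_fully_fitted())  # True
```

A one-shot PyOD detector has no analogue of n_iter_ to checkpoint, so the low-fidelity rungs of BOSH/BOHB have nothing to cut short, which is what makes the approach a poor fit here.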

Closing for now