wandb / wandb

🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.

Home Page: https://wandb.ai

[Q] What happens if no metric and goal are set in the sweep config with a random search strategy?

bessinz opened this issue · comments

Hello,
I am trying to figure out what effect adding (or not adding) a metric to the sweep config has when using random search.
Are there some default parameters that are used in this case?
The sweep config looks like this for now:

import math  # required for the math.log bounds used below

sweep_config = {
    'method': 'random',
    'parameters': {
        'batch': {'values': [4, 8, 16]},
        'warmup_epochs': {'values': [2, 3, 4, 5]},
        'warmup_momentum': {'min': 0.1, 'max': 0.85, 'distribution': 'uniform'},
        'warmup_decay': {'min': 0.00005, 'max': 0.0001, 'distribution': 'uniform'},
        'lrdebut': {'min': math.log(0.0005), 'max': math.log(0.09), 'distribution': 'log_uniform'},
        'lrf': {'min': 0.01, 'max': 0.70, 'distribution': 'uniform'},
        'momentum': {'min': 0.60, 'max': 0.90, 'distribution': 'uniform'},
        'cls': {'min': 0.5, 'max': 4.0, 'distribution': 'uniform'},
        'dfl': {'min': 1.5, 'max': 2.0, 'distribution': 'uniform'},
        'optimizer': {'values': ['SGD', 'AdamW']},
        'box': {'min': math.log(0.02), 'max': math.log(10), 'distribution': 'log_uniform'}
    }
}

Should I rerun my optimisation after adding the metric/goal to the sweep config?

Thanks a lot for the help on this.

Hi @bessinz, thank you for writing in. The metric/goal keys are used in Bayesian search, as detailed here: https://docs.wandb.ai/guides/sweeps/sweep-config-keys#bayesian-search. They serve as an objective function to be optimized. For random search, it isn't necessary to provide a metric. However, if you are interested in a particular metric, you can provide it, as it can later be used to retrieve the best run of a sweep from the API: https://docs.wandb.ai/guides/track/public-api-guide#get-the-best-run-from-a-sweep. Let me know if you have any more questions about it.
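
For reference, this is roughly what the metric block would look like if you later switch to Bayesian search (or simply want the objective recorded in the config). This is only a sketch: the metric name 'mAP50' is a placeholder and must match a key you actually log in your runs:

sweep_config = {
    'method': 'bayes',
    # objective used by Bayesian search; also picked up by sweep.best_run() if no order is passed
    'metric': {'name': 'mAP50', 'goal': 'maximize'},
    'parameters': {
        'batch': {'values': [4, 8, 16]},
    },
}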

Hi @thanos-wandb, thank you for your explanation. So if I don't use it in my sweep config, can I still identify the best hyperparameter config according to one of the several output metrics? I am using the YOLOv8 architecture, so I have precision, recall, mAP50 and mAP50-95 metrics that appear in the sweep output when I plot the sweep comparison (with the parallel coordinates panel in the wandb app).

Hi @bessinz, indeed, for random search you won't need to specify it, and you should be able to identify the best run either from the UI (e.g. in the parallel coordinates panel) or from the API as follows:

# assuming `sweep` was fetched from the public API, e.g. sweep = wandb.Api().sweep("<entity>/<project>/<sweep_id>")
best_run = sweep.best_run(order='-summary_metrics.mAP50')
best_run.summary['mAP50']

Please note the -/+ prefix in the order: - sorts the metric in descending order (so the highest value comes first, e.g. when maximizing mAP50), while + sorts it ascending (for metrics you want to minimize). I hope this helps! Let me know if you have any more questions.
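
For completeness, here is a minimal end-to-end sketch of the API route; the entity/project/sweep path and the mAP50 summary key below are placeholders, so substitute the names that actually appear in your runs:

import wandb

api = wandb.Api()
# path format: "<entity>/<project>/<sweep_id>" -- replace with your own sweep
sweep = api.sweep("my-entity/my-project/abc123")

# '-' sorts the summary metric in descending order, so the run with the highest mAP50 comes first
best_run = sweep.best_run(order='-summary_metrics.mAP50')
print(best_run.name, best_run.summary['mAP50'])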