raghakot / keras-resnet

Residual networks implementation using Keras-1.0 functional API

Should we watch val_loss or val_acc in callbacks?

pawelkedra opened this issue · comments

In cifar10.py, two callbacks are used:

```python
ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1), cooldown=0, patience=5, min_lr=0.5e-6)
EarlyStopping(monitor='val_acc', min_delta=0.001, patience=10)
```

The first one reduces the learning rate when validation loss stops decreasing; the second one stops training when validation accuracy stops increasing. Shouldn't both monitor the same quantity? When I use these callbacks with this configuration, training often stops while validation accuracy is still quite low. I observed that the best final results are achieved when both callbacks monitor val_loss; with val_acc the results are slightly worse (but not by much).

What do you think about it?

I think we should keep both monitored metrics consistent. Thanks for pointing this out.

I updated the code.
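For reference, a minimal sketch of a consistent configuration in which both callbacks watch val_loss, as suggested above (this is an illustrative example, not necessarily the exact updated cifar10.py):

```python
# Hypothetical sketch: both callbacks monitor val_loss, so the learning-rate
# schedule and the stopping criterion react to the same signal.
import numpy as np
from keras.callbacks import ReduceLROnPlateau, EarlyStopping

lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
                               cooldown=0, patience=5, min_lr=0.5e-6)
early_stopper = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=10)

# Both are then passed to training via model.fit(..., callbacks=[lr_reducer, early_stopper]).
```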