All values are at epoch 0
ShownX opened this issue
Try relative mode on the left.
@zihaolucky what exactly do you mean by relative mode on the left? I am confused.
@ShownX I haven't added the step param to the scalar logging, so you have to select the relative mode in TensorBoard.
Thank you very much!
@zihaolucky, could you please point me in the right direction on how to add the step param to the scalar logging? Relative mode is very bad as of now; it gives me very odd results in the training graph. I am calling the tensorboard callback on every batch_end.
batch_end_callbacks += [mx.contrib.tensorboard.LogMetricsCallback(training_log)]
TensorBoard logs every batch individually (no stitching between batches), giving as many separate graphs as there are batches in Relative mode.
Hi @arundasan91
The reason it looks ugly is that we log train and valid/test data points on different time scales. You can write another callback function and pass the step explicitly, then use STEP mode.
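A minimal sketch of such a custom callback, assuming the legacy dmlc `tensorboard` SummaryWriter API (`add_scalar(tag, value, global_step)`) used by `mx.contrib.tensorboard`; the stub writer, `batches_per_epoch` parameter, and fake metric below are illustrative, not part of MXNet:

```python
from collections import namedtuple

class _StubWriter:
    """Records add_scalar calls so the sketch runs without TensorBoard installed."""
    def __init__(self):
        self.records = []

    def add_scalar(self, tag, value, global_step=None):
        self.records.append((tag, value, global_step))

# Mirrors the fields of MXNet's batch-end callback argument that we use.
BatchEndParam = namedtuple('BatchEndParam', ['epoch', 'nbatch', 'eval_metric'])

class StepAwareLogMetricsCallback:
    """Batch-end callback that derives a monotonically increasing global
    step (epoch * batches_per_epoch + nbatch) before logging, so
    TensorBoard's STEP mode draws one continuous curve per metric."""
    def __init__(self, writer, batches_per_epoch):
        self.summary_writer = writer
        self.batches_per_epoch = batches_per_epoch

    def __call__(self, param):
        if param.eval_metric is None:
            return
        step = param.epoch * self.batches_per_epoch + param.nbatch
        for name, value in param.eval_metric.get_name_value():
            self.summary_writer.add_scalar(name, value, step)

class _FakeMetric:
    """Mimics mx.metric's get_name_value(): a list of (name, value) pairs."""
    def get_name_value(self):
        return [('accuracy', 0.9)]

writer = _StubWriter()
callback = StepAwareLogMetricsCallback(writer, batches_per_epoch=100)
callback(BatchEndParam(epoch=2, nbatch=5, eval_metric=_FakeMetric()))
# writer.records now holds [('accuracy', 0.9, 205)]
```

With a real writer in place of the stub, pass the callback in `batch_end_callbacks` exactly as with `LogMetricsCallback` above.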
Hi @zihaolucky, I was able to figure it out but forgot to update you. I passed params.epoch to global_steps in tensorboard.py and it worked as intended. Thank you so much for the wonderful project!
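A hedged sketch of the tweak described above: forward the epoch number as the scalar's global step inside the callback's `__call__`. The `add_scalar` signature follows the legacy dmlc tensorboard SummaryWriter; the stub writer and metric class are illustrative so the sketch runs standalone:

```python
from collections import namedtuple

class _StubWriter:
    """Stand-in for the SummaryWriter; records what would be logged."""
    def __init__(self):
        self.logged = []

    def add_scalar(self, name, value, global_step=None):
        self.logged.append((name, value, global_step))

BatchEndParam = namedtuple('BatchEndParam', ['epoch', 'eval_metric'])

class PatchedLogMetricsCallback:
    def __init__(self, writer):
        self.summary_writer = writer

    def __call__(self, param):
        if param.eval_metric is None:
            return
        for name, value in param.eval_metric.get_name_value():
            # Before the change: self.summary_writer.add_scalar(name, value)
            self.summary_writer.add_scalar(name, value, param.epoch)

class _Metric:
    def get_name_value(self):
        return [('cross-entropy', 1.23)]

w = _StubWriter()
PatchedLogMetricsCallback(w)(BatchEndParam(epoch=7, eval_metric=_Metric()))
# w.logged now holds [('cross-entropy', 1.23, 7)]
```

Note that with the epoch as the step, every batch within an epoch shares one x-coordinate, so per-batch points will overlap; a per-batch step as sketched earlier in the thread avoids that.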
Do you have any idea why the batch_end_callback gives discontinuous graphs? Some accuracy values are nan when I download the CSV, but they print out perfectly to the shell while training.