ibab / tensorflow-wavenet

A TensorFlow implementation of DeepMind's WaveNet paper

No checkpoint found, despite running previously

drdeaton opened this issue · comments

commented

Whenever I run train.py, I get the following message:

Trying to restore saved checkpoints from ./logdir/train/2018-07-24T21-45-30 ... No checkpoint found.
files length: 444

It then continues, starting anew from step 0.
How can I get it to continue where it left off?
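One likely explanation (an assumption on my part, not confirmed in this thread): train.py creates a fresh timestamped directory under ./logdir/train on every run, so the restore step looks in the new, empty directory and finds nothing. If the script accepts a flag for pointing at a previous run directory (a `--restore_from`-style option; the exact flag name may differ), you can locate the most recent run programmatically. A minimal sketch:

```python
import os


def latest_run_dir(logdir_root="./logdir/train"):
    """Return the most recent timestamped run directory, or None.

    Run directories are named like 2018-07-24T21-45-30, so a plain
    lexicographic sort also orders them chronologically.
    """
    runs = sorted(
        d for d in os.listdir(logdir_root)
        if os.path.isdir(os.path.join(logdir_root, d))
    )
    return os.path.join(logdir_root, runs[-1]) if runs else None
```

You could then pass the returned path to the training script's restore option, rather than letting it default to the newly created directory.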

Also, it's probably worth mentioning that I get this whenever it saves a checkpoint:

Storing checkpoint to ./logdir/train/2018-07-24T21-45-30 ...WARNING:tensorflow:Issue encountered when serializing trainable_variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'filter_bias' has type str, but expected one of: int, long, bool
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'filter_bias' has type str, but expected one of: int, long, bool
 Done.

I don't think the warning is the cause: opening the checkpoint file in gedit, it looks like a valid config-style text file. Still, it could be to blame.
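For context on why the file looks like a config file: TensorFlow's `checkpoint` state file is a small text proto listing the checkpoint paths, and the restore logic reads `model_checkpoint_path` from it. A quick sketch of pulling that field out, to verify which checkpoint TensorFlow would try to restore (a standalone parser written for illustration, not TensorFlow's own code):

```python
import re


def model_checkpoint_path(checkpoint_file_text):
    """Extract model_checkpoint_path from a TensorFlow `checkpoint`
    state file, which is a text proto along the lines of:

        model_checkpoint_path: "model.ckpt-3500"
        all_model_checkpoint_paths: "model.ckpt-3000"
        all_model_checkpoint_paths: "model.ckpt-3500"
    """
    m = re.search(r'^model_checkpoint_path:\s*"([^"]+)"',
                  checkpoint_file_text, re.M)
    return m.group(1) if m else None
```

If the path this returns doesn't exist relative to the directory train.py is restoring from, that would explain the "No checkpoint found" message.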

commented

I do that and don't have this problem; I have now trained it for 3500 steps.

@Dysproh did you solve the problem?

@Dysproh Has a solution been found for this problem?

I am also running into this problem, and I'm surprised it hasn't been fixed yet.