paarthneekhara / text-to-image

Text to image synthesis using thought vectors

Error: d_bn1/d_bn1_2/moments/Squeeze/ExponentialMovingAverage/ does not exist

314rated opened this issue

On running the generate_images script, I receive the errors below.
Could you please suggest a fix?
Thanks

====================================================
python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8

Traceback (most recent call last):
File "generate_images.py", line 106, in
main()
File "generate_images.py", line 64, in main
_, _, _, _, _ = gan.build_model()
File "repo/model.py", line 40, in build_model
disc_wrong_image, disc_wrong_image_logits = self.discriminator(t_wrong_image, t_real_caption, reuse = True)
File "repo/model.py", line 161, in discriminator
h1 = ops.lrelu( self.d_bn1(ops.conv2d(h0, self.options['df_dim']*2, name = 'd_h1_conv'))) #16
File "repo/Utils/ops.py", line 34, in call
ema_apply_op = self.ema.apply([batch_mean, batch_var])
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 403, in apply
colocate_with_primary=(var.op.type in ["Variable", "VariableV2"]))
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 174, in create_zeros_slot
colocate_with_primary=colocate_with_primary)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 151, in create_slot_with_initializer
dtype)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 67, in _create_slot_var
validate_shape=validate_shape)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1297, in get_variable
constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1093, in get_variable
constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 439, in get_variable
constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 408, in _true_getter
use_resource=use_resource, constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 765, in _get_single_variable
"reuse=tf.AUTO_REUSE in VarScope?" % name)
ValueError: Variable d_bn1/d_bn1_2/moments/Squeeze/ExponentialMovingAverage/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?

I am struggling with the same error. Is there any fix?

Try adding the line below before the ema.apply call in ops.py:
with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):

This resolved the error for me.
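
For context, here is a sketch of where that line goes inside the batch-norm __call__ in the repo's Utils/ops.py. This is a fragment of a class method, not standalone code, and the surrounding lines are paraphrased from the traceback (TF 1.x graph API):

```python
# Fragment of BatchNorm.__call__ in Utils/ops.py (paraphrased).
# Re-entering the current scope with reuse=tf.AUTO_REUSE lets
# ema.apply() create its shadow variables on the first pass and
# reuse them when the discriminator is built again with reuse=True.
batch_mean, batch_var = tf.nn.moments(x, [0, 1, 2], name='moments')
with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
    ema_apply_op = self.ema.apply([batch_mean, batch_var])
```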

I'm using TensorFlow v0.12. This works for me:
add with tf.variable_scope(tf.get_variable_scope(), reuse=False):
before ema.apply

Where should with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE): be added?
Before ema_apply_op = self.ema.apply([batch_mean, batch_var])?
I tried that, but it does not work. @ravindra82

In ops.py I replaced "with tf.variable_scope(self.name, reuse=tf.AUTO_REUSE) as scope:" with "with tf.variable_scope(tf.get_variable_scope(), reuse=False):",
but I get ValueError: Trying to share variable beta, but specified shape (256,) and found shape (512,).
What is wrong?

@gentlebreeze1 Did you find a solution? I have the same problem.

I used with tf.variable_scope(self.name, reuse=tf.AUTO_REUSE) as scope: in ops.py, but an error occurred like this:

WARNING:tensorflow:From /home/hp/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
File "train.py", line 238, in
main()
File "train.py", line 76, in main
input_tensors, variables, loss, outputs, checks = gan.build_model()
File "/home/hp/Gan_project_main/text_image_gan/model.py", line 43, in build_model
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(disc_fake_image_logits, tf.ones_like(disc_fake_image)))
File "/home/hp/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_impl.py", line 157, in sigmoid_cross_entropy_with_logits
labels, logits)
File "/home/hp/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 2315, in _ensure_xent_args
"named arguments (labels=..., logits=..., ...)" % name)
ValueError: Only call sigmoid_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...)
How can I solve this? Please reply as soon as possible.

@remyavijeesh22
Use g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = disc_fake_image_logits, labels = tf.ones_like(disc_fake_image)))
instead of g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(disc_fake_image_logits, tf.ones_like(disc_fake_image)))
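
For anyone unsure what the named-argument form computes: below is a small NumPy sketch (the logit values are illustrative, not from the repo) of the loss TF evaluates when labels is all ones, i.e. the generator loss above:

```python
import numpy as np

# Numerically stable sigmoid cross-entropy, the same formula documented
# for tf.nn.sigmoid_cross_entropy_with_logits:
#   max(x, 0) - x*z + log(1 + exp(-|x|))
def sigmoid_xent(logits, labels):
    x = np.asarray(logits, dtype=float)
    z = np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# The generator loss pushes fake logits toward label 1, which is what
# labels=tf.ones_like(disc_fake_image) expresses in the corrected call.
fake_logits = np.array([2.0, -1.0, 0.5])  # illustrative values
g_loss = sigmoid_xent(fake_logits, np.ones_like(fake_logits)).mean()
```

Passing the tensors positionally, as in the old call, is exactly what the ValueError forbids in newer TF versions.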

I changed that. The error then recurs for d_loss1, so I added logits= and labels= there in the same style as in g_loss, and I made the same change for d_loss2 and d_loss3 as well.

When I run $ python2 generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8, I get another error:

Tensor name "d_bn1_1/moments/Squeeze/ExponentialMovingAverage" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
	 [[node save/RestoreV2 (defined at generate_images.py:66) ]]

Any suggestions?
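
One guess at the cause (my assumption, not something confirmed in this thread): the AUTO_REUSE patch changes the names under which the EMA shadow variables are created, so a checkpoint trained with the unpatched ops.py no longer matches the rebuilt graph. The TF 1.x checkpoint reader can list what the checkpoint actually contains for comparison; this fragment needs TF 1.x and the checkpoint file on disk, so it will not run standalone:

```python
import tensorflow as tf

# List every variable stored in the checkpoint so its names can be
# compared against what the rebuilt graph expects (e.g. the missing
# "d_bn1_1/moments/Squeeze/ExponentialMovingAverage").
reader = tf.train.NewCheckpointReader('Data/Models/latest_model_flowers_temp.ckpt')
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)
```

If the names differ only by a scope suffix, retraining with the patched ops.py (so graph and checkpoint agree) is likely simpler than remapping names at restore time.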

I tried all of the above, and then I see the following error.
Could you please suggest a fix?

===================================================================
Traceback (most recent call last):
File "train.py", line 238, in
main()
File "train.py", line 78, in main
d_optim = tf.train.AdamOptimizer(args.learning_rate, beta1 = args.beta1).minimize(loss['d_loss'], var_list=variables['d_vars'])
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\optimizer.py", line 413, in minimize
name=name)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\optimizer.py", line 597, in apply_gradients
self._create_slots(var_list)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\adam.py", line 131, in _create_slots
self._zeros_slot(v, "m", self._name)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\optimizer.py", line 1155, in _zeros_slot
new_slot_variable = slot_creator.create_zeros_slot(var, op_name)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\slot_creator.py", line 190, in create_zeros_slot
colocate_with_primary=colocate_with_primary)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\slot_creator.py", line 164, in create_slot_with_initializer
dtype)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\slot_creator.py", line 74, in _create_slot_var
validate_shape=validate_shape)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1496, in get_variable
aggregation=aggregation)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1239, in get_variable
aggregation=aggregation)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 562, in get_variable
aggregation=aggregation)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 514, in _true_getter
aggregation=aggregation)
File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 882, in _get_single_variable
"reuse=tf.AUTO_REUSE in VarScope?" % name)
ValueError: Variable d_h0_conv/w/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
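
A possible explanation (my guess, not confirmed here): if the reuse=True wrapper ends up enclosing the whole graph, get_variable is then forbidden from creating the optimizer's fresh Adam slot variables such as d_h0_conv/w/Adam. One workaround sketch, assuming TF 1.x and the train.py names from the traceback, is to build the optimizers inside a scope that explicitly allows creation:

```python
# Fragment around the optimizer setup in train.py (paraphrased).
# AUTO_REUSE lets get_variable create the Adam slot variables the
# first time instead of failing under an inherited reuse=True scope.
with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
    d_optim = tf.train.AdamOptimizer(args.learning_rate, beta1=args.beta1) \
        .minimize(loss['d_loss'], var_list=variables['d_vars'])
```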

Did you solve this problem? I met the same error and have no idea how to fix it. Can you give me some suggestions?

Hello, have you solved this problem?

Hello, have you solved this problem?

Hey, were you able to fix this?

Same problem here T_T