davidADSP / GDL_code

The official code repository for examples in the O'Reilly book 'Generative Deep Learning'

03_05_VAE_faces_train: Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.

Paul-Williamson-90 opened this issue · comments

Hello,

I've been following the chapter on VAEs and have copy-pasted the code for the VAE faces training. I've come across a few issues with the code which I have corrected, but unfortunately I've been stumped by this last one.

TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='Placeholder:0', description="created by layer 'tf.cast_39'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as tf.cond, tf.function, gradient tapes, or tf.map_fn. Keras Functional model construction only supports TF API calls that do support dispatching, such as tf.math.add or tf.reshape. Other APIs cannot be called directly on symbolic Kerasinputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer call and calling that layer on this symbolic input/output.

I've narrowed it down to the custom loss function as it will run when replaced with MSE:

```python
def compile(self, learning_rate, r_loss_factor):
    self.learning_rate = learning_rate

    ### COMPILATION
    def vae_r_loss(y_true, y_pred):
        r_loss = K.mean(K.square(y_true - y_pred), axis=[1, 2, 3])
        return r_loss_factor * r_loss

    def vae_kl_loss(y_true, y_pred):
        kl_loss = -0.5 * K.sum(1 + self.log_var - K.square(self.mu) - K.exp(self.log_var), axis=1)
        return kl_loss

    def vae_loss(y_true, y_pred):
        r_loss = vae_r_loss(y_true, y_pred)
        kl_loss = vae_kl_loss(y_true, y_pred)
        return r_loss + kl_loss

    optimizer = tf.keras.optimizers.Adam(lr=learning_rate)
    self.model.compile(optimizer=optimizer, loss=vae_loss, metrics=[vae_r_loss, vae_kl_loss])
```
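For reference, the error message's own suggestion (put the operation in a custom Keras layer's `call`) also avoids this problem in TF 2.x without disabling eager execution. Below is a minimal sketch of that approach, not the book's code: the KL term is registered with `add_loss` inside a sampling layer, so `compile` only needs the reconstruction loss. The `Sampling` layer name, the latent dimension, and the tiny dense encoder/decoder are placeholders for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT_DIM = 2  # assumption: small latent space, just for illustration


class Sampling(layers.Layer):
    """Reparameterisation trick; also registers the KL divergence via add_loss."""

    def call(self, inputs):
        mu, log_var = inputs
        # KL divergence between q(z|x) and a standard normal prior,
        # averaged over the batch and attached as a model loss
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + log_var - tf.square(mu) - tf.exp(log_var), axis=1)
        )
        self.add_loss(kl)
        epsilon = tf.random.normal(shape=tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * epsilon


# Tiny stand-in encoder/decoder to show the wiring (not the book's architecture)
inputs = tf.keras.Input(shape=(8,))
h = layers.Dense(16, activation="relu")(inputs)
mu = layers.Dense(LATENT_DIM)(h)
log_var = layers.Dense(LATENT_DIM)(h)
z = Sampling()([mu, log_var])
outputs = layers.Dense(8)(layers.Dense(16, activation="relu")(z))
vae = Model(inputs, outputs)

# Compile with only the reconstruction term; the KL term is already attached.
vae.compile(optimizer="adam", loss="mse")
```

Because the KL computation happens inside a layer's `call`, Keras can trace it during functional model construction, which is exactly what the error message is complaining it cannot do for a bare `K.sum` on symbolic tensors.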

I've tried replacing the K backend calls with tf.math functions, but that hasn't worked either. I'm not entirely sure what the error message is trying to communicate. I'd really like to get this working, as I'm quite interested in implementing custom loss functions.

Thanks a lot in advance for your support.

Paul

Paul,

If you are using TF 2.0+ you might try running

```python
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
```

before compiling the model. That seems to work for me.
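A minimal sketch of the ordering this implies, assuming TensorFlow 2.x: the call has to happen before any model is built or compiled, since it switches the whole session back to graph mode.

```python
import tensorflow as tf
from tensorflow.python.framework.ops import disable_eager_execution

# Must come first, before building or compiling any Keras model in this process.
disable_eager_execution()

print(tf.executing_eagerly())  # -> False once eager execution is disabled
```

With eager execution off, loss closures that apply `K.sum`/`K.mean` to symbolic tensors (like `vae_kl_loss` above) behave as they did in TF 1.x graph mode.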

J

Hi, Jason!

Thanks for the tip!

It works perfectly now!

Thanks!

Thanks so much Jason, and apologies for only responding now!