davidADSP / GDL_code

The official code repository for examples in the O'Reilly book 'Generative Deep Learning'


ValueError: The model cannot be compiled because it has no loss to optimize. Error in 03_03_vae_digits_train Notebook

CloudaYolla opened this issue · comments

Error in 03_03_vae_digits_train Notebook
(TensorFlow2 branch)

ValueError: The model cannot be compiled because it has no loss to optimize.

When running the cell below:

vae.train(     
    x_train
#     x_train[:1000]
    , batch_size = BATCH_SIZE
    , epochs = EPOCHS
    , run_folder = RUN_FOLDER
    , print_every_n_batches = PRINT_EVERY_N_BATCHES
    , initial_epoch = INITIAL_EPOCH
)

I get the following error:

WARNING:tensorflow:Output output_1 missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to output_1.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-ef78d2d1df6e> in <module>
      6     , run_folder = RUN_FOLDER
      7     , print_every_n_batches = PRINT_EVERY_N_BATCHES
----> 8     , initial_epoch = INITIAL_EPOCH
      9 )

~/SageMaker/generative-deep-l/TF2/GDL_code/models/VAE.py in train(self, x_train, batch_size, epochs, run_folder, print_every_n_batches, initial_epoch, lr_decay)
    224             , epochs = epochs
    225             , initial_epoch = initial_epoch
--> 226             , callbacks = callbacks_list
    227         )
    228 

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    823         max_queue_size=max_queue_size,
    824         workers=workers,
--> 825         use_multiprocessing=use_multiprocessing)
    826 
    827   def evaluate(self,

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    233           max_queue_size=max_queue_size,
    234           workers=workers,
--> 235           use_multiprocessing=use_multiprocessing)
    236 
    237       total_samples = _get_total_number_of_samples(training_data_adapter)

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    591         max_queue_size=max_queue_size,
    592         workers=workers,
--> 593         use_multiprocessing=use_multiprocessing)
    594     val_adapter = None
    595     if validation_data:

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    644     standardize_function = None
    645     x, y, sample_weights = standardize(
--> 646         x, y, sample_weight=sample_weights)
    647   elif adapter_cls is data_adapter.ListsOfScalarsDataAdapter:
    648     standardize_function = standardize

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2376     is_compile_called = False
   2377     if not self._is_compiled and self.optimizer:
-> 2378       self._compile_from_inputs(all_inputs, y_input, x, y)
   2379       is_compile_called = True
   2380 

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in _compile_from_inputs(self, all_inputs, target, orig_inputs, orig_target)
   2634         sample_weight_mode=self.sample_weight_mode,
   2635         run_eagerly=self.run_eagerly,
-> 2636         experimental_run_tf_function=self._experimental_run_tf_function)
   2637 
   2638   # TODO(omalleyt): Consider changing to a more descriptive function name.

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    444 
    445       # Creates the model loss and weighted metrics sub-graphs.
--> 446       self._compile_weights_loss_and_weighted_metrics()
    447 
    448       # Functions for train, test and predict will

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in _compile_weights_loss_and_weighted_metrics(self, sample_weights)
   1608       #                   loss_weight_2 * output_2_loss_fn(...) +
   1609       #                   layer losses.
-> 1610       self.total_loss = self._prepare_total_loss(masks)
   1611 
   1612   def _prepare_skip_target_masks(self):

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in _prepare_total_loss(self, masks)
   1707       if total_loss is None:
   1708         if not self.losses:
-> 1709           raise ValueError('The model cannot be compiled '
   1710                            'because it has no loss to optimize.')
   1711         else:

ValueError: The model cannot be compiled because it has no loss to optimize.
commented

Hello,

I get the same error. Any idea how to fix?

My environment is Windows 10 with Anaconda.

Hi,
I was using the AWS SageMaker built-in TensorFlow kernels and installing the missing pieces via pip install.

Today, I followed the approach recommended by the book, namely:

In my Jupyter notebook terminal, I ran:

  1. conda create -n hba-gan python=3.6 ipykernel
  2. conda activate hba-gan
  3. pip install -r requirements.txt
  4. conda deactivate

Then, from the Jupyter notebook, I selected the 'hba-gan' kernel created above, and everything works fine. So I'm closing this issue.
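For context on why pinning the environment helps: in TF 2.x this ValueError is raised when `compile()` is reached with neither a `loss=` argument nor any losses registered on the model, and the repo's `VAE.py` relies on version-sensitive loss wiring. A minimal sketch of the pattern that keeps `compile()` happy without a `loss=` argument — registering the loss via `add_loss()` inside a layer. This is a hypothetical toy model for illustration, not the book's VAE code:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class ReconLoss(layers.Layer):
    # Registers a reconstruction loss via add_loss(), mimicking how
    # KL/reconstruction terms are attached to a VAE's graph.
    def call(self, inputs):
        x, x_hat = inputs
        self.add_loss(tf.reduce_mean(tf.square(x - x_hat)))
        return x_hat

inp = keras.Input(shape=(4,))
h = layers.Dense(8, activation="relu")(inp)
out = layers.Dense(4)(h)
out = ReconLoss()([inp, out])
model = keras.Model(inp, out)

# No loss= argument: the layer's add_loss() gives the model a loss
# to optimize, so compile() and fit() succeed.
model.compile(optimizer="adam")
x = np.random.rand(32, 4).astype("float32")
hist = model.fit(x, epochs=1, verbose=0)
print("loss" in hist.history)
```

If the model reaches `compile()` with no such registered loss (for example, because a TF version change silently dropped the loss wiring), you get exactly the "no loss to optimize" error above, which is why recreating the pinned environment from requirements.txt fixes it.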

commented

Today, I followed the approach recommended by the book, namely:

Hello, and thank you for the answer. I can try it later. Where in the book did you find this information? I am using the German translation and can't find any hint about it.