GeneralizedMeanPooling2D layer(s) do not allow changing precision policies after model save/load
erikreed opened this issue · comments
Hi -- I ran into an issue training a similarity model with a mixed_float16
policy, saving the model, and then attempting to load it with the default
float32 policy on a CPU-only machine. I'm using the GeneralizedMeanPooling2D
layer, and the 1D variant appears to be similarly affected.
[...]
  File "/workspace/inference.py", line 25, in __init__
    self._model = tf.keras.models.load_model(model_root)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_fileoqt_g29_.py", line 64, in tf__call
    x = ag__.ld(x_offset) + ag__.ld(mins) - 1
TypeError: Exception encountered when calling layer "gem_pool" (type GeneralizedMeanPooling2D).

in user code:

    File "/layers/google.python.pip/pip/lib/python3.10/site-packages/tensorflow_similarity/layers.py", line 250, in call *
        x = x_offset + mins - 1

    TypeError: Input 'y' of 'AddV2' Op has type float16 that does not match type float32 of argument 'x'.

Call arguments received by layer "gem_pool" (type GeneralizedMeanPooling2D):
  inputs=tf.Tensor(shape=(None, 10, 10, 1280), dtype=float16)
The tensorflow_similarity layers make no use of cast/autocast. For consistency with the base Keras layers, would the fix here be to cast the output of __call__
to the input dtype, a la https://github.com/keras-team/keras/blob/c269e3cd8fed713fb54d2971319df0bfe6e1bf10/keras/mixed_precision/policy.py#L172-L182?
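To illustrate the mismatch, here is a minimal numpy sketch (an assumption about the mechanism, not the actual Keras internals): under mixed_float16 the layer receives float16 activations, while a constant created at the default float32 triggers the dtype clash unless it is cast to the compute dtype.

```python
import numpy as np

# Activations arrive as float16 under a mixed_float16 policy.
x = np.ones((2, 2), dtype=np.float16)

# A constant created at the default dtype is float32 -- in TF, AddV2
# requires both operands to have the same dtype, so this pairing fails.
mins = np.float32(0.5)

# The fix pattern: cast the constant to the layer's compute dtype
# before using it in arithmetic.
mins_cast = mins.astype(x.dtype)
y = x + mins_cast - 1

# The result stays in the compute dtype (float16).
print(y.dtype)
```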
Versions:
tensorflow-cpu==2.11.0
tensorflow-similarity==0.16.8
Looks like we need to pass kwargs along to the pooling layers in the init methods. Should be good to go now.
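A hypothetical pure-Python sketch of that fix (class names invented for illustration, not the actual tensorflow_similarity code): forwarding **kwargs from the subclass __init__ to the parent lets dtype/policy arguments reach the base layer, so the precision policy is honored when the model is reloaded.

```python
class BaseLayer:
    """Stand-in for a Keras base layer that accepts a dtype/policy kwarg."""

    def __init__(self, name=None, dtype="float32"):
        self.name = name
        self.dtype = dtype


class GemPool(BaseLayer):
    """Stand-in pooling layer that forwards kwargs to its parent."""

    def __init__(self, p=3.0, **kwargs):
        # Forwarding **kwargs is the fix: without it, a dtype passed at
        # construction (or during deserialization) would be dropped and
        # the layer would silently fall back to float32.
        super().__init__(**kwargs)
        self.p = p


layer = GemPool(p=2.0, dtype="float16")
print(layer.dtype)  # the requested dtype survives construction
```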
Much appreciated -- cheers!