tensorflow / serving

A flexible, high-performance serving system for machine learning models

Home Page: https://www.tensorflow.org/serving

The same SavedModel can be loaded by TF Serving 2.2, but cannot be loaded by TF Serving 2.5.2 when using S3 storage

liaocz opened this issue · comments

commented

System information

  • OS Platform and Distribution: Linux Ubuntu 16.04
  • TensorFlow Serving installed from (source or binary):
  • TensorFlow Serving version: 2.5.2

Describe the problem

  1. We have a model trained with TF 2.5.2 and exported as a SavedModel via Estimator.export_saved_model.
  2. We stored the model in an S3 bucket.
  3. The model loads correctly in TF Serving 2.2, but when we use TF Serving 2.5.2 we get the following error:

2022-09-15 08:59:39.485935: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: fashion_model version: 1} failed: Data loss: Can't parse s3://dlonline/ai/online/back/1603266981099/1/saved_model.pb as binary proto

@liaocz,

Could you try exporting your model using the tf.keras.models.save_model API and then loading the model in the latest TF Serving build? Please refer to this tutorial to save and serve a model on TF Serving.
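For reference, a minimal sketch of the suggested export path, assuming a TF 2.x environment with the SavedModel format (TF ≤ 2.15); the tiny Sequential model below is only a placeholder, not the reporter's Estimator model:

```python
import os
import tempfile

import tensorflow as tf

# Placeholder model standing in for the real one.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# TF Serving expects <base_path>/<version>/saved_model.pb, so export
# into a numbered version subdirectory.
export_dir = os.path.join(tempfile.mkdtemp(), "fashion_model", "1")
tf.keras.models.save_model(model, export_dir)

print(os.path.exists(os.path.join(export_dir, "saved_model.pb")))
```

The resulting directory (saved_model.pb plus a variables/ subfolder) can then be pointed at with TF Serving's --model_base_path, or uploaded to the S3 bucket.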

Hope this helps. Thank you!

commented

@singhniraj08 Our model is trained with tf.estimator, not Keras; is it OK to export it using tf.keras.models.save_model? Also, the model can be loaded correctly by TF Serving 2.5.2 when it is stored on a local path rather than on S3.

@liaocz,

Looks like this issue is similar to #1963.

As a workaround, as suggested here, please try installing and importing tensorflow-io. Kindly let us know if this resolves your issue. Thank you!

Have you resolved this error? I've encountered the same problem.

@codernew007,

Could you please share the model code and the complete error stack trace so we can better understand the issue? Thank you!

My model loads correctly when it is configured with a local path in Docker using the image tfs-2.6.0-rc2. But when I configure the model path via S3, e.g. base_path: "s3://models/multiModel/model1/", I get the following error message:

E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: hp80 version: 1} failed: Data loss: Can't parse s3://models/multiModel/model1/1/saved_model.pb as binary proto
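The base_path quoted above implies a model config file along these lines (a sketch using the names from the comment; the model name and bucket layout are the commenter's, everything else is the standard TF Serving model_config_list text-proto format passed via --model_config_file):

```
model_config_list {
  config {
    name: "model1"
    base_path: "s3://models/multiModel/model1"
    model_platform: "tensorflow"
  }
}
```

With this layout, TF Serving looks for numbered version subdirectories such as s3://models/multiModel/model1/1/saved_model.pb, which matches the path in the error message.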

@codernew007,

This is a known issue: the cloud filesystem implementations have been moved out of core TensorFlow into tensorflow-io.
The suggested workaround is to pip install tensorflow-io and import it. This has the side effect of loading the plugin for the S3 filesystem, so an implementation will be registered.
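A sketch of that workaround on the Python side (assumes tensorflow-io has been installed with pip install tensorflow-io; whether the TF Serving binary itself can pick up the plugin depends on how the server image was built):

```python
# Side-effect import: loading tensorflow_io registers the s3:// filesystem
# plugin with TensorFlow, so s3:// paths become resolvable.
try:
    import tensorflow_io  # noqa: F401
    status = "tensorflow-io loaded; s3:// paths should now resolve"
except ImportError:
    status = "tensorflow-io is not installed; run `pip install tensorflow-io`"

print(status)

# Hypothetical usage once the plugin is registered (bucket path for
# illustration only):
# import tensorflow as tf
# model = tf.saved_model.load("s3://models/multiModel/model1/1")
```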

Please try the suggested workaround and let us know if it helps. Thank you!

Closing this due to inactivity. Please take a look at the answers provided above; feel free to reopen and post your comments if you still have queries on this. Thank you!