Ragged Tensors In TF Serving: 'Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph'
Michael-Blackwell opened this issue
Bug Report
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 21H2 19044.2130
- TensorFlow Serving installed from (source or binary): Docker
- TensorFlow Serving version: 2.10.1
Models developed in TF 2.10.0 with Python 3.9
Describe the problem
I have a model composed of custom Keras layers that use ragged tensors. The model runs as expected locally and deploys successfully to the TF Serving Docker container, but when a predict request is made to the container, the response is 'error': 'Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph'.
After googling the error, I found some similar posts in which unnamed sparse tensors were the culprit (see links below). This inspired me to try converting the ragged tensors to dense tensors immediately after creating them, which fixed the problem. Since ragged tensors are still created and used inside the layer, the error seems to be triggered by the layer's output rather than the operations within it.
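A minimal sketch of that workaround (the layer and input shapes here are illustrative, not the actual model; it assumes the downstream layers can tolerate the padding introduced by the dense conversion):

```python
import tensorflow as tf

class RaggedToDenseLayer(tf.keras.layers.Layer):
    """Illustrative layer: a ragged tensor is still created and used
    internally, but a dense tensor is what call() returns."""
    def call(self, inputs):
        rag = tf.RaggedTensor.from_tensor(inputs)
        # ... ragged operations would go here ...
        # Workaround: convert back to dense before the value leaves the
        # layer, so no ragged tensor appears in the layer's output.
        return rag.to_tensor()

out = RaggedToDenseLayer()(tf.ones((2, 4, 3)))
print(out.shape)  # (2, 4, 3) -- dense, no ragged components
```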
For some additional background: I tested all other components of the model individually and only the layers outputting ragged tensors caused this issue.
I believe this issue is also related, since the custom layer contains a ragged tensor:
A similar issue regarding sparse tensors.
Exact Steps to Reproduce
1.) Create a custom Keras layer that uses a ragged tensor
2.) Create and save a Keras model using the custom layer
3.) Deploy a tensorflow/serving:2.10.1 docker container running the model
4.) Make a prediction request to the container
Source code / logs
Create a model using a custom layer containing ragged tensors
import tensorflow as tf

class CustomRaggedLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, inputs):
        # Returning the ragged tensor from the layer is what triggers
        # the serving error
        rag = tf.RaggedTensor.from_tensor(inputs)
        return rag

    def get_config(self):
        base_config = super().get_config()
        return base_config

inputs = tf.keras.layers.Input(shape=(None, None, 3), name='inputs')
x = CustomRaggedLayer()(inputs)
outputs = tf.keras.layers.Dense(10, name='outputs')(x)
model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
model.save('TEST_MODEL_NAME')
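One deployment detail worth double-checking (independent of this bug): TF Serving scans the model base path for numeric version subdirectories, so the SavedModel should live at TEST_MODEL_NAME/1/ rather than directly under TEST_MODEL_NAME/. A sketch with a trivial dense model (shapes illustrative):

```python
import os
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(4,), name='inputs')
outputs = tf.keras.layers.Dense(10, name='outputs')(inputs)
model = tf.keras.Model(inputs=[inputs], outputs=[outputs])

# TF Serving polls the base path for numeric version directories,
# so export under <model_name>/<version>/.
model.save('TEST_MODEL_NAME/1')
print(os.path.isfile('TEST_MODEL_NAME/1/saved_model.pb'))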
Pull docker container and deploy it
docker pull tensorflow/serving:2.10.1-gpu
docker run -p 8500:8500 -p 8501:8501 \
    -v HOST_PATH_TO_MODELS:/models \
    --env MODEL_NAME=TEST_MODEL_NAME \
    --name tf_serving2101-gpu \
    -d --rm --gpus all \
    tensorflow/serving:2.10.1-gpu
Make Prediction Request
import cv2
import requests
import numpy as np
import json

image_path = 'PATH_TO_TEST_IMAGE'
image = cv2.imread(image_path)

url = 'http://localhost:8501/v1/models/TEST_MODEL_NAME'
results = requests.post(url=f'{url}:predict',
                        data=json.dumps({"inputs": [image.tolist()]}),
                        headers={"content-type": "application/json"})
results.json()
Output:
{'error': 'Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph'}
There is no official support for TensorFlow Serving on Windows at the moment. Building from source appears to be the only option, but that is also not officially supported on Windows. If you do want to build from source, please follow the official documentation here; you can also refer to the ongoing discussion on the TensorFlow Forum here.
If you need any clarification or assistance, please let us know and we'll help you further.
Thank you!
Closing this issue due to lack of recent activity for a couple of weeks. Please feel free to reopen the issue or post a comment if you need any further assistance or updates. Thank you!
We are experiencing the same issue when using ragged tensors. It is most likely unrelated to the Windows build, since the same issue is present with TF Serving on Linux.
However, ragged tensors are not fully supported, as described in tensorflow/tensorflow#56819.
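One way to see what actually got exported is to inspect the SavedModel's serving signature: for a plain dense model each output shows up as a single named TensorSpec, whereas a composite (ragged/sparse) output does not. A self-contained sketch using a dense model as the baseline for comparison (the model and path names are illustrative):

```python
import tensorflow as tf

# Build and export a minimal dense model, then reload it and inspect
# the signature that TF Serving would use.
inputs = tf.keras.layers.Input(shape=(4,), name='inputs')
outputs = tf.keras.layers.Dense(10, name='outputs')(inputs)
model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
model.save('sig_check_model')

loaded = tf.saved_model.load('sig_check_model')
sig = loaded.signatures['serving_default']
# For this dense model: a dict with one named TensorSpec. Comparing this
# against a model whose output involves ragged tensors can reveal unnamed
# or flattened component tensors behind the ':0' error.
print(sig.structured_outputs)
```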