serizba / cppflow

Run TensorFlow models in C++ without installation and without Bazel

Home Page: https://serizba.github.io/cppflow/

libc++abi: terminating with uncaught exception of type std::runtime_error: No operation named "serving_default_input_1" exists

garricklw opened this issue · comments

I'd like to document an issue I ran into and its resolution, in case anyone else runs into it, and to propose an API improvement that, if possible, would make the issue easier to diagnose in the future.

I got the above error while trying to load a Keras model, and the resolution was that I needed to make sure the first layer of my model was named "input_1".
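In case it helps anyone else, the hard-coded default can also be bypassed by passing the operation names explicitly when calling the model. A minimal sketch, assuming a SavedModel directory and a 1x5 float input; the path, operation names, and shape below are placeholders and need to be checked against your own model:

```cpp
#include <iostream>

#include "cppflow/cppflow.h"

int main() {
    // Placeholder path to a SavedModel directory.
    cppflow::model model("path/to/saved_model");

    // Dummy input: a 1x5 tensor of ones; adjust the shape to your model.
    auto input = cppflow::fill({1, 5}, 1.0f);

    // Call the model with explicit operation names instead of relying on the
    // default "serving_default_input_1". Replace both names with the ones
    // reported for your model.
    auto output = model({{"serving_default_input_1:0", input}},
                        {"StatefulPartitionedCall:0"});

    std::cout << output[0] << std::endl;
    return 0;
}
```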

Keras has a singleton-esque pattern for naming its layers: each time a layer of a given type is initialized in the same runtime, it increments the counter embedded in the layer name. So the first time you create an Input layer it's "input_1", the second time it's "input_2", and so on.

This makes it really easy to build a model that throws the above error without knowing exactly why it's happening. If it's possible, why not inspect the layers of the Keras model in C++ and use the first input layer as the input by default, rather than looking for a hard-coded default name?

Hi @garricklw

I don't fully understand what you are proposing. Do you mean matching the strings returned by model::get_operations() against the string "input"?
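To illustrate, something roughly like the following, where the first operation whose name contains "input" is taken as the default input. This is only a sketch: the path is a placeholder and the substring heuristic is made up for the example.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

#include "cppflow/cppflow.h"

int main() {
    // Placeholder path to a SavedModel directory.
    cppflow::model model("path/to/saved_model");

    // All operation names in the loaded graph.
    std::vector<std::string> ops = model.get_operations();

    // Naive heuristic: pick the first operation whose name contains "input".
    auto it = std::find_if(ops.begin(), ops.end(), [](const std::string& name) {
        return name.find("input") != std::string::npos;
    });

    if (it != ops.end()) {
        std::cout << "Guessed input operation: " << *it << std::endl;
    } else {
        std::cout << "No operation containing \"input\" found" << std::endl;
    }
    return 0;
}
```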

I'll admit I know almost nothing about which fields you can read from the model in C++ or how you're currently reading it. But the way load_model works in Python Keras, if there is only one input layer in the whole model, it will automatically use that layer as the input, regardless of how it's named. And at the very least, the loaded Model in Python provides the list of layers from which a reasonable default input layer could be picked, though that's all in Python, of course.

I'm not sure what you mean by matching the string "input"; do you mean looking through the layer names for the literal string "input"? I guess what I'm saying is I would expect the Python type of the layer (keras.layers.Input, keras.layers.Dense, etc.) to be readable somewhere in the saved model, since Python is able to read it, though I can't confirm that's possible in C++.

Hi @garricklw

I am sorry, but I don't know yet how to inspect the definition of the model to retrieve the default inputs and outputs. I guess it would require somehow parsing the MetaGraphDef obtained when calling TF_LoadSessionFromSavedModel. I don't know if that's the best way to obtain it, and I don't know if it could be obtained without using the protobuf library.
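For what it's worth, the serialized MetaGraphDef is already handed back as a TF_Buffer when the session is loaded; the open question is decoding it without the protobuf dependency. A rough sketch of how the buffer could be captured with the TensorFlow C API (error handling and session cleanup trimmed; the path and tag are placeholders):

```cpp
#include <cstdio>

#include <tensorflow/c/c_api.h>

int main() {
    TF_Status* status = TF_NewStatus();
    TF_Graph* graph = TF_NewGraph();
    TF_SessionOptions* opts = TF_NewSessionOptions();
    TF_Buffer* meta_graph_def = TF_NewBuffer();
    const char* tags[] = {"serve"};

    TF_Session* session = TF_LoadSessionFromSavedModel(
        opts, nullptr, "path/to/saved_model", tags, 1,
        graph, meta_graph_def, status);

    if (TF_GetCode(status) == TF_OK && session != nullptr) {
        // meta_graph_def->data now holds the serialized MetaGraphDef proto,
        // whose SignatureDefs contain the default input/output tensor names.
        // Decoding it would still require the protobuf-generated classes
        // (e.g. tensorflow::MetaGraphDef::ParseFromArray).
        std::printf("MetaGraphDef size: %zu bytes\n", meta_graph_def->length);
    }

    TF_DeleteBuffer(meta_graph_def);
    TF_DeleteSessionOptions(opts);
    TF_DeleteGraph(graph);
    TF_DeleteStatus(status);
    return 0;
}
```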

#203 is a small step towards this

Yeah, that's reasonable; my intuition for how easy it would be was just based on Python, so thanks for checking. #203 seems like a good step towards bridging the gap between the Python code that builds the model and the C++ code calling it. The error message is only confusing because "serving_default_input_1" isn't explicitly defined anywhere in the model-building code, but I guess it's not that hard to figure out that it's derived from the name of the input layer.

Perhaps I could change the error generated when the operation is not found and suggest using saved_model_cli to obtain the correct name. That might have helped you.
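For reference, the check with saved_model_cli would look something like this (the path is a placeholder); it prints the exact tensor names the serving signature expects:

```
saved_model_cli show --dir path/to/saved_model --tag_set serve --signature_def serving_default
```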