Different results between Python and C++
hrshovonsmx opened this issue · comments
Hello,
First of all, thanks a lot for this simple-to-use library; converting Python code into C++ has been a breeze so far. But there are some issues that I would like to discuss.
Unfortunately I can't share the model, as it is company-proprietary. But I am posting the code in case the mistake lies there.
TensorFlow Python version: 2.13
TensorFlow C API version: 2.13
cppflow version: latest
Model type: image segmentation (UNet)
Model conversion code (Python). This code was written to convert multiple models:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import tensorflow as tf
from efficientnet.tfkeras import EfficientNetB7
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model, model_from_json
from pathlib import Path
from glob import glob
import numpy as np
import json
import skimage.io as skio
tf.keras.backend.clear_session()
import tensorflow.keras.backend as K

model_paths = [SOME_MODEL_PATH]
for model_path in model_paths:
    print(model_path)
    model = tf.keras.models.load_model(model_path, compile=False)

    @tf.function
    def serve(*args, **kwargs):
        outputs = model(*args, **kwargs)
        # Apply postprocessing steps, or add additional outputs.
        ...
        return outputs

    # arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
    # example, is an empty dict since functional models do not use keyword
    # arguments.
    arg_specs, kwarg_specs = model.save_spec()

    savepath = f"op_ocr/{Path(model_path).stem}"
    model.save(savepath, signatures={
        'serving_default': serve.get_concrete_function(*arg_specs,
                                                       **kwarg_specs)
    })
    # model.save(savepath)
```
Inference code (C++):
The input is a vector of CV_32FC Mats. In my case there are two input types: 3-channel RGB (8-bit), or 3-channel RGB plus a 1-channel NIR band (all channels 16-bit). The division factor is 255.f for 8-bit input and 65535.f for 16-bit input. TF_CONV_DTYPE_RGB is TF_UINT8 and TF_CONV_DTYPE_NIR is TF_UINT16.
In both cases, some segmentation results are slightly different from Python. The converted model was also tested in Python, where its results match the Keras h5 model.
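One place small Python/C++ differences often creep in is the preprocessing rather than the model itself: the C++ code casts to float32 and then divides by a float32 constant, while a typical NumPy pipeline divides by a Python float in float64 and only later gets cast down to float32 by TensorFlow. A minimal sketch of both recipes, to check against the actual Python preprocessing (function names here are hypothetical):

```python
import numpy as np

def preprocess_like_cpp(img, division_factor):
    # Mirrors cppflow::cast(img_tensor, dtype, TF_FLOAT) followed by
    # img_tensor / division_factor: everything happens in float32.
    return img.astype(np.float32) / np.float32(division_factor)

def preprocess_like_numpy(img, division_factor):
    # img / 255.0 promotes to float64; the cast to float32 happens afterwards.
    return (img / float(division_factor)).astype(np.float32)

rgb = np.array([0, 128, 255], dtype=np.uint8)
print(preprocess_like_cpp(rgb, 255.0))
print(preprocess_like_numpy(rgb, 255.0))
```

The two recipes usually agree bit-for-bit, but any divergence here (or a different resize/interpolation step before normalization) can flip argmax on borderline pixels, which shows up as slightly different segmentation masks.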
```cpp
for (int i = 0; i < input.size(); i++)
{
    cppflow::tensor img_tensor;
    if (dtype == TF_CONV_DTYPE_RGB)
    {
        std::vector<uint8_t> img_data;
        img_data.assign(input[i].data, input[i].data + input[i].total() * num_channels);
        img_tensor = cppflow::tensor(img_data, {input_dim, input_dim, num_channels});
    }
    else if (dtype == TF_CONV_DTYPE_NIR)
    {
        Mat imgData = input[i].clone();
        std::vector<uint16_t> img_data = imgData.reshape(1, 1);
        // img_data.assign((uint16_t *)imgData.data, (uint16_t *)imgData.data + imgData.total() * num_channels);
        img_tensor = cppflow::tensor(img_data, {input_dim, input_dim, num_channels});
    }
    img_tensor = cppflow::cast(img_tensor, dtype, TF_FLOAT);
    img_tensor = img_tensor / division_factor;
    img_tensor = cppflow::expand_dims(img_tensor, 0);
    auto inf_out = (*modelpts)({{inputsigname + ":0", img_tensor}}, {"StatefulPartitionedCall:0"})[0];
    // auto final_out = cppflow::arg_max(inf_out, 3);
    // auto final_8bit = cppflow::cast(final_out, TF_INT64, TF_UINT8);
    std::vector<float> output_vector = inf_out.get_data<float>();
    Mat op = Mat(input_dim, input_dim, CV_32FC(num_classes));
    memcpy(op.data, output_vector.data(), output_vector.size() * sizeof(float));
    output.push_back(op);
}
```
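A generic way to narrow down where the mismatch comes from is to dump the raw float outputs from both sides and compare them numerically: differences around 1e-6 are ordinary float accumulation noise, while larger, spatially clustered differences point at a preprocessing or layout mismatch. A hedged sketch (the helper and file handling are mine, not part of cppflow):

```python
import numpy as np

def compare_outputs(py_out, cpp_out):
    """Max and mean absolute difference between two float32 output buffers."""
    a = np.asarray(py_out, dtype=np.float32).ravel()
    b = np.asarray(cpp_out, dtype=np.float32).ravel()
    assert a.shape == b.shape, "output sizes differ"
    diff = np.abs(a - b)
    return float(diff.max()), float(diff.mean())

# Toy data; in practice save the Python output with np.save and load the
# raw buffer written from C++ with np.fromfile(path, dtype=np.float32).
mx, avg = compare_outputs([0.12, 0.88, 0.50], [0.12, 0.88, 0.51])
print(mx, avg)
```

One caveat when dumping from C++: the memcpy path assumes both buffers are continuous interleaved HWC. A freshly constructed Mat is continuous, but the uint8 branch copies straight from input[i].data, which is only valid if that Mat is not a ROI/view (the NIR branch sidesteps this by cloning first).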