mattpoggi / pydnet

Repository for pydnet, IROS 2018

Slice output to 1 dimension

l0stpenguin opened this issue · comments

I am trying to use the pretrained model on mobile devices by exporting it to a .pb file. I know there is already a repository for that, but I plan to retrain the model on another dataset in the future. Unfortunately, the prediction output has shape (1, 256, 512, 2), which is difficult to convert to an image: the mobile framework I am working with expects 1 channel for grayscale or 3 for RGB. I was advised to slice the output directly before freezing the graph, so I would like to replicate this slicing step inside the model before re-freezing the graph.

disp = sess.run(model.results[2], feed_dict={placeholders['im0']: img})
result = disp[0,:,:,0]

I am using the following script to freeze the graph:
https://gist.github.com/morgangiraud/249505f540a5e53a48b0c1a869d370bf#file-medium-tffreeze-1-py
My question is: how do I add slicing to the output node before exporting the new graph to a .pb file?
I am not very experienced with TensorFlow, since it's not my main field.
So I tried something like this, but it does not work:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    sess.run(tf.global_variables_initializer())
    # input_checkpoint and clear_devices are set as in the freeze script linked above
    saver = tf.train.import_meta_graph('model.meta', clear_devices=clear_devices)
    saver.restore(sess, input_checkpoint)
    # try to slice the tensor
    outputNode = tf.get_default_graph().get_tensor_by_name("model/L0/ResizeBilinear:0")
    outputNode = outputNode[:, :, :, 0]
    output_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        tf.get_default_graph().as_graph_def(),
        ['model/L0/ResizeBilinear']
    )
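In case it helps to clarify what I am after, here is a rough sketch of what I imagine the solution might look like. I am not sure it is correct; the node name output_disparity, the checkpoint path, and the output file name are just placeholders I made up. The idea would be to wrap the sliced tensor in a tf.identity op so it gets a stable name that can be passed as the output node when freezing:

import tensorflow as tf

input_checkpoint = "path/to/checkpoint"  # placeholder, same checkpoint used above

with tf.Session(graph=tf.Graph()) as sess:
    saver = tf.train.import_meta_graph('model.meta', clear_devices=True)
    saver.restore(sess, input_checkpoint)
    # take channel 0 of the prediction and give the sliced tensor an explicit name
    # (using [:, :, :, 0:1] instead would keep a 4D tensor with a single channel,
    # if that is what the mobile framework expects)
    disp = tf.get_default_graph().get_tensor_by_name("model/L0/ResizeBilinear:0")
    sliced = tf.identity(disp[:, :, :, 0], name="output_disparity")
    # freeze against the new node name instead of the original one
    output_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        tf.get_default_graph().as_graph_def(),
        ["output_disparity"]
    )
    with tf.gfile.GFile("pydnet_frozen.pb", "wb") as f:
        f.write(output_graph_def.SerializeToString())

Would something along these lines work, or is there a better way to expose the sliced tensor as the output node?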