affinelayer / pix2pix-tensorflow

Tensorflow port of Image-to-Image Translation with Conditional Adversarial Nets https://phillipi.github.io/pix2pix/


Using trained/exported model

franciscorba opened this issue · comments

I have trained and exported some models that I intend to use with bash and/or Python (no web use). I can't figure out how to use my trained/exported models.

I have read some old issues where a process-local.py script was used. That file doesn't exist anymore, as stated in Issue #112. I downloaded the old version of the file proposed there but got an error when I tried to use it. My theory is that the model is no longer exported in the same way/format as when the process-local.py file existed.

In Issue #103, @nidetaoge posted a Python script to apply the model. The script is incomplete and unclear, and @nidetaoge didn't answer the questions asked.

The other option I found in old issues was using test mode with a blank target image, which doesn't seem very clean.
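
For completeness, that workaround looks roughly like this: paste the input next to a blank dummy target so the paired image matches what pix2pix.py's test mode expects. A minimal sketch, assuming 256x256 images and the default AtoB direction (file names are made up):

from PIL import Image

# Build an "input | blank target" pair for pix2pix.py --mode test.
# Assumes 256x256 images and AtoB direction; the blank right half is ignored at test time.
src = Image.open("my_input.png").convert("RGB").resize((256, 256))
pair = Image.new("RGB", (512, 256), "white")
pair.paste(src, (0, 0))  # input goes on the left for AtoB
pair.save("paired.png")  # put images like this in the folder passed to --input_dir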

Thanks in advance to anyone who has ideas that could help.

UPDATE:

Starting from the file proposed in Issue #112, I solved the error and it is working. Here is my updated version:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import numpy as np
import argparse
import json
import base64

parser = argparse.ArgumentParser()
parser.add_argument("--model_dir", required=True, help="directory containing exported model")
parser.add_argument("--input_file", required=True, help="input PNG image file")
parser.add_argument("--output_file", required=True, help="output PNG image file")
a = parser.parse_args()

def main():
    # Read the raw PNG bytes; the exported model expects a base64-encoded image string.
    with open(a.input_file, "rb") as f:
        input_data = f.read()

    input_instance = dict(input=base64.urlsafe_b64encode(input_data).decode("ascii"), key="0")
    input_instance = json.loads(json.dumps(input_instance))

    with tf.Session() as sess:
        # Restore the graph and weights written by pix2pix.py --mode export.
        saver = tf.train.import_meta_graph(a.model_dir + "/export.meta")
        saver.restore(sess, a.model_dir + "/export")
        # The export stores the input/output tensor names in graph collections.
        input_vars = json.loads(tf.get_collection("inputs")[0].decode())
        output_vars = json.loads(tf.get_collection("outputs")[0].decode())
        input = tf.get_default_graph().get_tensor_by_name(input_vars["input"])
        output = tf.get_default_graph().get_tensor_by_name(output_vars["output"])

        # Feed the base64 string (batch of one) and get back a base64-encoded PNG.
        input_value = np.array(input_instance["input"])
        output_value = sess.run(output, feed_dict={input: np.expand_dims(input_value, axis=0)})[0]

    output_instance = dict(output=output_value.decode("ascii"), key="0")

    # Re-add any missing '=' padding before decoding the base64 output back to PNG bytes.
    b64data = output_instance["output"]
    b64data += "=" * (-len(b64data) % 4)
    output_data = base64.urlsafe_b64decode(b64data.encode("ascii"))

    with open(a.output_file, "wb") as f:
        f.write(output_data)

main()

Thanks for this!

Hello,

I'm using your code with the exported model and I'm getting the error below. Does anyone have any idea how I can fix it?

Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 2, computed = 1

W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\framework\op_kernel.cc:993] Invalid argument: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 2, computed = 1
Traceback (most recent call last):
  File "F:\davidson\projetos\inprogress\TCC\project\venv\lib\site-packages\tensorflow\python\client\session.py", line 1022, in _do_call
    return fn(*args)
  File "F:\davidson\projetos\inprogress\TCC\project\venv\lib\site-packages\tensorflow\python\client\session.py", line 1004, in _run_fn
    status, run_metadata)
  File "c:\users\dkun\appdata\local\programs\python\python35\Lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "F:\davidson\projetos\inprogress\TCC\project\venv\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 2, computed = 1
  [[Node: generator/decoder_8/conv2d_transpose/conv2d_transpose = Conv2DBackpropInput[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](generator/decoder_8/conv2d_transpose/stack, generator/decoder_8/conv2d_transpose/kernel/read, generator/decoder_8/Relu)]]
  [[Node: convert_image_1/_187 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_1428_convert_image_1", tensor_type=DT_UINT8, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Just to clarify for those who might be lost: you first need to export your pre-trained model with a command like this

python pix2pix.py --mode export --output_dir single_test --model_dir ~/pix2pix-tensorflow/multi_train

then, after creating a new .py file with the code above pasted in, you run something like this

python singleproduction.py --input_file ~/pix2pix-tensorflow/NAME.png --model_dir ~/pix2pix-tensorflow/single_test --output_file ~/pix2pix-tensorflow/DIRNAME/NAMEOUT.png

At least this worked for me. Thank you very much for the code!
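
And if you need to run it over a whole folder rather than a single image, a small wrapper around the same script also works (rough sketch; singleproduction.py and the directory names are just the examples from above):

import glob
import os
import subprocess

# Rough sketch: call the single-image script once per PNG in a folder.
# "singleproduction.py", "single_test", "inputs" and "outputs" are example names.
os.makedirs("outputs", exist_ok=True)
for path in sorted(glob.glob(os.path.join("inputs", "*.png"))):
    subprocess.check_call([
        "python", "singleproduction.py",
        "--model_dir", "single_test",
        "--input_file", path,
        "--output_file", os.path.join("outputs", os.path.basename(path)),
    ])

Note this reloads the model for every image; moving the loop inside the script so the session is restored only once would be much faster.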

what is /export in saver.restore(sess, a.model_dir + "/export")?
When I export a model, I don't get any export file.

From the tf.train.Saver documentation:

save_path: String. Prefix of filenames created for the checkpoint.

'export' isn't a single file but the prefix of a set of files produced by line 621 of pix2pix.py:

export_saver.save(sess, os.path.join(a.output_dir, "export"), write_meta_graph=False)
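
To illustrate what that prefix means, here is a minimal standalone TF 1.x sketch (dummy variable and a made-up directory name): saving under the prefix 'export' writes several files, and restore() is pointed at the same prefix rather than a single 'export' file.

import os
import tensorflow as tf

# 'export' is a checkpoint prefix, not a file name. Saving under it typically
# produces export.index, export.data-00000-of-00001 (and export.meta unless
# write_meta_graph=False), plus a 'checkpoint' state file in the same directory.
os.makedirs("demo_export_dir", exist_ok=True)
v = tf.Variable(1.0, name="v")  # dummy variable so there is something to save
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, os.path.join("demo_export_dir", "export"))     # write with prefix 'export'
    saver.restore(sess, os.path.join("demo_export_dir", "export"))  # restore via the same prefix
    print(sorted(os.listdir("demo_export_dir")))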

@franciscorba I figured that out, but my issue still remains (more details here). I'm exporting using --mode export; however, it always says this is not a valid checkpoint file.
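
One quick thing to check (just a guess, reusing the single_test directory from the example above) is whether TensorFlow sees a checkpoint under that directory at all:

import tensorflow as tf

# If the export succeeded, this should print something like '.../single_test/export';
# None means no 'checkpoint' state file was found in that directory.
print(tf.train.latest_checkpoint("single_test"))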

#195