Code execution is slow.
hyosong86 opened this issue · comments
Thank you for sharing your great work.
Loading initial model parameters takes approximately 10 minutes.
Inference also takes about 2.5 seconds for a 0004x3.png (680x448) image.
I ran it in the following environment: A6000 GPU, torch 1.13.1, opencv-python 4.5.1.48, onnxruntime 1.10.0, onnxruntime-gpu 1.10.0.
Are there any possible causes?
Hey,
Did you fix this problem? I ran into the same issue.
@Xzy765039540 @hyosong86 I haven't encountered this problem. You can try the script below:
onnx_model.zip
```python
import time

import cv2
import numpy as np
import onnxruntime as ort

onnx_model_path = "./out.onnx"

# Time the session creation (model load).
load_time = time.time()
ort_session = ort.InferenceSession(onnx_model_path, providers=['CPUExecutionProvider'])
print("load onnx time: {}s".format(time.time() - load_time))

onnx_input_name = ort_session.get_inputs()[0].name
onnx_output_name = ort_session.get_outputs()[0].name

# Read the test image and convert HWC uint8 BGR -> NCHW float32 in [0, 1].
img_path = "0004x3.png"  # 680x468 pix
img = cv2.imread(img_path)
img = np.asarray(img, np.float32) / 255.0
img = img.transpose((2, 0, 1))
img = img[np.newaxis, :, :, :]

# Average inference time over 100 runs.
start = time.time()
for i in range(100):
    onnx_result = ort_session.run([onnx_output_name], input_feed={onnx_input_name: img})[0]
print("avg infer time: {}s".format((time.time() - start) / 100))

# NCHW float32 in [0, 1] -> HWC in [0, 255], then save.
onnx_result = onnx_result.squeeze() * 255
onnx_result = onnx_result.transpose((1, 2, 0))
cv2.imwrite("out_images/" + img_path.split('/')[-1], onnx_result)
```
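The pre/post-processing in the script (HWC uint8 BGR to NCHW float32 in [0, 1], and back) can be checked in isolation with NumPy alone; this is a minimal sketch, and the helper names `preprocess`/`postprocess` are ours, not part of the repo:

```python
import numpy as np

def preprocess(img_bgr):
    """HWC uint8 -> NCHW float32 in [0, 1], as the inference script does."""
    x = np.asarray(img_bgr, np.float32) / 255.0
    x = x.transpose((2, 0, 1))        # HWC -> CHW
    return x[np.newaxis, :, :, :]     # add batch dim -> NCHW

def postprocess(out):
    """NCHW float32 in [0, 1] -> HWC float32 in [0, 255]."""
    out = out.squeeze() * 255.0       # drop batch dim, rescale
    return out.transpose((1, 2, 0))   # CHW -> HWC

# Round trip on a dummy 448x680 image.
img = np.random.randint(0, 256, (448, 680, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape)  # (1, 3, 448, 680)
```

On the timing itself: the script above pins the session to `CPUExecutionProvider`, so the A6000 is never used. With onnxruntime-gpu installed, creating the session with `providers=['CUDAExecutionProvider', 'CPUExecutionProvider']` should move inference to the GPU; `ort.get_available_providers()` shows whether the CUDA provider was actually loaded.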