motokimura / PyTorch_Gaussian_YOLOv3

PyTorch implementation of Gaussian YOLOv3 (including training code for COCO dataset)


Inference speed is 10 fps, did I do something wrong? (using Tesla M60, image size 1600x1200)

sisrfeng opened this issue

While the paper reports 42 fps, I think 10 fps is a little too slow.
I did not change the config. I only need to detect vehicles and persons. Is there an easy way to speed this up? (A slight drop in mAP is acceptable.)
Many thanks!

Hi, @sisrfeng.
I guess the image size is too large.
Did you try inference on smaller images?
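Note that the network always runs at the square input size set in the config (imgsize), not at the raw 1600x1200 resolution: the frame is letterboxed into an imgsize x imgsize canvas, so compute cost scales roughly with imgsize squared, and lowering imgsize (e.g. 416 to 320) is the usual easy speed/mAP trade-off. A minimal sketch of that letterbox geometry, assuming the (h, w, nh, nw, dx, dy) tuple layout hinted at by the repo's preprocess() comment (the exact layout in the actual code is an assumption here):

```python
def letterbox_info(h, w, imgsize):
    """Compute resized dims (nh, nw) and padding offsets (dx, dy) for
    letterboxing an h x w frame into an imgsize x imgsize square.
    Tuple layout mirrors the repo's `info = (h, w, nh, nw, dx, dy)`
    comment; this is a sketch, not the repo's exact implementation."""
    scale = imgsize / max(h, w)          # shrink so the long side fits
    nh, nw = int(h * scale), int(w * scale)
    dx = (imgsize - nw) // 2             # horizontal padding offset
    dy = (imgsize - nh) // 2             # vertical padding offset
    return h, w, nh, nw, dx, dy

# A 1600x1200 frame squeezed into a 416x416 network input:
print(letterbox_info(1200, 1600, 416))  # (1200, 1600, 312, 416, 0, 52)
```

Whatever the source resolution, the convolutional workload is fixed by imgsize, which is why shrinking the network input (rather than the input file) is what actually buys fps.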

I saved the resized image by adding cv2.imwrite('myname', img) after img, info_img = preprocess(img, imgsize, jitter=0)  # info = (h, w, nh, nw, dx, dy) in:

img = cv2.imread(image_path)
# Preprocess image
img_raw = img.copy()[:, :, ::-1].transpose((2, 0, 1))
img, info_img = preprocess(img, imgsize, jitter=0)  # info = (h, w, nh, nw, dx, dy)
img = np.transpose(img / 255., (2, 0, 1))
img = torch.from_numpy(img).float().unsqueeze(0)

if gpu >= 0:
    # Send model to GPU
    img = Variable(img.type(torch.cuda.FloatTensor))
else:
    img = Variable(img.type(torch.FloatTensor))
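For reference, the np.transpose(img / 255., (2, 0, 1)) line in the snippet above does two separate things: it rescales pixel values to [0, 1] and converts the image from OpenCV's HWC layout to the CHW layout PyTorch expects. A self-contained sketch with a dummy array standing in for cv2.imread()'s output:

```python
import numpy as np

# Dummy 4x6 BGR frame standing in for cv2.imread()'s output (HWC, uint8).
img = np.arange(4 * 6 * 3, dtype=np.uint8).reshape(4, 6, 3)

# Same normalization + layout change as the quoted snippet:
# scale uint8 pixels to [0, 1], then reorder HWC -> CHW.
chw = np.transpose(img / 255., (2, 0, 1))

print(chw.shape)                # (3, 4, 6)
print(float(chw.max()) <= 1.0)  # True
```

Note that if you cv2.imwrite() the preprocessed img at this point, you are saving the letterboxed float image, so the file shows the padded square the network actually sees, not the original frame.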

Testing on 416x416 images, I get fps = 21.
Has anybody gotten a higher fps?

Let me tell you in advance that I have never verified that my implementation can run at 42 fps.

According to the Gaussian YOLOv3 paper, the experiments were conducted on an NVIDIA GTX 1080 Ti with CUDA 8.0 and cuDNN v7.
I wonder whether your device (M60) is fast enough to compete with a 1080 Ti.