amoudgl / pygoturn

PyTorch implementation of the GOTURN object tracker: Learning to Track at 100 FPS with Deep Regression Networks (ECCV 2016)


evaluate.py produces different results from demo.py?

junliang230 opened this issue · comments

Why does running demo.py and evaluate.py on the same dataset produce different results? Could you explain it? Thanks.

@breezelj Hi, thanks for pointing out the issue. I also noticed a minor difference between the bounding boxes produced by evaluate.py and demo.py.

# OTB Man sequence 0002 result
69.457,46.669,26.011,39.444  # demo.py
69.459,46.294,26.026,39.786  # evaluate.py

I recently dug into this issue and found that the got10k toolkit (used by evaluate.py) reads images with PIL, whereas my demo.py script reads images with OpenCV. There is a slight difference between the pixel values of OpenCV and PIL images, which leads to different results. The following script demonstrates this:

import numpy as np
import cv2
from PIL import Image

path = 'data/OTB/Man/img/0001.jpg'

# OpenCV decodes images in BGR order; convert to RGB to match PIL
cv2_img = cv2.imread(path)
cv2_img = cv2.cvtColor(cv2_img, cv2.COLOR_BGR2RGB)

pil_img = Image.open(path)
pil_img = np.array(pil_img)

# Even after the color conversion, the decoded pixel values differ
print((cv2_img == pil_img).all())  # False
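
For what it's worth, the discrepancy is tiny: the decoded values usually differ by only a level or two in a subset of pixels, which is consistent with the two libraries using different JPEG decoder builds. A quick sketch to quantify it, reusing cv2_img and pil_img from the script above:

# Cast to a signed type before subtracting so the difference doesn't wrap around
diff = np.abs(cv2_img.astype(np.int16) - pil_img.astype(np.int16))
print(diff.max(), diff.mean(), (diff > 0).mean())  # max/mean deviation, fraction of pixels that differ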

demo.py yields exactly the same results as evaluate.py if you replace these lines with:

# PIL returns RGB directly, so no BGR-to-RGB conversion is needed
img_prev = np.array(Image.open(frames[i]))
img_curr = np.array(Image.open(frames[i+1]))
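
Note that this also needs from PIL import Image and import numpy as np at the top of demo.py. If it helps, here is a minimal sketch of a PIL-based frame reader (the helper name read_frame_rgb is mine, not part of the repo), intended to match how the got10k toolkit loads frames, since it also reads with PIL:

from PIL import Image
import numpy as np

def read_frame_rgb(path):
    # Decode a frame to an RGB numpy array via PIL; convert('RGB') also
    # handles grayscale sequences by expanding them to three channels
    return np.array(Image.open(path).convert('RGB'))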

Very nice, thank you very much for the detailed explanation!