NVlabs / RVT

Official Code for RVT-2 and RVT

Home Page: https://robotic-view-transformer-2.github.io/


Questions about inference speed

geyan21 opened this issue · comments

Hi, thanks for your great work! May I ask how you computed FPS for PerAct and RVT? I also measured it myself, but I got a pretty high FPS for both PerAct and RVT, and I wonder if something is wrong with my measurement.

Hi,
Thanks for your interest in our work. The inference speed we measured was of a similar order to that reported by PerAct. There could be many reasons for your observation, including:

-- Difference in hardware.

-- Incorrect speed measurement. A common source of error is not synchronizing the GPU after every iteration, since CUDA kernels launch asynchronously. An example of how to do this correctly is here: https://deci.ai/blog/measure-inference-time-deep-neural-networks/

-- Also, note that we measure batch-1 inference speed to model the case where frames are coming one at a time.
I would be happy to help if you provide more details. Also, in case you figure out the issue, please let me know what it was.
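To illustrate the synchronization point above, here is a minimal sketch of batch-1 FPS measurement in PyTorch. It is not the timing code used by RVT or PerAct; the function name, warmup/iteration counts, and the toy `Linear` model are illustrative assumptions. The key detail is calling `torch.cuda.synchronize()` before reading the clock, so asynchronous CUDA kernel launches do not make the model look faster than it is.

```python
import time
import torch

def measure_batch1_fps(model, sample, n_warmup=10, n_iters=100):
    """Measure batch-1 inference FPS, synchronizing the GPU each iteration.

    Illustrative sketch; not the actual RVT/PerAct benchmarking code.
    """
    model.eval()
    with torch.no_grad():
        # Warmup: lets cuDNN pick kernels and the GPU reach steady state.
        for _ in range(n_warmup):
            model(sample)
        if torch.cuda.is_available():
            torch.cuda.synchronize()  # drain pending warmup kernels
        start = time.perf_counter()
        for _ in range(n_iters):
            model(sample)
            if torch.cuda.is_available():
                # Without this, the loop only times kernel *launches*,
                # not the actual GPU work, inflating the FPS.
                torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Toy stand-in model; replace with the real policy network.
model = torch.nn.Linear(64, 64)
fps = measure_batch1_fps(model, torch.randn(1, 64))
print(f"batch-1 FPS: {fps:.1f}")
```

Note the batch size of 1 in `torch.randn(1, 64)`: as mentioned above, we time single-frame inference to model frames arriving one at a time, so a high FPS from batched inference would not be comparable.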

Best,
Ankit

Closing because of inactivity. Please feel free to reopen if the issue persists.