mit-han-lab / litepose

[CVPR'22] Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation

Home Page: https://hanlab.mit.edu

Jetson Nano inference speed is not the same

Kangjik94 opened this issue

Hello,
I tested your COCO and CrowdPose .pth.tar checkpoint files using litepose/valid.py.

But in my test, inference with the COCO-trained LitePose-Auto-S ran at only 2 FPS.

Is there any way to speed up inference on the Jetson Nano?

Or did I miss something (like converting the torch models to TVM)?

When I tested litepose/nano_demo/start.py with the weight file <lite_pose_nano.tar>, the FPS was almost 7.

If I have to convert the torch model to TVM (or TensorRT), could you give me some advice?

We have released the code for running our model on the Jetson Nano with a pre-built TVM binary in nano_demo. To convert the torch model to a TVM binary, you may want to check the TVM Auto Scheduler Tutorial.
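For reference, this is not an official conversion script, but a minimal sketch of the torch-to-TVM flow from the auto-scheduler tutorial, assuming TVM ≥ 0.8 with `auto_scheduler`, a CUDA target for the Nano's GPU, and a stand-in network in place of the real LitePose model; the input resolution, trial count, and file names are illustrative assumptions:

```python
import torch
import tvm
from tvm import relay, auto_scheduler

# Stand-in network: replace with the actual LitePose model loaded from
# your .pth.tar checkpoint (e.g., via the repo's model builder).
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU()).eval()

# Trace the model; the input resolution here is an assumption.
example_input = torch.randn(1, 3, 256, 256)
scripted = torch.jit.trace(model, example_input)

# Import the traced model into TVM Relay.
shape_list = [("input0", list(example_input.shape))]
mod, params = relay.frontend.from_pytorch(scripted, shape_list)

# Target the Nano's GPU, with an aarch64 host for the CPU-side code.
target = tvm.target.Target("cuda", host="llvm -mtriple=aarch64-linux-gnu")

# Extract tuning tasks and run the auto-scheduler. LocalRunner assumes you
# are tuning directly on the Nano; use auto_scheduler.RPCRunner to tune from
# a host machine instead. 2000 trials is a ballpark, not a tested value.
log_file = "litepose_autoschedule.json"
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tuner.tune(auto_scheduler.TuningOptions(
    num_measure_trials=2000,
    runner=auto_scheduler.LocalRunner(),
    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
))

# Compile with the best schedules found during tuning and export a
# deployable module, similar in spirit to the pre-built binary in nano_demo.
with auto_scheduler.ApplyHistoryBest(log_file):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target=target, params=params)
lib.export_library("lite_pose_nano.so")
```

Note that the auto-scheduler search is what recovers most of the speed gap; an untuned TVM build is usually much slower, and the tuning itself can take hours on a device as small as the Nano.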