mit-han-lab / litepose

[CVPR'22] Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation

Home Page: https://hanlab.mit.edu

Running Video Capture Demo on Windows PC

Greendogo opened this issue

Hey there, could you provide some instructions on setting up and running inference using a webcam on a Windows PC?

I'm getting stuck at the 'tvm' part.

Yes, this would be very helpful for testing the algorithm on various devices running Windows.
It would also help if you could give more information on how to enable/disable GPU device(s).
Thanks

Same here. When running the Jetson demo, it fails with ModuleNotFoundError: No module named 'tvm'

The nano_demo is tested on Jetson Nano with TVM support. If you are using a Jetson Nano, you can follow this guide to install TVM. If you are using other devices, @MemorySlices, could you adapt the TVM demo into a plain PyTorch one for a more general demo?
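In the meantime, for anyone who wants to try the model from a webcam on Windows without TVM, here is a minimal sketch of a plain PyTorch capture loop. The checkpoint path, input size, and normalization below are assumptions rather than the repo's exact demo code, and the device selection line also shows one way to enable/disable the GPU:

```python
import cv2
import torch

# Pick the device explicitly; set device = "cpu" to disable the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical checkpoint path; in practice you would build the LitePose
# network from the repo's config and load the released weights into it.
model = torch.load("litepose.pth", map_location=device)
model.eval().to(device)

cap = cv2.VideoCapture(0)  # default webcam; works the same way on Windows
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # BGR HWC uint8 -> RGB CHW float tensor in [0, 1]
        img = cv2.cvtColor(cv2.resize(frame, (448, 448)), cv2.COLOR_BGR2RGB)
        inp = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float().div(255).to(device)
        heatmaps = model(inp)  # keypoint decoding/grouping is omitted here
        cv2.imshow("LitePose demo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```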

@lmxyy Do you know if the models are CPU friendly? Do we "require" a GPU to run them optimally?
I tried it in my CPU-only environment and it takes ~1.96 sec to process a frame (448x448x3). Am I doing something wrong?

The model should be CPU-friendly, as we also include results on Raspberry Pi, where it only takes ~100 ms. But if you directly run the PyTorch model on the CPU, I think your result is reasonable, as the CPU backend is not well-optimized.
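As a side note, when comparing CPU numbers it helps to measure the bare forward pass and pin the thread count. A minimal timing sketch, assuming model is the LitePose network you already built (the thread count and input size here are assumptions, not recommendations):

```python
import time
import torch

torch.set_num_threads(4)  # tune per machine

x = torch.randn(1, 3, 448, 448)  # same input size as reported above
model.eval()

with torch.no_grad():
    for _ in range(5):  # warm-up so one-time costs don't skew the numbers
        model(x)
    t0 = time.perf_counter()
    for _ in range(20):
        model(x)
    print(f"avg latency: {(time.perf_counter() - t0) / 20 * 1000:.1f} ms")
```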

Thank you, @lmxyy, for the prompt response.
That explains why I am getting such slow speed. I am indeed running the model(s) with PyTorch's CPU backend.
So, is there a way I can run the optimized model(s) in a CPU-only environment, or is that out of scope?

You could try TVM to optimize the CPU backend. But I think this will cost you much more time...
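For reference, the usual flow for compiling a traced PyTorch model with TVM for a CPU target looks roughly like the sketch below. The API names are from recent TVM releases (older versions use graph_runtime instead of graph_executor), and the input name and shape are assumptions:

```python
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Trace the LitePose model with a dummy input of the expected size.
model.eval()
example = torch.randn(1, 3, 448, 448)
scripted = torch.jit.trace(model, example)

# Convert the traced model to Relay IR and compile for a generic CPU.
mod, params = relay.frontend.from_pytorch(scripted, [("input", (1, 3, 448, 448))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module on the CPU.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", example.numpy())
module.run()
out = module.get_output(0).numpy()
```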

Hi @sushil-bharati, would it be possible to share how you got it to run using the PyTorch CPU backend? I tried doing model(img) and got:

conv2d() received an invalid combination of arguments - got (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)

Thank you
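The error above indicates that a numpy array is being passed where conv2d expects a torch Tensor, so the input needs to be converted first. A minimal conversion sketch; the test image path, 448x448 resize, and [0, 1] scaling are assumptions, not necessarily the repo's exact preprocessing:

```python
import cv2
import torch

frame = cv2.imread("test.jpg")  # hypothetical test image; a webcam frame works the same way

# Convert BGR HWC uint8 -> RGB CHW float tensor with a batch dimension.
img = cv2.cvtColor(cv2.resize(frame, (448, 448)), cv2.COLOR_BGR2RGB)
inp = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float().div(255)

with torch.no_grad():
    heatmaps = model(inp)  # model is the LitePose network you already built
```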

Hello, I'd like to ask why I can't find the scheduler imported by the line "from scheduler import warmup designer" in the dist_train file. What's the reason?