31sy / AIParsing

The PyTorch code of AIParsing: Anchor-Free Instance-Level Human Parsing


Install details and inference speed

cjm-sfw opened this issue · comments

Hi, I'm very happy to see such good work on the multi-human parsing task.

However, while trying to reproduce the results, I found the installation instructions incomplete; for example, the installation of Apex is missing.

Could you provide more complete installation guidance?

I am also curious about the inference speed on a 2080 Ti: does the 8.9 FPS on CIHP reported in the paper measure the time from image input to final parsing output?

Thanks.

  1. For the Apex install, I installed it according to the official installation steps:
  • git clone https://github.com/NVIDIA/apex
  • cd apex
  • pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

If the last step fails, it may be replaced with: python setup.py build develop
You can give that a try.
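To confirm whether Apex actually built with its CUDA extensions (rather than falling back to the Python-only install), a quick import check can help. This is a minimal sketch; the `check_apex` helper name is my own, not from the repo:

```python
def check_apex():
    """Report how Apex is installed:
    'cuda_ext'    - fused CUDA kernels were compiled (--cuda_ext worked)
    'python_only' - Apex imports but without compiled extensions
    'missing'     - Apex is not importable at all
    """
    try:
        from apex import amp  # noqa: F401  mixed-precision API
    except ImportError:
        return "missing"
    try:
        import amp_C  # noqa: F401  present only when --cuda_ext compiled
        return "cuda_ext"
    except ImportError:
        return "python_only"

print(check_apex())
```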

  2. About the inference time:
    The time is computed over the model inference step only.
    I have uploaded test code in tools/test_human_parsing.py.
    You can try it.
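Since the reported 8.9 FPS counts only the inference step, a timing loop like the following illustrates the idea. This is a minimal sketch: the `measure_fps` helper and the dummy model are hypothetical, not from the repo, and on a real GPU you would also need to synchronize the device before reading the clock so queued CUDA kernels are included in the measurement.

```python
import time

def measure_fps(infer, inputs, warmup=2):
    """Average frames per second over the inference step only.

    infer  - callable running one forward pass (no I/O, no decoding)
    inputs - sequence of preprocessed inputs
    warmup - untimed iterations to absorb one-time setup costs
    """
    for x in inputs[:warmup]:
        infer(x)  # warm-up: CUDA context init, cudnn autotuning, caches
    start = time.perf_counter()
    for x in inputs:
        infer(x)  # timed region covers the forward pass only
    elapsed = time.perf_counter() - start
    return len(inputs) / elapsed

# Usage with a trivial stand-in for the parsing model:
dummy_model = lambda x: x
fps = measure_fps(dummy_model, list(range(100)))
print(f"{fps:.1f} FPS")
```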