MVIG-SJTU / AlphaPose

Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System

Home Page: http://mvig.org/research/alphapose.html


Is there any plan to implement PoseFlow on pytorch branch?

yusukefs opened this issue · comments

Hello,

I'm very impressed with your project AlphaPose!

I really would like to use AlphaPose to track poses in my project, but it seems PoseFlow is not implemented on the pytorch branch yet.
Are you planning to implement PoseFlow on the pytorch branch?

If not, is it possible to run PoseFlow and get tracked poses with the following steps?

  • Get alpha-pose-results-{...}.json by executing video_demo.py from the pytorch branch
  • Extract images from each frame of the video for deepmatching
  • Execute deepmatching and get the result file ({...}.txt)
  • Edit PoseFlow/tracker.py so that it can read the images extracted from my video and the deepmatching results
  • Execute PoseFlow/tracker.py and get the result
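The per-frame-pair deepmatching step above can be scripted. Below is a minimal sketch that builds one deepmatching command line per consecutive frame pair, using the same flags as tracker.py; the binary path and frame paths are placeholders you would adapt to your setup.

```python
import os

def deepmatching_cmds(frame_paths, out_dir, dm_bin="deepmatching"):
    """Build one deepmatching shell command per consecutive frame pair.

    frame_paths: ordered list of extracted frame image paths
    out_dir:     directory where the pair-matching txt files should go
    dm_bin:      path to the compiled deepmatching binary (placeholder)
    """
    cmds = []
    for img1, img2 in zip(frame_paths, frame_paths[1:]):
        pair_name = os.path.basename(img1) + "_" + os.path.basename(img2) + ".txt"
        out_file = os.path.join(out_dir, pair_name)
        # same flags tracker.py uses: 20 threads, downscale by 2
        cmds.append("%s %s %s -nt 20 -downscale 2 -out %s"
                    % (dm_bin, img1, img2, out_file))
    return cmds
```

Each command can then be run with `os.system` (as tracker.py does) or `subprocess.run`.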

If you want to use PoseFlow in your project, you can just download it from the main branch and copy it into your project. After that, you also have to build deepmatching to run PoseFlow, as follows:

cd PoseFlow/deepmatching
make clean all
make python

I had a few problems with the dependencies, but I solved them. You can check this blog for more information: https://blog.csdn.net/rhythmjnh/article/details/79928134

It is written in Chinese and I do not know Chinese, so I just used Google Translate to read the blog. But the instructions are really easy to follow. Good luck!

@tmanh Thanks!

Thanks!
I successfully built deepmatching.

But I still don't understand how to use tracker.py on my own recorded videos rather than the PoseTrack dataset.
Are there any instructions on how to do that?

@YuliangXiu Can you please have a look?

@yuuuuwwww You can check tracker.py in PoseFlow; the following code is used to track the poses between two frames. You can modify it to make it work with your project. Good luck!

# (excerpt from the per-frame loop in tracker.py)
# regenerate the missing pair-matching txt
if not os.path.exists(cor_file):
    dm = "/home/yuliang/code/PoseTrack-CVPR2017/external/deepmatching/deepmatching"
    img1_path = os.path.join(image_dir, video_name, frame_name)
    img2_path = os.path.join(image_dir, video_name, next_frame_name)

    cmd = "%s %s %s -nt 20 -downscale 2 -out %s" % (dm, img1_path, img2_path, cor_file)
    os.system(cmd)
    # If you want to call the deepmatching function directly from Python
    # instead, you can write it as follows:
    # import deepmatching as dm
    # from PIL import Image
    # img1 = np.array(Image.open(img1_path))
    # img2 = np.array(Image.open(img2_path))
    # matches = dm.deepmatching(img1, img2, '-downscale 2 -v')

all_cors = np.loadtxt(cor_file)

# if there are no people in this frame, copy the info from the former frame
if track[video_name][next_frame_name]['num_boxes'] == 0:
    track[video_name][next_frame_name] = copy.deepcopy(track[video_name][frame_name])
    continue

cur_all_pids, cur_all_pids_fff = stack_all_pids(track[video_name], frame_list[:-1], idx, max_pid_id, link_len)
match_indexes, match_scores = best_matching_hungarian(all_cors, cur_all_pids, cur_all_pids_fff, track[video_name][next_frame_name], weights, weights_fff, num, mag)

for pid1, pid2 in match_indexes:
    if match_scores[pid1][pid2] > match_thres:
        track[video_name][next_frame_name][pid2+1]['new_pid'] = cur_all_pids[pid1]['new_pid']
        max_pid_id = max(max_pid_id, track[video_name][next_frame_name][pid2+1]['new_pid'])
        track[video_name][next_frame_name][pid2+1]['match_score'] = match_scores[pid1][pid2]

# add the untracked new persons
for next_pid in range(1, track[video_name][next_frame_name]['num_boxes'] + 1):
    if 'new_pid' not in track[video_name][next_frame_name][next_pid]:
        max_pid_id += 1
        track[video_name][next_frame_name][next_pid]['new_pid'] = max_pid_id
        track[video_name][next_frame_name][next_pid]['match_score'] = 0
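For intuition, best_matching_hungarian solves an assignment problem: pair each pose from the previous frames with at most one pose in the next frame so that the total similarity is maximized. Here is a tiny stdlib-only sketch that brute-forces the optimal assignment for a square score matrix; the real code uses the Hungarian algorithm, which scales much better.

```python
from itertools import permutations

def best_matching(scores):
    """Find the row-to-column assignment that maximizes total similarity.

    scores[i][j] is the matching score between old pose i and new pose j
    (assumed square here for simplicity). Returns (pairs, total_score).
    Brute force is O(n!) and only meant as a demonstration; tracker.py
    uses the Hungarian algorithm instead.
    """
    n = len(scores)
    best_total, best_pairs = float("-inf"), []
    for cols in permutations(range(n)):
        total = sum(scores[i][j] for i, j in enumerate(cols))
        if total > best_total:
            best_total, best_pairs = total, list(enumerate(cols))
    return best_pairs, best_total
```

In tracker.py, any pair whose score falls below match_thres is discarded after the assignment, and unmatched detections get a fresh person id, as shown in the loop above.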

@tmanh
Thank you very much!
I will try on it.

@yuuuuwwww @tmanh PoseFlow is a purely independent Python module; there is no deep learning code in it, so you can just download it and use it on the pose estimation results generated by AlphaPose or other pose estimators. I have already added a sample.json, which is the standard input json for tracker.py.

As for DeepMatching, here is the DeepMatching I wrote before; I hope it can help you.

The Fast ORB version is done; you can follow the latest README to generate matching files without DeepMatching.
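For intuition on the ORB route: ORB produces binary keypoint descriptors that are compared with Hamming distance, typically via nearest-neighbor matching with a ratio test. The sketch below is a hypothetical, stdlib-only stand-in for what an OpenCV-based matcher (ORB + BFMatcher with NORM_HAMMING) would do; the descriptor values and the `match_descriptors` helper are illustrative, not AlphaPose's actual code.

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors given as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(desc1, desc2, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test.

    desc1, desc2: lists of binary descriptors (ints) from two frames.
    A match (i, j) is kept only if the best distance is clearly smaller
    than the second-best, which filters out ambiguous matches.
    """
    matches = []
    for i, d1 in enumerate(desc1):
        dists = sorted((hamming(d1, d2), j) for j, d2 in enumerate(desc2))
        if len(dists) == 1 or dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The resulting keypoint correspondences play the same role as deepmatching's output: they tell the tracker which image regions moved where between frames.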

@yuuuuwwww hello,
did you manage to use tracker.py on videos other than the PoseTrack dataset?
And how do you use tracker.py on the PoseTrack dataset?
Can you give me some advice? Thank you very much.

@my-hello-world
Hi,
I edited tracker.py for my project here: https://github.com/yuuuuwwww/AlphaPose/blob/master/PoseFlow/pose_tracker.py.
This code is for the videos in my project (not annotated, unlike the PoseTrack dataset).
Its inputs are the video frames and a pose estimation result file (xxx.json).
I hope it can help you!
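As a starting point for such a custom tracker, the pose estimation json usually needs to be grouped by frame first. The sketch below assumes each entry carries an "image_id", "keypoints", and "score" field, which follows AlphaPose's usual output format; adjust the field names if your json differs.

```python
from collections import defaultdict

def group_poses_by_frame(results):
    """Group AlphaPose-style detections by the frame they belong to.

    results: list of dicts like
        {"image_id": "0001.jpg", "keypoints": [...], "score": ...}
    (field names assumed from AlphaPose's standard output json).
    Returns {image_id: [detections in that frame]}.
    """
    frames = defaultdict(list)
    for entry in results:
        frames[entry["image_id"]].append(entry)
    return dict(frames)
```

With the detections grouped per frame, tracking reduces to matching each frame's list against the previous one, as in the tracker.py excerpt earlier in this thread.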

@yuuuuwwww @ashar-ali thanks! I will give it a try.

thanks @yuuuuwwww

@yuuuuwwww @YuliangXiu thanks! I got the .json files.

@yuuuuwwww

Hi, when I try to use the deepmatching module, there is an error. How can I solve the problem below? I have done the following:
make clean all
make python
python
import deepmatching

The problem is:

root@c479899290a0:/home/xxx/AlphaPose-pytorch/PoseFlow/deepmatching# python
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import deepmatching
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xxx/AlphaPose-pytorch/PoseFlow/deepmatching/deepmatching.py", line 32, in <module>
    _deepmatching = swig_import_helper()
  File "/home/xxx/AlphaPose-pytorch/PoseFlow/deepmatching/deepmatching.py", line 28, in swig_import_helper
    _mod = imp.load_module('_deepmatching', fp, pathname, description)
  File "/root/anaconda3/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/root/anaconda3/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: /home/xxx/AlphaPose-pytorch/PoseFlow/deepmatching/_deepmatching.so: undefined symbol: PyCObject_Type

@tmanh Hi, where is the SORT algorithm? Is it in the https://github.com/MVIG-SJTU/AlphaPose branch? I can't find it. I read the README.md file; there are two methods, one is deepmatching and the other is ORB. So, is the SORT method proposed recently? Also, I still don't know why the PyCObject_Type problem happened. How can I solve it? Thank you.

Sorry, it's Fast ORB, not SORT :D (https://github.com/MVIG-SJTU/AlphaPose/blob/master/PoseFlow/tracker.py)

About the deepmatching compiling bug, I think you could try again with this blog: https://blog.csdn.net/rhythmjnh/article/details/79928134

If everything is still the same, check the deepmatching folder; there is something wrong with _deepmatching.so.

@tmanh Thanks. There are two versions of Python on my machine, so the CPYTHONFLAGS in the Makefile did not match the python command. (PyCObject was removed in Python 3, so an extension built against Python 2 headers fails to load in Python 3 with exactly this undefined-symbol error.)

PoseFlow (General Version) has already been released; now you can do pose tracking on any private dataset, and the new version also supports visualization of the pose tracking results. @yuuuuwwww @tmanh @my-hello-world @pingqi

@YuliangXiu I'm also confused by the same issue.
Although I have run the tracking successfully, using only track_general.py and the results of AlphaPose, I still don't understand the usage of deepmatching.
Could you please give some advice and explanation? Thanks

You can refer to the original paper "Pose Flow: Efficient Online Pose Tracking" to find the usage of deepmatching. @Tylerjoe

Okay, thanks!

@YuliangXiu Hi, I have studied the paper "Pose Flow: Efficient Online Pose Tracking". In my opinion, PoseFlow may only use the ORB or deepmatching algorithm; where are the ideas of the Pose Flow Builder (PF-Builder) and Pose Flow NMS (PF-NMS) reflected?
PS: why is the json obtained from the alphapose project the same as the json obtained from the poseflow project?