TrackingLaboratory / tracklab

A modular end-to-end tracking framework for research and development

Home Page: https://trackinglaboratory.github.io/tracklab/


pose_bottomup results not showing in video nor in state file

PrincessBiscuit opened this issue

Thanks a lot for the great codebase!

I have been having trouble obtaining any pose estimation results, like the ones shown in the README (https://github.com/TrackingLaboratory/tracklab/blob/main/docs/assets/gifs/PoseTrack21_008827.gif). There also seems to be no entry related to pose estimation in the saved state file when testing the pipeline on SoccerNet samples. What is the recommended setup to generate pose estimation results?

I have
1) changed the draw_keypoints and draw_skeleton flags in the config file
2) added a pose_bottomup module to the pipeline (logging clearly shows: yolov8->yolov8->...).
3) lowered the min_confidence
However, despite all this, no pose estimation results appear in the video or in the .pklz state file.
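
For concreteness, the three changes above roughly amount to overrides like the sketch below (written with OmegaConf, which Hydra uses under the hood); the flag names come from the steps listed, but the nesting and keys are my own guess, not tracklab's actual config layout.

```python
from omegaconf import OmegaConf

# Rough, hypothetical sketch of the overrides described above.
overrides = OmegaConf.create({
    "visualization": {"draw_keypoints": True, "draw_skeleton": True},  # step 1
    "pipeline": ["bbox_detector", "pose_bottomup", "reid", "track"],   # step 2
    "pose_bottomup": {"min_confidence": 0.1},                          # step 3
})
print(OmegaConf.to_yaml(overrides))
```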

Also, OpenPifPaf hangs on my machine (TitanXP, 64 GB RAM) when performing pose estimation.

Thanks a lot for your help ☺

Hi!

Thanks for the kind words.

Your first step was correct: this enables drawing of the estimated keypoints and skeleton.

We currently support two ways to handle pose estimation: top-down (first find detections, then estimate a pose for each detection) and bottom-up (estimate all poses for the whole image at once). The yolov8 file in the pose_bottomup directory shouldn't be there, I'll delete it ASAP; if you use OpenPifPaf, you don't need a bbox_detector. We don't yet have a nice way of detecting this misconfiguration, and that's probably why it hangs. I'll try to figure out a way to at least emit a warning that this setup isn't supported.

You can also use a pose_topdown module. We currently only ship one, "hrnet_posetrack18", but you could actually use any top-down model supported by MMPose (OpenMMLab).

In summary, either (a minimal config sketch follows the list):

  • remove the bbox_detector and add a pose_bottomup in the "pipeline"
  • or keep the current bbox_detector and add a pose_topdown
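
Here is a minimal sketch of what those two pipelines could look like, written with OmegaConf (which Hydra builds on); the module names follow the discussion above, but the exact config layout is an assumption rather than tracklab's real schema.

```python
from omegaconf import OmegaConf

# Option 1 (bottom-up): no bbox_detector; a single image-level pose estimator
# (e.g. OpenPifPaf) produces all poses for the whole frame at once.
bottomup_cfg = OmegaConf.create({"pipeline": ["pose_bottomup", "reid", "track"]})

# Option 2 (top-down): keep the bbox_detector (e.g. yolov8) and add a pose_topdown
# module (e.g. hrnet_posetrack18, or any other top-down MMPose model).
topdown_cfg = OmegaConf.create({"pipeline": ["bbox_detector", "pose_topdown", "reid", "track"]})

print(OmegaConf.to_yaml(topdown_cfg))
```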

I hope this works for you!
Victor

Thanks Victor for your very quick reply!

Success. Both variants worked (pose_bottomup and pose_topdown). Even pose_bottomup with a bbox_detector worked if the modules are sorted in a specific order. Overall, the pipeline seems super sensitive to module ordering.

For example:

  • Pipeline: YOLOv8 -> PRTReId -> TopDownMMPose -> OCSORT -> ... worked
  • Pipeline: YOLOv8 -> TopDownMMPose -> PRTReId -> OCSORT -> ... crashed.

Same for bottomup:

  • Pipeline: OpenPifPaf -> YOLOv8 -> BPBReId -> OCSORT worked
  • Pipeline: YOLOv8 -> OpenPifPaf -> BPBReId -> OCSORT hung.

Thanks again a lot for your precious help!

I'm not certain what's going wrong with your first example; I think it works for me. I'll try it again on the latest version to make sure: YOLOv8 -> TopDownMMPose at least should not crash.

For the bottom-up case, we currently don't really support having multiple "image-level" detectors (bbox or pose): they will probably generate the same detections, and if we don't merge duplicates there are edge cases that are hard to handle. I'll write some checks so that a meaningful error message is raised when this happens.
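
Purely as an illustration of the kind of check mentioned above (this is not tracklab code), a pipeline validation could look like the following sketch; the role names here are assumptions.

```python
# Hypothetical sanity check: refuse to run a pipeline that contains more than one
# image-level detector (a bbox detector or a bottom-up pose estimator).
IMAGE_LEVEL_ROLES = {"bbox_detector", "pose_bottomup"}  # assumed role names

def check_single_image_level_detector(pipeline: list[str]) -> None:
    image_level = [name for name in pipeline if name in IMAGE_LEVEL_ROLES]
    if len(image_level) > 1:
        raise ValueError(
            f"Pipeline contains several image-level detectors {image_level}; "
            "use either bbox_detector + pose_topdown, or a single pose_bottomup."
        )

# e.g. a pipeline with both yolov8 (bbox_detector) and OpenPifPaf (pose_bottomup)
# would fail fast with a clear message instead of hanging:
# check_single_image_level_detector(["bbox_detector", "pose_bottomup", "reid", "track"])
```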

For me, with YOLOv8 -> TopDownMMPose -> PRTReId, it is the ReID module that crashed...
I can double-check it later if needed :)

If you have it, could you show the error you get when it crashes?

Maybe it is a typo on my side, but here is the error (and when I swap TopDownMMPose and PRTReId, I don't get this problem):

Traceback (most recent call last):
  File "/anaconda3/envs/tracklab/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/anaconda3/envs/tracklab/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "tracklab/tracklab/main.py", line 117, in <module>
    main()
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/main.py", line 94, in decorated_main
    _run_hydra(
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
    _run_app(
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 457, in _run_app
    run_and_report(
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 223, in run_and_report
    raise ex
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
    return func()
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
    lambda: hydra.run(
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "tracklab/tracklab/main.py", line 59, in main
    tracking_engine.track_dataset()
  File "tracklab/tracklab/engine/engine.py", line 117, in track_dataset
    detections, image_pred = self.video_loop(tracker_state, video_metadata, video_idx)
  File "tracklab/tracklab/engine/offline.py", line 30, in video_loop
    for batch in self.dataloaders[model_name]:
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/_utils.py", line 543, in reraise
    raise exception
NotImplementedError: Caught NotImplementedError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "tracklab/tracklab/datastruct/datapipe.py", line 34, in __getitem__
    self.model.preprocess(image=image, detection=detection, metadata=metadata),
  File "/anaconda3/envs/tracklab/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "sn-gamestate/sn_gamestate/reid/prtreid_api.py", line 131, in preprocess
    raise NotImplementedError
NotImplementedError

Hi @PrincessBiscuit, you can remove the entire if not self.cfg.model.bpbreid.learnable_attention_enabled and "keypoints_xyc" in detection: condition inside "sn-gamestate/sn_gamestate/reid/prtreid_api.py", around line 119. I actually just removed this condition in a commit, so you can also simply update sn-gamestate to the latest version. Let me know if it fixes your issue!
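
For anyone hitting the same NotImplementedError, here is a before/after sketch of that change; only the quoted condition comes from this thread, the rest of the method body is a schematic placeholder.

```python
def build_reid_sample(image, detection):
    """Placeholder standing in for the real ReID preprocessing (crop, masks, ...)."""
    ...

# Before (schematic): preprocess only handled detections that already carried keypoints,
# and otherwise raised the NotImplementedError seen at line 131 of the traceback above.
def preprocess_before(self, image, detection, metadata):
    if not self.cfg.model.bpbreid.learnable_attention_enabled and "keypoints_xyc" in detection:
        return build_reid_sample(image, detection)
    raise NotImplementedError

# After (schematic): with the condition removed, preprocessing no longer depends on a pose
# module having run earlier in the pipeline, so PRTReId can precede TopDownMMPose.
def preprocess_after(self, image, detection, metadata):
    return build_reid_sample(image, detection)
```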