TAO-Amodal

Official Repository of Tracking Any Object Amodally.

📙 Project Page: https://tao-amodal.github.io | 📎 Paper (arXiv:2312.12433) | ✏️ Citations


📌 Leave a ⭐ to keep track of our updates.


Table of Contents

  🎒 Get Started
  📚 Prepare Dataset
  🧑‍🎨 Visualization
  🏃 Training and Inference
  📊 Evaluation
  Citations

🎒 Get Started

Clone the repository

git clone https://github.com/WesleyHsieh0806/TAO-Amodal.git 

Setup environment

conda create --name TAO-Amodal python=3.9 -y
conda activate TAO-Amodal
bash environment_setup.sh
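
To confirm the environment is usable, a quick check along these lines can be run. This is a minimal sketch that assumes PyTorch is among the dependencies installed by environment_setup.sh; adjust it if your setup differs.

# Quick sanity check of the environment (assumes PyTorch is among the
# dependencies installed by environment_setup.sh; adjust if yours differ).
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())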

📚 Prepare Dataset

  1. Download our dataset following the instructions here.
  2. The directory should have the following structure:
    TAO-Amodal
     ├── frames
     │    └── train
     │       ├── ArgoVerse
     │       ├── BDD
     │       ├── Charades
     │       ├── HACS
     │       ├── LaSOT
     │       └── YFCC100M
     ├── amodal_annotations
     │    ├── train/validation/test.json
     │    ├── train_lvis_v1.json
     │    └── validation_lvis_v1.json
     ├── example_output
     │    └── prediction.json
     ├── BURST_annotations
     │    ├── train
     │         └── train_visibility.json
     │    ...

Explore more examples from our dataset here.
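
To double-check that everything landed in the expected places, a minimal sketch like the one below walks the layout above and peeks at one annotation file. The root path is an assumption; adjust it to wherever you placed TAO-Amodal.

# Minimal sketch to sanity-check the dataset layout shown above.
# The root path is an assumption; adjust it to your TAO-Amodal location.
import json
from pathlib import Path

root = Path("TAO-Amodal")

for sub in ["frames/train", "amodal_annotations", "example_output", "BURST_annotations/train"]:
    status = "found" if (root / sub).exists() else "MISSING"
    print(f"{sub}: {status}")

# Peek at one annotation file and report the size of each top-level list.
with open(root / "amodal_annotations" / "validation_lvis_v1.json") as f:
    ann = json.load(f)
print({k: len(v) for k, v in ann.items() if isinstance(v, list)})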

🧑‍🎨 Visualization

Visualize our dataset and tracker predictions to get a better understanding of amodal tracking. Instructions can be found here.
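
For a quick standalone look before setting up the full visualization scripts, a rough sketch like the following draws the amodal boxes of one image onto its frame. It assumes the amodal annotation JSON follows a COCO-style layout with "images" and "annotations" lists and that file_name is relative to the frames directory; adjust the key names and paths if they differ.

# Rough standalone sketch: draw the amodal boxes of one image onto its frame.
# Assumes a COCO-style annotation layout ("images", "annotations") and that
# "file_name" is relative to the frames/ directory; adjust paths if needed.
import json
from PIL import Image, ImageDraw

root = "TAO-Amodal"
with open(f"{root}/amodal_annotations/train_lvis_v1.json") as f:
    data = json.load(f)

image_info = data["images"][0]
boxes = [a["bbox"] for a in data["annotations"] if a["image_id"] == image_info["id"]]

img = Image.open(f"{root}/frames/{image_info['file_name']}").convert("RGB")
draw = ImageDraw.Draw(img)
for x, y, w, h in boxes:
    # Amodal boxes may extend beyond the image border; PIL simply clips them.
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
img.save("amodal_example.png")
print(f"Saved amodal_example.png with {len(boxes)} boxes")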


πŸƒ Training and Inference

We provide the training and inference code of the proposed Amodal Expander.

The inference code generates an lvis_instances_results.json file, which can be used to obtain the evaluation results as described in the next section.
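
As a quick check before moving on to evaluation, the generated file can be summarized with a few lines like these. The path is an assumption (adjust it to wherever your inference run writes its output), and the per-entry keys follow the prediction format described in the Evaluation section below.

# Quick summary of the inference output (path is an assumption).
import json

with open("lvis_instances_results.json") as f:
    preds = json.load(f)

videos = {p["video_id"] for p in preds}
tracks = {p["track_id"] for p in preds}
print(f"{len(preds)} predictions over {len(videos)} videos and {len(tracks)} tracks")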

📊 Evaluation

  1. Output tracker predictions as JSON. The predictions should be structured as:
[{
    "image_id" : int,
    "category_id" : int,
    "bbox" : [x,y,width,height],
    "score" : float,
    "track_id": int,
    "video_id": int
}]

We also provide an example prediction JSON here. Refer to this file to check the correct format (a rough programmatic check is sketched after these steps).

  2. Evaluate on TAO-Amodal
cd tools
python eval_on_tao_amodal.py --track_result /path/to/prediction.json \
                             --output_log   /path/to/output.log \
                             --annotation   /path/to/validation_lvis_v1.json

The annotation JSON is provided with our dataset. Evaluation results will be printed to your console and saved to --output_log.
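
Before launching the evaluation script, each entry can be roughly checked against the format from step 1 with a sketch like this. The input path is an assumption; point it at your own prediction file.

# Rough format check against the prediction structure from step 1.
# The input path is an assumption; point it at your own prediction file.
import json

REQUIRED_KEYS = {
    "image_id": int,
    "category_id": int,
    "bbox": list,    # [x, y, width, height]
    "score": float,
    "track_id": int,
    "video_id": int,
}

with open("example_output/prediction.json") as f:
    predictions = json.load(f)

for i, pred in enumerate(predictions):
    for key, expected in REQUIRED_KEYS.items():
        assert key in pred, f"entry {i} is missing '{key}'"
        assert isinstance(pred[key], expected), f"entry {i}: '{key}' should be {expected.__name__}"
    assert len(pred["bbox"]) == 4, f"entry {i}: bbox must have exactly 4 values"

print(f"Checked {len(predictions)} predictions; format looks OK.")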

Citations

@misc{hsieh2023tracking,
    title={Tracking Any Object Amodally},
    author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
    year={2023},
    eprint={2312.12433},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
