
CoLA: Weakly-Supervised Temporal Action Localization

PyTorch implementation of the paper accepted at CVPR'21:

CoLA: Weakly-Supervised Temporal Action Localization with Snippet Contrastive Learning

Can Zhang, Meng Cao, Dongming Yang, Jie Chen and Yuexian Zou*.

ArXiv: https://arxiv.org/abs/2103.16392

Updates

  • [14 Feb 2022]

    • We have released the features and codebase of our CoLA on ActivityNet v1.2 dataset here.
  • [21 July 2021]

    • We have released the codebase and models of our CoLA.
    • Note that we have fine-tuned some hyper-parameter settings, so the experimental results are better (+2.1% mAP@0.5, +0.8% mAP@AVG) than in the original paper! Details are as follows:

      CoLA mAP@tIoU(%)   0.1    0.2    0.3    0.4    0.5    0.6    0.7    AVG
      original paper     66.2   59.5   51.5   41.9   32.2   22.0   13.1   40.9
      this codebase      66.1   60.0   52.1   43.1   34.3   23.5   13.1   41.7
      gain (Δ)           -0.1   +0.5   +0.6   +1.2   +2.1   +1.5    0.0   +0.8

    • [Results Reproducible] You can reproduce the above results without changing any line of our code. (Here AVG is the arithmetic mean over the seven tIoU thresholds; see the sketch after this list.)
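As a quick check on the table above, the AVG column is just the mean of the seven per-threshold mAPs; the numbers below are copied from the table rows:

    # AVG column = arithmetic mean of mAP@0.1 ... mAP@0.7
    def avg(xs):
        return round(sum(xs) / len(xs), 1)

    print(avg([66.2, 59.5, 51.5, 41.9, 32.2, 22.0, 13.1]))  # 40.9 (original paper row)
    print(avg([66.1, 60.0, 52.1, 43.1, 34.3, 23.5, 13.1]))  # 41.7 (this codebase row)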

Content

Dependencies

Please make sure Python>=3.6 is installed (Anaconda3 is recommended).

Required packages are listed in requirements.txt. You can install them by running:

pip install -r requirements.txt

Code and Data Preparation

  1. Get the code. Clone this repo with git:

    • For THUMOS'14 experiments:

      git clone https://github.com/zhang-can/CoLA
      
    • For ActivityNet experiments:

      git clone -b anet12 https://github.com/zhang-can/CoLA
      
  2. Prepare the features.

    • Here, we provide the two-stream I3D features for THUMOS'14. You can download them from Google Drive or Weiyun.
    • (ActivityNet v1.2 features are available here.)
    • Unzip the downloaded features into the data folder. Make sure the data structure is as below (a quick sanity-check sketch follows this list).

      data
      └── THUMOS14
          ├── gt.json
          ├── split_train.txt
          ├── split_test.txt
          └── features
              ├── ...
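After unzipping, you can sanity-check the layout with a short Python snippet. This is a hedged sketch: it assumes one feature file per video, named after the video IDs listed in the split files (the actual file naming in the released features may differ).

    import json
    import os

    root = "data/THUMOS14"

    # gt.json should parse as valid JSON (the temporal annotations).
    with open(os.path.join(root, "gt.json")) as f:
        json.load(f)

    # Video IDs for the test split, one per line.
    with open(os.path.join(root, "split_test.txt")) as f:
        videos = [line.strip() for line in f if line.strip()]

    # Assumption: each video has at least one feature file starting with its ID.
    names = os.listdir(os.path.join(root, "features"))
    missing = [v for v in videos if not any(n.startswith(v) for n in names)]
    print(f"{len(videos)} test videos, {len(missing)} missing feature files")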

Training

You can use the following command to train CoLA:

python main_cola.py train

After training, you will get the results listed in the table above.
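For orientation, main_cola.py takes the run mode as a positional argument (train here, test MODEL_PATH below). A minimal sketch of such a command-line interface, with hypothetical handler names and no claim to match the script's real argument parsing:

    import argparse

    parser = argparse.ArgumentParser(description="illustrative train/test dispatcher")
    parser.add_argument("mode", choices=["train", "test"])
    parser.add_argument("model_path", nargs="?", default=None,
                        help="checkpoint path, required in test mode")
    args = parser.parse_args()

    if args.mode == "test" and args.model_path is None:
        parser.error("test mode requires MODEL_PATH")
    # run_train() or run_test(args.model_path) would be dispatched here (hypothetical).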

Testing

You can evaluate a trained model by running:

python main_cola.py test MODEL_PATH

Here, MODEL_PATH denotes the path of the trained model.

This script will report the localization performance in terms of mean average precision (mAP) at different tIoU thresholds.
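Here, tIoU is the one-dimensional intersection-over-union between a predicted temporal segment and a ground-truth segment; a prediction counts as a true positive at threshold t when its tIoU with a matched ground-truth instance is at least t. A minimal sketch of the metric itself:

    def temporal_iou(pred, gt):
        # pred and gt are (start, end) pairs in seconds (or frames).
        (ps, pe), (gs, ge) = pred, gt
        inter = max(0.0, min(pe, ge) - max(ps, gs))
        union = (pe - ps) + (ge - gs) - inter
        return inter / union if union > 0 else 0.0

    print(temporal_iou((2.0, 7.0), (4.0, 9.0)))  # 0.4286...: inter = 3, union = 7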

You can download our trained model from Google Drive or Weiyun.

Other Info

References

This repository is inspired by existing baseline implementations for the WS-TAL (weakly-supervised temporal action localization) task.

Citation

Please star this repo and cite the following paper if you find our CoLA useful for your research:

@InProceedings{zhang2021cola,
    author    = {Zhang, Can and Cao, Meng and Yang, Dongming and Chen, Jie and Zou, Yuexian},
    title     = {CoLA: Weakly-Supervised Temporal Action Localization With Snippet Contrastive Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {16010-16019}
}

Contact

For any questions, please feel free to open an issue or contact:

Can Zhang: zhang.can.pku@gmail.com

License

This project is released under the MIT License.

