Dense Events Grounding in Video (AAAI 2021 oral)

Introduction

This is a PyTorch implementation of the Dense Events Propagation Network (DepNet) on ActivityNet Captions for the AAAI 2021 oral paper "Dense Events Grounding in Video".

Dataset

Please download the visual features from the official website of ActivityNet: Official C3D Feature. Preprocessed annotation files can be downloaded here.
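
As a quick sanity check after downloading, a minimal sketch like the one below can load the feature file and an annotation file. The file names, the HDF5 layout, and the use of h5py/json here are assumptions for illustration only; adjust them to match the files you actually download.

# Illustrative data sanity check; file names and internal layouts are assumptions.
import json
import h5py

FEATURE_FILE = "activitynet_c3d_features.hdf5"  # hypothetical path to the official C3D feature file (assumed HDF5)
ANNOTATION_FILE = "train.json"                   # hypothetical path to a preprocessed annotation file (assumed JSON)

with h5py.File(FEATURE_FILE, "r") as feats:
    video_ids = list(feats.keys())
    print("videos with C3D features:", len(video_ids))
    # Print the layout of one entry; the exact internal structure may differ.
    print("example entry:", feats[video_ids[0]])

with open(ANNOTATION_FILE) as f:
    annotations = json.load(f)
print("annotated videos:", len(annotations))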

Prerequisites

  • Python 3.5
  • PyTorch 1.4.0
  • torchtext
  • easydict
  • terminaltables

Training

Use the following commands for training:

cd moment_localization && export CUDA_VISIBLE_DEVICES=0
python dense_train.py --verbose --cfg ../experiments/dense_activitynet/acnet.yaml

You may get better results than those reported in our paper thanks to code updates.

Citation

If you use our code or models in your research, please cite:

@inproceedings{bao2021dense,
  title     = {Dense Events Grounding in Video},
  author    = {Bao, Peijun and Zheng, Qian and Mu, Yadong},
  booktitle = {AAAI},
  year      = {2021}
}
