
AME


Exploring Motion Cues for Video Test-Time Adaptation

This repo holds the code and models for the AME framework presented at ACM MM 2023:
Exploring Motion Cues for Video Test-Time Adaptation. Runhao Zeng, Qi Deng, Huixuan Xu, Shuaicheng Niu, Jian Chen. ACM MM '23, October 29-November 3, 2023, Ottawa, ON, Canada


Usage Guide

Prerequisites

The training and testing of AME are implemented in PyTorch for ease of use.

  • PyTorch

Other Python dependencies can be installed by running:

pip install -r requirements.txt

Code and Data Preparation

Get the code

git clone --recursive https://github.com/Alvin-Zeng/AME

Dataset

Training and Testing AME

You can use the following command to train and test AME.
The for loop controls which corruption AME is trained and tested on. To use a different corruption, replace contrast with the name of another corruption.

for CORRUPT in contrast 
do
    CUDA_VISIBLE_DEVICES=7 python main.py \
    --seed=507 \
    --log_dir=log_seed/tanet_ucf101/${CORRUPT}/5e-6 \
    --time_log \
    --dataset=ucf101-${CORRUPT} \
    --checkpoint=$PATH_OF_TRAINING_CHECKPOINT \
    --dataset_path=$PATH_OF_TRAINING_DATASET \
    --save_ckpt=$PATH_OF_SAVING_TRAINING_CHECKPOINT \
    --mix \
    --lr=5e-6 \
    --gpus 0
done
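To sweep over every corruption in one run, the single-entry loop above can be extended with the full corruption list from the results table. A minimal sketch; the make_dataset helper is hypothetical and only illustrates how ${CORRUPT} is substituted into the --dataset flag (the real invocation is the main.py command above):

```shell
#!/bin/sh
# Hypothetical helper (not part of the repo): builds the per-corruption
# dataset name that would be passed as --dataset=...
make_dataset() {
    echo "ucf101-$1"
}

# Sweep over the corruptions listed in the results table.
for CORRUPT in gauss pepper salt shot zoom impulse motion jpeg contrast rain h265.abr
do
    # In the real script, each iteration would run main.py with
    # --dataset=$(make_dataset "$CORRUPT"); here we only print the name.
    make_dataset "$CORRUPT"
done
```

Each iteration writes its logs and checkpoints under a per-corruption directory (as in the --log_dir pattern above), so the runs do not overwrite each other.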

You can also use the other commands in the script folder to train and test on different datasets.

Comparisons of test-time adaptation performance on the UCF101 dataset. * denotes a video domain adaptation method.

| Method | gauss | pepper | salt | shot | zoom | impulse | motion | jpeg | contrast | rain | h265.abr | avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Without Adaptation | 17.50 | 23.05 | 6.85 | 71.82 | 75.55 | 16.94 | 54.77 | 82.92 | 62.89 | 81.31 | 78.54 | 51.98 |
| BN Adaptation | 37.01 | 33.49 | 20.64 | 80.01 | 76.13 | 37.59 | 54.46 | 83.08 | 69.13 | 85.85 | 76.90 | 59.57 |
| NORM | 41.79 | 39.70 | 22.26 | 84.54 | 80.63 | 43.38 | 61.55 | 88.00 | 70.82 | 89.29 | 80.97 | 63.90 |
| Contrast TTA | 36.58 | 27.57 | 21.33 | 74.31 | 69.79 | 36.11 | 49.48 | 80.23 | 24.48 | 78.46 | 74.60 | 52.09 |
| SAR | 48.48 | 43.00 | 22.60 | 85.30 | 68.60 | 35.40 | 40.43 | 86.41 | 64.93 | 81.55 | 77.39 | 59.46 |
| ATCoN* | 60.19 | 50.60 | 32.60 | 84.80 | 78.80 | 62.50 | 69.40 | 84.70 | 71.10 | 86.30 | 78.30 | 69.03 |
| Ours | 72.06 | 64.45 | 53.50 | 86.84 | 77.80 | 67.09 | 63.57 | 88.94 | 71.76 | 90.50 | 80.89 | 74.31 |
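The avg column is the plain mean of the 11 per-corruption accuracies. As a quick check, this one-liner recomputes it for the Ours row (values copied from the table):

```shell
# Mean of the 11 per-corruption accuracies in the "Ours" row;
# prints 74.31, matching the avg column of the table.
echo "72.06 64.45 53.50 86.84 77.80 67.09 63.57 88.94 71.76 90.50 80.89" \
| awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.2f\n", s / NF }'
```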

Citation

Please cite the following paper if you find AME useful in your research:

@inproceedings{ACMMM-AME,
  author    = {Runhao Zeng and
               Qi Deng and
               Huixuan Xu and
               Shuaicheng Niu and
               Jian Chen},
  title     = {Exploring Motion Cues for Video Test-Time Adaptation},
  booktitle = {ACM MM 2023},
  year      = {2023},
}

Contact

For any questions, please open an issue or contact:

Runhao Zeng: runhaozeng.cs@gmail.com
