
B2C-AFM: Bi-directional Co-Temporal and Cross-Spatial Attention Fusion Model for Human Action Recognition

Introduction

This repo is the official PyTorch implementation of B2C-AFM: Bi-directional Co-Temporal and Cross-Spatial Attention Fusion Model for Human Action Recognition (IEEE TIP).

Directory

Root

The directory structure of ${ROOT} is described below.

${ROOT}
|-- assets
|-- common
|-- data
|-- main
|-- output
|-- tool
|-- vis
  • assets contains the paper figures (.png).
  • common contains the core code for B2C.
  • data contains the data-loading code and soft links to the images and annotations directories.
  • main contains the high-level code for training and testing the network.
  • output contains logs, trained models, visualized outputs, and test results.
  • tool contains code to merge the models from the rgb_only and pose_only stages.
  • vis contains visualization code.
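The merge step performed by tool can be illustrated with a hedged sketch: the function below combines two single-stream checkpoints into one state dict by namespacing each stream's weights. The function name, the key prefixes (rgb_stream, pose_stream), and the plain-dict stand-ins for torch state dicts (which would normally come from torch.load) are all assumptions for illustration, not the repository's actual code.

```python
# Hypothetical sketch of merging the rgb_only and pose_only stage checkpoints
# into a single checkpoint for the fused model. Plain dicts stand in for the
# state dicts that torch.load(...) would return in practice.

def merge_state_dicts(rgb_state, pose_state):
    """Merge two stage checkpoints, namespacing each stream's weights."""
    merged = {}
    for key, value in rgb_state.items():
        merged["rgb_stream." + key] = value   # weights from the rgb_only stage
    for key, value in pose_state.items():
        merged["pose_stream." + key] = value  # weights from the pose_only stage
    return merged

if __name__ == "__main__":
    rgb = {"conv1.weight": [0.1], "fc.bias": [0.2]}
    pose = {"conv1.weight": [0.3], "fc.bias": [0.4]}
    print(sorted(merge_state_dicts(rgb, pose)))
```

With real checkpoints you would replace the plain dicts with torch.load on each stage's saved model and torch.save the merged result.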

Running

You need to follow the directory structure of the data as described in the IntegralAction repository. Details on how to train and test our code can be found in the Git repository.
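Since data expects soft links to the images and annotations directories, a minimal sketch of setting them up might look as follows; the dataset name (Kinetics) and the source paths are placeholders, not the repository's prescribed layout.

```shell
# Hypothetical example of wiring up the data directory with soft links,
# assuming an IntegralAction-style layout. Replace the dataset name and
# the /path/to/... sources with your actual locations.
mkdir -p data/Kinetics
ln -sfn /path/to/kinetics/images      data/Kinetics/images
ln -sfn /path/to/kinetics/annotations data/Kinetics/annotations
ls -l data/Kinetics
```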

Demos

Acknowledgements

We thank IntegralAction for their outstanding work.

Reference

@article{guo2023b2c,
  title={B2C-AFM: Bi-directional Co-Temporal and Cross-Spatial Attention Fusion Model for Human Action Recognition},
  author={Guo, Fangtai and Jin, Tianlei and Zhu, Shiqiang and Xi, Xiangming and Wang, Wen and Meng, Qiwei and Song, Wei and Zhu, Jiakai},
  journal={IEEE Transactions on Image Processing},
  year={2023},
  publisher={IEEE}
}

@inproceedings{moon2021integralaction,
  title={IntegralAction: Pose-driven Feature Integration for Robust Human Action Recognition in Videos},
  author={Moon, Gyeongsik and Kwon, Heeseung and Lee, Kyoung Mu and Cho, Minsu},
  booktitle={The IEEE Conference on Computer Vision and Pattern Recognition Workshop (CVPRW)},
  year={2021}
}
