CJJ2923's repositories
MMSA-FET
A tool for extracting multimodal features from videos.
M-SENA
M-SENA: All-in-One Platform for Multimodal Sentiment Analysis
Chinese-BERT-wwm
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
FG2021-BoLD
[IEEE FG2021] PyTorch code for paper titled "Leveraging Semantic Scene Characteristics and Multi-Stream Convolutional Architectures in a Contextual Approach for Video-Based Visual Emotion Recognition in the Wild".
xmodaler
X-modaler is a versatile, high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
Self-MM
Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis"
bert
TensorFlow code and pre-trained models for BERT
mtl-eadrg
Emotion-Aware Dialogue Response Generation by Multi-Task Learning
OpenViDial
Code, models, and data for the OpenViDial dataset
HGNN
Code for "Infusing Multi-Source Knowledge with Heterogeneous Graph Neural Network for Emotional Conversation Generation" (AAAI 2021)
Multimodal-Transformer
[ACL'19] [PyTorch] Multimodal Transformer
newsCls
News text classification
OpenFace
OpenFace: a state-of-the-art tool for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation.
RaNet
Source code for RaNet (EMNLP 2021)
BBFN
Implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis"
Multimodal-Infomax
Official implementation of the paper "Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis" (EMNLP 2021).
-
A No-Recurrence Sequence-to-Sequence Model for Speech Recognition
AOPG
Anchor-free Oriented Proposal Generator for Object Detection
Self-Supervised-Embedding-Fusion-Transformer
Code for the IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion".
Maria
PyTorch implementation for ACL 2021 paper "Maria: A Visual Experience Powered Conversational Agent".
MISA
MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis
MELD
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
roberta_zh
Pre-trained RoBERTa models for Chinese (RoBERTa for Chinese)
visdial-gnn
PyTorch code for Reasoning Visual Dialogs with Structural and Partial Observations
KdConv
KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation