DmitryRyumin / AVCER

Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion

Home Page: https://elenaryumina.github.io/AVCER

The official repository for "Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion", accepted at CVPRW 2024.

Abstract

Compound Expression Recognition (CER), as a part of affective computing, is a novel task in intelligent human-computer interaction and multimodal user interfaces. We propose a novel audio-visual method for CER. Our method relies on emotion recognition models that fuse modalities at the emotion probability level, while decisions regarding the prediction of compound expressions are based on the pair-wise sum of weighted emotion probability distributions. Notably, our method does not use any training data specific to the target task, so the problem is addressed as a zero-shot classification task. The method is evaluated in multi-corpus training and cross-corpus validation setups. Without training on the target corpus or target task, we achieved F1-scores of 32.15% and 25.56% on the AffWild2 and C-EXPR-DB test subsets, respectively. Our method is therefore on par with methods trained on the target corpus or target task.
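
To illustrate the fusion rule described above, the minimal sketch below fuses audio and visual emotion probability distributions with modality weights, then scores each compound expression as the pair-wise sum of the fused probabilities of its two constituent basic emotions. The emotion order, the modality weights, and the compound list are illustrative assumptions, not the exact configuration used in the paper.

```python
import numpy as np

# Seven basic emotions, in the order the (assumed) emotion models output them.
EMOTIONS = ["Neutral", "Anger", "Disgust", "Fear", "Happiness", "Sadness", "Surprise"]

# Hypothetical compound classes: each is a pair of basic emotions
# (a C-EXPR-DB-style compound set; the exact set and order are assumptions here).
COMPOUNDS = [
    ("Fear", "Surprise"),       # Fearfully Surprised
    ("Happiness", "Surprise"),  # Happily Surprised
    ("Sadness", "Surprise"),    # Sadly Surprised
    ("Disgust", "Surprise"),    # Disgustedly Surprised
    ("Anger", "Surprise"),      # Angrily Surprised
    ("Sadness", "Fear"),        # Sadly Fearful
    ("Sadness", "Anger"),       # Sadly Angry
]

def fuse_probabilities(p_audio, p_video, w_audio=0.5, w_video=0.5):
    """Weighted fusion of per-modality emotion probability distributions."""
    fused = w_audio * np.asarray(p_audio) + w_video * np.asarray(p_video)
    return fused / fused.sum()  # renormalize to a valid distribution

def predict_compound(p_fused):
    """Score each compound class by the pair-wise sum of the fused
    probabilities of its two constituent basic emotions."""
    idx = {name: i for i, name in enumerate(EMOTIONS)}
    scores = [p_fused[idx[a]] + p_fused[idx[b]] for a, b in COMPOUNDS]
    best = int(np.argmax(scores))
    return COMPOUNDS[best], scores[best]

# Toy example: the audio model leans toward Surprise, the video model toward Happiness.
p_audio = [0.05, 0.05, 0.05, 0.10, 0.15, 0.10, 0.50]
p_video = [0.05, 0.05, 0.05, 0.05, 0.55, 0.10, 0.15]
compound, score = predict_compound(fuse_probabilities(p_audio, p_video))
print(compound, round(score, 3))  # -> ('Happiness', 'Surprise') 0.675
```

In practice, the modality weights would be selected on development data in the multi-corpus setup; in the toy example the fused distribution is dominated by Happiness and Surprise, so the pair-wise sum picks Happily Surprised.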

Acknowledgments

Parts of this project page were adapted from the Nerfies page.

Website License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Languages

Jupyter Notebook: 54.7%
Python: 33.4%
JavaScript: 9.0%
HTML: 1.9%
CSS: 1.0%