lzxai

0 followers · 0 following


lzxai's starred repositories

scenic

Scenic: A Jax Library for Computer Vision Research and Beyond

Language: Python · License: Apache-2.0 · Stargazers: 3138 · Issues: 40 · Issues: 240

conv-emotion

This repo contains implementations of different architectures for emotion recognition in conversations.

Language: Python · License: MIT · Stargazers: 1304 · Issues: 37 · Issues: 0

AWESOME-FER

Top conferences & journals focused on facial expression recognition (FER) / facial action units (FAU)

multimodal-deep-learning

This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as multimodal sentiment analysis.

Language: OpenEdge ABL · License: MIT · Stargazers: 684 · Issues: 5 · Issues: 8

MMSA

MMSA is a unified framework for Multimodal Sentiment Analysis.

Language: Python · License: MIT · Stargazers: 611 · Issues: 10 · Issues: 96
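
A minimal usage sketch, based on the MMSA project's README: the framework is installed from PyPI and driven through a single entry point. Treat the exact function name, arguments, and values below as assumptions rather than a verified API.

# Hedged sketch: assumes `pip install MMSA` exposes MMSA_run as described in the
# project's documentation; model name, dataset, seeds, and GPU ids are illustrative.
from MMSA import MMSA_run

# Train and evaluate the TFN baseline on the MOSI dataset with one seed.
MMSA_run('tfn', 'mosi', seeds=[1111], gpu_ids=[0])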

Food-Recipe-CNN

Food image to recipe with deep convolutional neural networks.

Language: Jupyter Notebook · Stargazers: 562 · Issues: 29 · Issues: 3

emotion-recognition-using-speech

Building and training a speech emotion recognizer that predicts human emotions using Python, scikit-learn, and Keras

Language: Python · License: MIT · Stargazers: 547 · Issues: 23 · Issues: 34
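
The description names the usual recipe (hand-extracted acoustic features fed to scikit-learn or Keras models). The sketch below is a generic, hypothetical version of that recipe, not this repository's code; file names and labels are placeholders, and librosa is assumed for feature extraction.

# Generic feature-extraction + classifier sketch (not this repository's pipeline).
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, n_mfcc=40):
    # Load audio and average MFCCs over time to get a fixed-size vector.
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Placeholder file names and labels.
X = np.stack([mfcc_features(p) for p in ["angry_01.wav", "happy_01.wav"]])
y = ["angry", "happy"]

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X, y)
print(clf.predict(X))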

MultiBench

[NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning

Language: HTML · License: MIT · Stargazers: 457 · Issues: 16 · Issues: 32

CLUB

Code for ICML2020 paper - CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information

Language: Jupyter Notebook · Stargazers: 296 · Issues: 7 · Issues: 26
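
For reference, the CLUB objective from the ICML 2020 paper upper-bounds mutual information as I_CLUB(x; y) = E_{p(x,y)}[log q(y|x)] - E_{p(x)}E_{p(y)}[log q(y|x)], where q(y|x) is a learned Gaussian approximation of p(y|x). The PyTorch sketch below is a minimal restatement of that estimator, not the repository's own class; network sizes are arbitrary.

# Minimal CLUB estimator sketch (sampled negative term via in-batch shuffling).
import torch
import torch.nn as nn

class CLUBEstimator(nn.Module):
    def __init__(self, x_dim, y_dim, hidden=64):
        super().__init__()
        # Variational network predicting the mean and log-variance of q(y|x).
        self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))
        self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))

    def log_likelihood(self, x, y):
        mu, logvar = self.mu(x), self.logvar(x)
        return -0.5 * (((y - mu) ** 2) / logvar.exp() + logvar).sum(dim=1)

    def forward(self, x, y):
        positive = self.log_likelihood(x, y)                              # pairs from p(x, y)
        negative = self.log_likelihood(x, y[torch.randperm(y.size(0))])   # approximates p(x)p(y)
        return (positive - negative).mean()                               # upper bound on I(x; y)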

Low-rank-Multimodal-Fusion

This is the repository for "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", Liu and Shen et al., ACL 2018.
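
The core idea of the ACL 2018 paper is to replace the full tensor-fusion weight with per-modality rank-r factors, so the fusion cost scales linearly in the number of modalities instead of exponentially. The sketch below restates that computation under assumed shapes; it is not the repository's implementation.

# Low-rank fusion sketch: rank-wise sum of the elementwise product of per-modality projections.
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    def __init__(self, input_dims, output_dim, rank=4):
        super().__init__()
        # One factor tensor per modality: (rank, d_m + 1, output_dim);
        # the +1 covers the constant 1 appended to each modality vector.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, output_dim) * 0.1) for d in input_dims]
        )

    def forward(self, modalities):
        fused = None
        for z, factor in zip(modalities, self.factors):
            z1 = torch.cat([z, torch.ones(z.size(0), 1, device=z.device)], dim=1)  # (B, d_m + 1)
            proj = torch.einsum("bd,rdo->bro", z1, factor)                          # (B, rank, out)
            fused = proj if fused is None else fused * proj
        return fused.sum(dim=1)  # (B, output_dim)

# Example with hypothetical audio/video/text dimensions.
fusion = LowRankFusion(input_dims=[74, 35, 300], output_dim=64)
out = fusion([torch.randn(8, 74), torch.randn(8, 35), torch.randn(8, 300)])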

opensmile-python

Python package for openSMILE

Language: Python · License: NOASSERTION · Stargazers: 230 · Issues: 10 · Issues: 53
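
A short usage sketch for the package (the feature set, feature level, and file name below are illustrative choices, not the only options):

# Extract eGeMAPS functionals from one audio file with the opensmile package.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("speech.wav")  # pandas DataFrame, one row of functionals
print(features.shape)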

Mead

MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation [ECCV2020]

Language: Python · License: MIT · Stargazers: 227 · Issues: 8 · Issues: 34

Multimodal-Emotion-Recognition

This repository contains the code for the paper "End-to-End Multimodal Emotion Recognition using Deep Neural Networks".

Language: Python · License: BSD-3-Clause · Stargazers: 224 · Issues: 8 · Issues: 8

Vision-KAN

KAN for Vision Transformer

Language: Python · License: MIT · Stargazers: 148 · Issues: 7 · Issues: 10

multimodal-emotion-recognition

This repository provides an implementation of the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data".

Language: Python · License: MIT · Stargazers: 95 · Issues: 4 · Issues: 21

MMEmotionRecognition

Repository with the code of the paper "A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on the RAVDESS dataset"

Language: Python · License: MIT · Stargazers: 86 · Issues: 0 · Issues: 0

MSAF

Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"

Language: Python · License: MIT · Stargazers: 71 · Issues: 4 · Issues: 11

pytorch-facial-expression-recognition

Lightweight facial expression (emotion) recognition model

DMER

A survey of deep multimodal emotion recognition.

fusemix

Data-Efficient Multimodal Fusion on a Single GPU

Language: Python · Stargazers: 37 · Issues: 8 · Issues: 0

speech-emotion-recognition-iemocap

Detect emotion from audio signals of the IEMOCAP dataset using a multimodal approach. Uses acoustic features, mel-spectrograms, and text as input to ML/DL models.

Language: Jupyter Notebook · License: GPL-3.0 · Stargazers: 36 · Issues: 3 · Issues: 3

Joint-Cross-Attention-for-Audio-Visual-Fusion

IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"
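
The paper's joint cross-attention builds the attention keys from a concatenated audio-visual representation; the sketch below shows only the generic cross-modal attention building block it extends, using standard PyTorch modules with illustrative dimensions.

# Generic audio-visual cross-attention sketch (not the paper's exact joint formulation).
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.a2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, video):
        # audio, video: (batch, time, dim) feature sequences.
        a_att, _ = self.a2v(query=audio, key=video, value=video)  # audio attends to video
        v_att, _ = self.v2a(query=video, key=audio, value=audio)  # video attends to audio
        return torch.cat([a_att.mean(dim=1), v_att.mean(dim=1)], dim=-1)

fused = CrossModalAttention()(torch.randn(2, 50, 128), torch.randn(2, 50, 128))
print(fused.shape)  # torch.Size([2, 256])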

FV2ES

A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition

Language: Python · Stargazers: 24 · Issues: 0 · Issues: 0

CCIM

[CVPR2023] Context De-confounded Emotion Recognition

Language: Python · License: MIT · Stargazers: 14 · Issues: 1 · Issues: 3

SERN

A Self-Attentive Emotion Recognition Network

Language: Python · License: MIT · Stargazers: 12 · Issues: 3 · Issues: 2

MultiMAE-DER

TensorFlow code implementation of "MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition"

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 11 · Issues: 1 · Issues: 0

AV4SER

PyTorch implementation for Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition

LoLEmoGameRecognition

Multimodal Joint Emotion and Game Context Recognition in League of Legends Livestreams

Language: Python · License: MIT · Stargazers: 5 · Issues: 5 · Issues: 2

abaw5

5th ABAW Competition