aascode's repositories

Multimodal-Emotion-Recognition-on-Comics-scenes-EmoRecCom

ICDAR 2021 competition hosted on CodaLab. The emotions of comic characters are conveyed by the visual information, the text in speech balloons or captions, and the onomatopoeia (comic drawings of words that phonetically imitate, resemble, or suggest the sounds they describe). The task is therefore a multi-modal analysis problem that can take advantage of both computer vision and natural language processing, two of the main interests of the ICDAR community.

Language: Jupyter Notebook · Stargazers: 1 · Issues: 0

Arabic-Topic-Modeling

BERT for Arabic Topic Modeling: An Experimental Study on BERTopic Technique

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

continuous_SER

Code to build a bimodal (text + audio) speech emotion recognition (SER) model that predicts valence, arousal, and dominance (VAD) scores for audio input.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
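The bimodal VAD setup described above can be sketched as a simple late-fusion regression: concatenate a text embedding and an audio embedding, then apply a linear head that outputs the three continuous scores. This is a minimal illustration, not the repository's actual architecture; the embeddings and weights here are random stand-ins.

```python
import numpy as np

def predict_vad(text_emb, audio_emb, W, b):
    """Late fusion: concatenate text and audio embeddings, then apply a
    linear head producing [valence, arousal, dominance]."""
    x = np.concatenate([text_emb, audio_emb])
    return W @ x + b

# Toy example with random weights (a real model would learn W and b).
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(8)    # stand-in for a text encoder output
audio_emb = rng.standard_normal(8)   # stand-in for an audio encoder output
W = rng.standard_normal((3, 16)) * 0.1
b = np.zeros(3)
vad = predict_vad(text_emb, audio_emb, W, b)
print(vad.shape)  # (3,) -> valence, arousal, dominance
```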

cpjku_dcase20

CP-JKU submission to DCASE 2020

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

Depression-Detection-2

Depression detection using a multi-modal fusion framework composed of deep convolutional neural network (DCNN) and deep neural network (DNN) models.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

DepressionDetection

Multi-modal depression detection

Stargazers: 0 · Issues: 0

Engagement-recognition-using-DAISEE-dataset

Implementation of engagement recognition using the DAiSEE dataset

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

har-with-imu-transformer

Inertial-based Human Activity Recognition with Transformers

Language: Python · Stargazers: 0 · Issues: 0

hierarchical-attention-HAR-1

[PAKDD-2021] Hierarchical Self Attention Based Autoencoder for Open-Set Human Activity Recognition

Language: Jupyter Notebook · License: GPL-3.0 · Stargazers: 0 · Issues: 0

Human_Activity_Recognition

Classify an activity from gyroscope and accelerometer sensor data.

Language: Python · Stargazers: 0 · Issues: 0
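A common first step for sensor-based activity classification like the repo above is to slide a fixed-size window over the raw gyroscope/accelerometer stream and summarize each window with simple statistics. This is a generic sketch of that preprocessing, not the repository's code; window size, step, and feature choice are illustrative assumptions.

```python
import numpy as np

def window_features(signal, win=50, step=25):
    """Slide a fixed-size window over a (T, 6) gyro+accel stream and compute
    per-channel mean and standard deviation as classification features."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(1)
stream = rng.standard_normal((200, 6))  # 3 accelerometer + 3 gyroscope axes
X = window_features(stream)
print(X.shape)  # (7, 12): 7 windows, 12 features each
```

Each row of `X` can then be fed to any off-the-shelf classifier.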

ifn-icassp-2011

Python implementation of the paper "Iterative Feature Normalization for Emotional Speech Detection" by Busso et al., published at ICASSP 2011

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0
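The core idea of iterative feature normalization is to estimate normalization statistics only from frames currently judged neutral, so that emotional outliers do not bias the normalization, then re-normalize and repeat. The sketch below is a loose numpy illustration of that loop under assumed thresholds and a simple z-score neutrality criterion, not a faithful reimplementation of the paper.

```python
import numpy as np

def iterative_feature_normalization(features, threshold=1.0, n_iter=5):
    """Hedged sketch of the IFN idea: repeatedly re-estimate mean/std from
    frames that currently look 'neutral' (low deviation), then renormalize
    all frames with those statistics.

    features: (n_frames, n_dims) array of speech features.
    """
    normed = features.copy()
    for _ in range(n_iter):
        # Mean absolute z-score of each frame under the current statistics.
        mu = normed.mean(axis=0)
        sigma = normed.std(axis=0) + 1e-8
        dist = np.abs((normed - mu) / sigma).mean(axis=1)
        neutral = dist < threshold           # frames presumed neutral
        if not neutral.any():
            break
        # Re-estimate statistics on neutral frames only; renormalize all.
        mu_n = features[neutral].mean(axis=0)
        sigma_n = features[neutral].std(axis=0) + 1e-8
        normed = (features - mu_n) / sigma_n
    return normed

rng = np.random.default_rng(3)
out = iterative_feature_normalization(rng.standard_normal((100, 4)))
print(out.shape)  # (100, 4)
```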

Parenting_OnlineUsage

Code and dataset for the paper: Understanding the Usage of Online Media for Parenting from Infancy to Preschool at Scale

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

pose-classification

Pose classification using OpenPose for TV Human interactions dataset

Stargazers: 0 · Issues: 0

SDCNL

Deep Learning for Suicide and Depression Identification with Unsupervised Label Correction

Language: Python · Stargazers: 0 · Issues: 0

SpeakerProfiling

Estimating the age, height, and gender of a speaker from their speech signal.

License: MIT · Stargazers: 0 · Issues: 0

Speech-Emotion-Recognition-4

Speech Emotion Recognition using Deep Learning

Stargazers: 0 · Issues: 0

Speech-Emotion-Recognition-using-Machine-Learning

Speech emotion recognition plays a growing role in today's digital world. This project trains on the RAVDESS dataset and compares 10 different machine learning algorithms by accuracy to find the best performer. The dataset is then cleaned with a masking function that removes unwanted background noise, and all 10 algorithms are re-evaluated on the cleaned speech to confirm which performs best. Finally, the best model is used to predict the emotion of a sample audio file. KEYWORDS: Python, Librosa, Scikit-learn, SoundFile, PyAudio, RAVDESS dataset, MLPClassifier, Logistic Regression, Naive Bayes, K-Neighbors Classifier, XGBoost, LightGBM, Random Forest, Decision Tree, Stochastic Gradient Descent, Support Vector Machine, Jupyter Notebook.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0
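The "mask function" cleaning step described above can be illustrated as a simple energy-based mask: split the audio into frames and drop low-energy frames presumed to be background noise or silence before feature extraction. This is a generic sketch under assumed frame size and threshold, not the project's exact masking code.

```python
import numpy as np

def energy_mask(audio, frame=512, rel_threshold=0.1):
    """Drop frames whose RMS energy falls below a fraction of the loudest
    frame (presumed background noise/silence)."""
    n = len(audio) // frame
    frames = audio[:n * frame].reshape(n, frame)
    energy = np.sqrt((frames ** 2).mean(axis=1))      # per-frame RMS
    keep = energy > rel_threshold * energy.max()
    return frames[keep].ravel()

rng = np.random.default_rng(2)
speech = np.concatenate([rng.standard_normal(2048),          # loud "speech"
                         0.01 * rng.standard_normal(2048)])  # quiet noise
cleaned = energy_mask(speech)
print(len(cleaned))  # 2048: the quiet half is removed
```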

SpeechVision

Speech Vision (SV) is a dysarthric speech recognition system that takes a novel approach to dysarthric ASR: speech features are extracted visually, and SV learns to "see" the shape of the words pronounced by dysarthric individuals.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

youtube-crosstalk

Code and Data for paper: Cross-Partisan Discussions on YouTube: Conservatives Talk to Liberals but Liberals Don't Talk to Conservatives (ICWSM '21)

License: MIT · Stargazers: 0 · Issues: 0