Machadowisck's starred repositories

system-design-primer

Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.

Language: Python · License: NOASSERTION · Stars: 265,214 · Issues: 0

speech-emotion-ptbr

Classification of emotions based on speech prosody (intonation, rhythm, stress) in Portuguese

Language: Jupyter Notebook · Stars: 3 · Issues: 0

AI-Blocks

A powerful and intuitive WYSIWYG interface that allows anyone to create Machine Learning models!

Language: JavaScript · License: NOASSERTION · Stars: 1,866 · Issues: 0

WikiSQL

A large annotated semantic parsing corpus for developing natural language interfaces.

Language: HTML · License: BSD-3-Clause · Stars: 1,599 · Issues: 0

audiocraft

Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.

Language: Python · License: MIT · Stars: 20,288 · Issues: 0

professional-programming

A collection of learning resources for curious software engineers

Language: Python · License: MIT · Stars: 45,986 · Issues: 0

cpp-libface

Fastest auto-complete in the east

Language: C++ · Stars: 259 · Issues: 0

element-web

A glossy Matrix collaboration client for the web.

Language: TypeScript · License: Apache-2.0 · Stars: 10,872 · Issues: 0

huggingsound

HuggingSound: A toolkit for speech-related tasks based on Hugging Face's tools

Language: Python · License: MIT · Stars: 429 · Issues: 0

tutorial

AfroPython tutorial

License: NOASSERTION · Stars: 21 · Issues: 0

DeepSpeech

DeepSpeech is an open-source embedded (offline, on-device) speech-to-text engine that can run in real time on devices ranging from a Raspberry Pi 4 to high-power GPU servers.

Language: C++ · License: MPL-2.0 · Stars: 24,829 · Issues: 0

brasil.gov.portal

Plone implementation of the Brazilian government's standard digital identity portal (Portal Padrão da Identidade Digital de Governo)

Language: Python · Stars: 35 · Issues: 0

tais

Tais is a virtual assistant that answers users' questions about the Rouanet Law (Lei Rouanet).

Language: Jupyter Notebook · License: GPL-3.0 · Stars: 98 · Issues: 0

portuguese-bert

Portuguese pre-trained BERT models

Language: Python · License: NOASSERTION · Stars: 782 · Issues: 0

gappy-mwes

Code for NAACL 2019 paper: "Bridging the Gap: Attending to Discontinuity in Identification of Multiword Expressions"

Language: Python · Stars: 16 · Issues: 0

zuco-nlp

All NLP experiments described in arXiv paper 1904.02682

Language: Python · Stars: 26 · Issues: 0

pizzadedados

The first and most beloved data science podcast in Brazil

Language: SCSS · License: MIT · Stars: 14 · Issues: 0

datascience-pizza

🍕 Repository gathering study materials on data analysis and related fields, companies that work with data, and a glossary of concepts

License: MPL-2.0 · Stars: 2,337 · Issues: 0

CMU-MultimodalSDK-Tutorials

This is a short tutorial for using the CMU-MultimodalSDK.

Language: Jupyter Notebook · Stars: 78 · Issues: 0

Voice_Emotion

Detecting emotion in voices

Language: Jupyter Notebook · Stars: 46 · Issues: 0

Voice-Emotion-Detector

A voice emotion detector that recognizes emotion in speech audio using one-dimensional CNNs (convolutional neural networks), built with Keras and TensorFlow in a Jupyter Notebook.

Language: Jupyter Notebook · Stars: 99 · Issues: 0
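The 1-D CNN pipeline this entry describes (convolve learned filters over an audio feature sequence, pool, classify) can be sketched in plain NumPy. Everything here is a hypothetical illustration with random weights — the shapes, filter count, and four emotion classes are assumptions, not code from the repository:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, bias):
    # Valid 1-D convolution: x is (length,), kernels is (n_filters, width).
    width = kernels.shape[1]
    windows = np.stack([x[i:i + width] for i in range(len(x) - width + 1)])
    return windows @ kernels.T + bias          # (length - width + 1, n_filters)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical shapes: a 200-sample feature vector (e.g. averaged MFCCs),
# 8 filters of width 5, and 4 emotion classes.
x = rng.standard_normal(200)
kernels = rng.standard_normal((8, 5)) * 0.1
bias = np.zeros(8)
w_out = rng.standard_normal((8, 4)) * 0.1

h = np.maximum(conv1d(x, kernels, bias), 0.0)  # ReLU activations, (196, 8)
pooled = h.mean(axis=0)                        # global average pooling -> (8,)
probs = softmax(pooled @ w_out)                # probabilities over 4 emotions
print(probs.shape)                             # -> (4,)
```

In the actual notebook these weights would be trained with Keras rather than drawn at random; the sketch only shows the shape of the forward pass.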

Speech-Emotion-Analysis

Human emotions are one of the strongest forms of communication: even someone who does not understand a language can still understand the emotions a speaker conveys. The project builds a Speech Emotion Analyzer that uses deep learning to classify a speaker's emotion (neutral, angry, surprised, etc.). Three network architectures — a 1-D CNN, LSTMs, and Transformers — are deployed for the classification task, and two feature-extraction methods (MFCCs and mel spectrograms) are used to capture the characteristics of a voice signal and compared on their ability to produce high-quality results in deep-learning models.

Language: Jupyter Notebook · Stars: 22 · Issues: 0
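The two feature-extraction methods this entry compares differ only in a final step: a mel spectrogram is the log of mel-filtered spectral energies, and MFCCs additionally decorrelate it with a DCT. A minimal NumPy sketch, with toy parameters and a synthetic tone standing in for real speech (not the repository's code; libraries like librosa do this with more care):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        if c > l:
            fb[i - 1, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[i - 1, c:r] = (r - np.arange(c, r)) / (r - c)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    # Frame the signal, take power spectra, apply the mel filterbank,
    # log-compress (mel spectrogram), then DCT-II to get MFCCs.
    frames = [signal[i:i + n_fft] * np.hamming(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    log_mel = np.log(power @ mel_filterbank(n_mels, n_fft, sr).T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T                     # (n_frames, n_mfcc)

# One second of a 440 Hz tone as a stand-in for a speech clip.
sr = 16000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440 * t), sr=sr)
print(feats.shape)
```

Dropping the final DCT and returning `log_mel` yields the mel-spectrogram features; that is the only difference between the two representations the project compares.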

CycleTransGAN-EVC

CycleTransGAN-EVC: A CycleGAN-based Emotional Voice Conversion Model with Transformer

Language: Python · Stars: 31 · Issues: 0

LGCCT

(DOI: 10.3390/e24071010) LGCCT: A Light Gated and Crossed Complementation Transformer for Multimodal Speech Emotion Recognition

Language: Python · License: GPL-3.0 · Stars: 6 · Issues: 0

spectrogram-soul

Speech emotion recognition using the Audio Spectrogram Transformer on the RESD dataset

Language: Python · License: MIT · Stars: 3 · Issues: 0

mmser

SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings

Language: Jupyter Notebook · Stars: 13 · Issues: 0

Speaker-VGG-CCT

Official implementation of the paper "Speaker VGG CCT: Cross-corpus Speech Emotion Recognition with Speaker Embedding and Vision Transformers" (2022)

Language: Python · Stars: 16 · Issues: 0