Roanak Baviskar's starred repositories
tech-interview-handbook
💯 Curated coding interview preparation materials for busy software engineers
cnn-explainer
Learning Convolutional Neural Networks with Interactive Visualization.
alan-sdk-web
Generative AI SDK for Web to create AI Agents for apps built with JavaScript, React, Angular, Vue, Ember, Electron
pytorch-kaldi
pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by PyTorch, while feature extraction, label computation, and decoding are performed with the Kaldi toolkit.
alan-sdk-ios
Conversational AI SDK for iOS to enable text and voice conversations with actions (Swift, Objective-C)
alan-sdk-android
Conversational AI SDK for Android to enable text and voice conversations with actions (Java, Kotlin)
alan-sdk-flutter
Conversational AI SDK for Flutter to enable text and voice conversations with actions (iOS and Android)
audiomentations
A Python library for audio data augmentation. Inspired by albumentations. Useful for machine learning.
alan-sdk-ionic
Conversational AI SDK for Ionic to enable text and voice conversations with actions (React, Angular, Vue)
alan-sdk-cordova
Conversational AI SDK for Apache Cordova to enable text and voice conversations with actions (iOS and Android)
style-based-gan-pytorch
Implementation of "A Style-Based Generator Architecture for Generative Adversarial Networks" in PyTorch
huggingsound
HuggingSound: A toolkit for speech-related tasks based on Hugging Face's tools
StyleGAN.pytorch
A PyTorch implementation for StyleGAN with full features.
Automatic-Speech-recognition-for-Speech-Assessment-of-Persian-Preschool-Children
Preschool evaluation is crucial because it gives teachers and parents valuable insight into children's growth and development. The COVID-19 pandemic has highlighted the necessity of online assessment for preschool children. One of the areas that should be tested is their ability to speak. Employing an off-the-shelf Automatic Speech Recognition (ASR) system would not help, since such systems are pre-trained on voices that differ from children's in frequency and amplitude. Because most are pre-trained on data within a specific amplitude range, their training objectives do not prepare them for voices at different amplitudes. To overcome this issue, we added a new objective, called Random Frequency Pitch (RFP), to the masking objective of the Wav2Vec 2.0 model. In addition, we used our newly introduced dataset to fine-tune our model for Meaningless Words (MW) and Rapid Automatic Naming (RAN) tests. Using masking in combination with RFP outperforms the masking objective of Wav2Vec 2.0 alone, reaching a Word Error Rate (WER) of 1.35. Our new approach reaches a WER of 6.45 on the Persian section of the CommonVoice dataset. Furthermore, our novel methodology produces positive outcomes in zero- and few-shot scenarios.