Mahmoud Ashraf's repositories
whisper-diarization
Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper
ctc-forced-aligner
Text to speech alignment using CTC forced alignment
AutoencoderCompression
Learned Image Compression Using Autoencoder Architecture
faster-whisper
Faster Whisper transcription with CTranslate2
Aligner-SUPERB
Speech-to-text forced alignment for the Speech processing Universal PERformance Benchmark (SUPERB)
audio
Data manipulation and transformation for audio signal processing, powered by PyTorch
OpenCL-Matrix-Multiplication
A simple program implementing matrix multiplication on the GPU using OpenCL
OpenMP-KMeans-Clustering
A simple implementation of K-Means clustering using OpenMP, written in C++
perceptual-quality
Perceptual quality metrics for TensorFlow
Pthreads-Matrix-Multiplication
An example program parallelizing matrix multiplication using POSIX threads, written in C
CTranslate2
Fast inference engine for Transformer models
deepmultilingualpunctuation
A python package for deep multilingual punctuation prediction.
llama_index
LlamaIndex is a data framework for your LLM applications
NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
ReDimNet
The official PyTorch implementation of the Interspeech 2024 paper "Reshape Dimensions Network for Speaker Recognition"
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
TensorRT-Model-Optimizer
TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, etc. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.