Emmanuel Kahembwe's repositories
adversarialGAN
Adversarial Learning of Robust and Safe Controllers for Cyber-Physical Systems
PlayableVideoGeneration
Official PyTorch implementation of "Playable Video Generation"
pytorch-pfn-extras
Supplementary components to accelerate research and development in PyTorch
Ultra-Data-Efficient-GAN-Training
[Preprint] "Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly", Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang
bookcorpus
Crawl BookCorpus
chatgpt-chrome-extension
A ChatGPT Chrome extension. Integrates ChatGPT into every text box on the internet.
cudipy
CuPy-accelerated implementation of DIPY (Diffusion Imaging in Python)
differentiable-robot-model
Differentiable models of robot manipulators, which make it possible to learn the robot models (typically assumed to be known) used for control and motion planning. These models can then be used in more complex reinforcement learning settings, hopefully making learning more sample-efficient, and can also serve as learned simulators. A minimal sketch of the idea follows.
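As an illustration of the general idea (not this repository's API), here is a minimal PyTorch sketch of a differentiable kinematic model: a two-link planar arm whose link lengths are learnable parameters, recovered by gradient descent from observed end-effector positions. All names, shapes, and the training setup are illustrative assumptions.

```python
# Minimal sketch, not the library's API: a differentiable 2-link planar arm
# whose link lengths are learned from observed end-effector positions.
import torch

class TwoLinkArm(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Unknown kinematic parameters to be recovered by gradient descent.
        self.link_lengths = torch.nn.Parameter(torch.ones(2))

    def forward(self, q):
        # q: (batch, 2) joint angles -> (batch, 2) end-effector xy position.
        l1, l2 = self.link_lengths
        x = l1 * torch.cos(q[:, 0]) + l2 * torch.cos(q[:, 0] + q[:, 1])
        y = l1 * torch.sin(q[:, 0]) + l2 * torch.sin(q[:, 0] + q[:, 1])
        return torch.stack([x, y], dim=-1)

# "Ground-truth" arm, used only to generate synthetic observations.
true_lengths = torch.tensor([0.5, 0.3])
q = torch.rand(256, 2) * 3.14
target = torch.stack([
    true_lengths[0] * torch.cos(q[:, 0]) + true_lengths[1] * torch.cos(q[:, 0] + q[:, 1]),
    true_lengths[0] * torch.sin(q[:, 0]) + true_lengths[1] * torch.sin(q[:, 0] + q[:, 1]),
], dim=-1)

model = TwoLinkArm()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(q), target)
    loss.backward()
    opt.step()
print(model.link_lengths.detach())  # should approach [0.5, 0.3]
```

Because the predicted positions are differentiable with respect to the kinematic parameters, the same kind of model can sit inside a control, planning, or reinforcement-learning loop.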
fast-transformers
PyTorch library for fast transformer implementations
FlexFlow
A distributed deep learning framework.
google-research
Google Research
gpt_index
An index created by GPT to organize external information and answer queries!
LST
PyTorch Implementation for Locality Sensitive Teaching
OpenBB
Investment Research for Everyone, Everywhere.
petals
🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
Real-Time-Voice-Cloning
Clone a voice in 5 seconds to generate arbitrary speech in real-time
rwkv-cpp-cuda
A Torch-free C++ implementation of RWKV using 8-bit quantization, written in CUDA
RWKV-LM
RWKV is an RNN with transformer-level performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings. A simplified sketch of its core recurrence follows.
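For intuition, here is a heavily simplified sketch of the WKV time-mixing recurrence that gives RWKV its RNN-style inference. It omits token shift, channel mixing, the projection layers, and the numerical-stability tricks used in the actual repository; all names, shapes, and the decay parametrization are illustrative assumptions.

```python
# Simplified sketch of an RWKV-style WKV recurrence (inference form).
# Not the repository's implementation; stability tricks are omitted.
import torch

def wkv_recurrent(k, v, w, u):
    # k, v: (T, C) per-step keys/values; w: (C,) positive decay rate;
    # u: (C,) "bonus" weight applied to the current token.
    T, C = k.shape
    a = torch.zeros(C)   # running exp-weighted sum of values
    b = torch.zeros(C)   # running sum of the corresponding weights
    outputs = []
    decay = torch.exp(-w)            # per-channel exponential state decay
    for t in range(T):
        e_k = torch.exp(k[t])
        # Current token receives an extra bonus weight exp(u + k_t).
        out = (a + torch.exp(u) * e_k * v[t]) / (b + torch.exp(u) * e_k)
        outputs.append(out)
        # Decay the fixed-size state, then fold in the current token.
        a = decay * a + e_k * v[t]
        b = decay * b + e_k
    return torch.stack(outputs)

T, C = 8, 16
out = wkv_recurrent(torch.randn(T, C), torch.randn(T, C),
                    torch.full((C,), 0.5), torch.zeros(C))
print(out.shape)  # torch.Size([8, 16])
```

Because the state (a, b) is a fixed-size summary of the past, each new token costs O(C) work regardless of context length, which is where the fast, low-VRAM inference and the effectively unbounded ctx_len come from.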
wann-gradient-based
Gradient-Descent based search for Weight-Agnostic Neural Networks
xtreme-distil-transformers
XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale