Ceyhun Derinbogaz's repositories
Cryptocurrency-Data-Fetcher-for-Deep-Learning
A small utility that fetches market data from multiple exchanges and stores it in an SQL database for later use in deep learning. It can be deployed to a Dokku server on DigitalOcean.
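The fetch-and-store flow can be sketched as follows. This is a minimal illustration, not the repository's actual code: the table name, column layout, and the use of SQLite are assumptions, and the real project may target a different SQL server and schema.

```python
import sqlite3

def init_db(path=":memory:"):
    # Illustrative OHLCV candle schema; the repository's schema may differ.
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS candles (
            exchange TEXT, symbol TEXT, ts INTEGER,
            open REAL, high REAL, low REAL, close REAL, volume REAL,
            PRIMARY KEY (exchange, symbol, ts)
        )""")
    return conn

def store_candles(conn, exchange, symbol, candles):
    # candles: iterable of (ts, open, high, low, close, volume) tuples,
    # the shape many exchange APIs return for historical data.
    conn.executemany(
        "INSERT OR REPLACE INTO candles VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        [(exchange, symbol, *c) for c in candles])
    conn.commit()

conn = init_db()
store_candles(conn, "binance", "BTC/USDT",
              [(1700000000, 35000.0, 35100.0, 34900.0, 35050.0, 12.5)])
rows = conn.execute("SELECT COUNT(*) FROM candles").fetchone()[0]
```

The composite primary key makes repeated fetches idempotent: re-inserting an already-stored candle replaces it instead of duplicating rows.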
DeepSpeed-MII
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
deep-learning-notes
Notes from the DeepLearning.AI courses
aitextgen-aws
A fork of the aitextgen wrapper, adapted to run on SageMaker with multiple GPUs.
Basic-UI-for-GPT-J-6B-with-low-vram
A repository for running GPT-J-6B on low-VRAM machines (minimum 4.2 GB VRAM for a 2,000-token context, 3.5 GB for a 1,000-token context). Loading the model requires 12 GB of free RAM.
cf-workers-status-page
Monitor your websites, showcase status including daily history, and get Slack/Telegram/Discord notification whenever your website status changes. Using Cloudflare Workers, CRON Triggers, and KV storage.
finetune-gpt2xl
Guide: finetune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7 billion) on a single GPU with Hugging Face Transformers using DeepSpeed
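Fitting billion-parameter models on a single GPU relies on DeepSpeed's ZeRO optimizer-state sharding with CPU offload. A minimal ZeRO stage 2 configuration looks roughly like this; the values are illustrative, not the repository's exact config:

```json
{
  "train_batch_size": 16,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" },
    "contiguous_gradients": true,
    "overlap_comm": true
  }
}
```

Offloading the optimizer states to CPU memory is what frees enough GPU memory for the model weights and activations on a single card, at the cost of extra host-device transfer.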
gpt-2-simple
Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts
onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
try-gptj-generation
A wrapper that simply loads GPT-J and uses it for generation. Uses DeepSpeed ZeRO stage 2 or 3 for inference, which reduces GPU memory usage.