Ceyhun Derinbogaz (cderinbogaz)

Company: @textcortex

Location: Berlin, Germany

Home Page: https://textcortex.com

Twitter: @cderinbogaz

Ceyhun Derinbogaz's repositories

inpredo

Inpredo is a deep learning tool that analyzes financial charts and predicts stock movements.

Language: Python License: MIT Stargazers: 155 Issues: 16 Issues: 19
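
The general idea, a rendered price chart goes in and a trade signal comes out, can be sketched as a small image classifier. The snippet below is a hypothetical illustration, not inpredo's actual code; the network shape, image size, and buy/sell/hold labels are assumptions.

```python
# Hypothetical sketch (not inpredo's code): classify rendered chart images into trade signals.
import tensorflow as tf

def build_chart_classifier(img_size=(128, 128)):
    """Small CNN mapping a chart image to buy/sell/hold probabilities."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*img_size, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),  # buy / sell / hold
    ])

model = build_chart_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(chart_images, labels, epochs=10)  # images rendered from OHLC windows
```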

namekrea

NameKrea is an AI domain name generator that uses GPT-2.

Language: Python License: MIT Stargazers: 50 Issues: 3 Issues: 2
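
As a rough illustration of the generation step, the sketch below prompts a GPT-2 model through the Hugging Face pipeline API; NameKrea itself uses its own fine-tuned weights and prompt format, so the model name and prompt here are placeholders.

```python
# Hypothetical sketch: sample domain-name candidates from a GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # NameKrea uses a fine-tuned checkpoint
prompt = "keywords: handmade leather wallets\ndomain:"
outputs = generator(prompt, max_new_tokens=12, num_return_sequences=5, do_sample=True, top_p=0.95)
for out in outputs:
    continuation = out["generated_text"][len(prompt):].strip().split()
    if continuation:
        print(continuation[0].lower().strip(".,") + ".com")
```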

Cryptocurrency-Data-Fetcher-for-Deep-Learning

A small utility that fetches market data from different exchanges and stores it in a SQL database for later use in deep learning. It can be deployed to a DigitalOcean Dokku server.

Language: Python License: Apache-2.0 Stargazers: 19 Issues: 4 Issues: 1
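
A minimal sketch of the fetch-and-store loop is below, assuming the ccxt library for exchange access and SQLite as the SQL backend; the repo's own exchange list, schema, and database may differ.

```python
# Hypothetical sketch: pull OHLCV candles from an exchange and persist them to SQL.
import sqlite3
import ccxt

exchange = ccxt.binance()
candles = exchange.fetch_ohlcv("BTC/USDT", timeframe="1h", limit=500)  # [ts, open, high, low, close, volume]

conn = sqlite3.connect("market_data.db")
conn.execute("""CREATE TABLE IF NOT EXISTS ohlcv (
    ts INTEGER, open REAL, high REAL, low REAL, close REAL, volume REAL)""")
conn.executemany("INSERT INTO ohlcv VALUES (?, ?, ?, ?, ?, ?)", candles)
conn.commit()
conn.close()
```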

gpt-j

Run GPT-J Using Low VRAM

Language: Python Stargazers: 3 Issues: 2 Issues: 0
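
One common way to cut GPT-J's memory footprint is to load the float16 checkpoint, as sketched below; this is a generic illustration of the low-VRAM idea, not necessarily the exact trick this repository uses.

```python
# Hypothetical sketch: load GPT-J in half precision to roughly halve its memory footprint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",          # half-precision branch of the checkpoint
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda")

inputs = tokenizer("GPT-J on a small GPU", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```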

DeepSpeed-MII

MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.

Language: Python License: MIT Stargazers: 2 Issues: 1 Issues: 0
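
Usage looks roughly like the sketch below, based on the upstream MII README's deploy/query API; newer MII releases expose a different pipeline API, and the model name here is just an example.

```python
# Sketch based on the upstream DeepSpeed-MII README; API details vary between MII versions.
import mii

mii.deploy(task="text-generation",
           model="bigscience/bloom-560m",          # example model
           deployment_name="bloom560m_deployment")

generator = mii.mii_query_handle("bloom560m_deployment")
result = generator.query({"query": ["DeepSpeed is", "Berlin is"]},
                         do_sample=True, max_new_tokens=30)
print(result)
```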

OrderBook

Matching engine for a limit order book.

Language: Python License: NOASSERTION Stargazers: 2 Issues: 2 Issues: 0
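
The core of any such engine is price-time priority: an incoming order crosses against the best resting orders on the opposite side until prices no longer overlap, and any remainder rests on the book. The sketch below is a hypothetical toy version, not this repository's implementation.

```python
# Hypothetical toy matching engine with price-time priority (not this repo's code).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Order:
    sort_key: tuple = field(init=False, repr=False)
    side: str = field(compare=False)        # "buy" or "sell"
    price: float = field(compare=False)
    qty: int = field(compare=False)
    seq: int = field(compare=False)

    def __post_init__(self):
        # Buys: highest price first; sells: lowest price first; ties broken by arrival order.
        self.sort_key = (-self.price if self.side == "buy" else self.price, self.seq)

class OrderBook:
    def __init__(self):
        self.bids, self.asks, self.seq = [], [], 0

    def submit(self, side, price, qty):
        self.seq += 1
        order = Order(side, price, qty, self.seq)
        book, opposite = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # Cross against resting orders on the opposite side while prices overlap.
        while opposite and order.qty > 0:
            best = opposite[0]
            crosses = best.price <= price if side == "buy" else best.price >= price
            if not crosses:
                break
            fill = min(order.qty, best.qty)
            print(f"trade {fill} @ {best.price}")
            order.qty -= fill
            best.qty -= fill
            if best.qty == 0:
                heapq.heappop(opposite)
        if order.qty > 0:
            heapq.heappush(book, order)  # rest the unfilled remainder

book = OrderBook()
book.submit("sell", 101.0, 5)
book.submit("buy", 102.0, 3)   # matches the resting ask at 101.0
```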

deep-learning-notes

Notes from the DeepLearning.AI courses

Language: Jupyter Notebook Stargazers: 1 Issues: 1 Issues: 0

aitextgen-aws

A fork of the aitextgen wrapper adapted for multi-GPU training on SageMaker.

Language: Python License: MIT Stargazers: 0 Issues: 1 Issues: 0

Basic-UI-for-GPT-J-6B-with-low-vram

A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2,000-token context, 3.5 GB for a 1,000-token context). Loading the model requires 12 GB of free RAM.

Language: Jupyter Notebook License: Apache-2.0 Stargazers: 0 Issues: 1 Issues: 0

cf-workers-status-page

Monitor your websites, showcase status including daily history, and get Slack/Telegram/Discord notifications whenever your website status changes. Uses Cloudflare Workers, CRON Triggers, and KV storage.

License: MIT Stargazers: 0 Issues: 0 Issues: 0

finetune-gpt2xl

Guide: fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed.

Language: Python License: MIT Stargazers: 0 Issues: 1 Issues: 0
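
The essence of the approach, as a hedged sketch: point Hugging Face's Trainer at a DeepSpeed ZeRO config via TrainingArguments so optimizer state is sharded and offloaded and the 1.5B-parameter model fits on one GPU. The config path and the tiny toy dataset below are placeholders; the repo ships its own training script and configs.

```python
# Hypothetical sketch: fine-tune GPT-2 XL with the HF Trainer plus a DeepSpeed ZeRO config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

texts = ["Example training text one.", "Example training text two."]  # placeholder corpus
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class TextDataset(torch.utils.data.Dataset):
    def __len__(self):
        return enc["input_ids"].size(0)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = item["input_ids"].clone()  # causal LM: labels mirror the inputs
        return item

args = TrainingArguments(
    output_dir="gpt2xl-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    deepspeed="ds_config.json",   # placeholder path to a ZeRO stage 2/3 config
)
Trainer(model=model, args=args, train_dataset=TextDataset()).train()
```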

gpt-2-simple

Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts

Language: Python License: NOASSERTION Stargazers: 0 Issues: 1 Issues: 0
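
For reference, upstream gpt-2-simple's documented workflow is roughly the following; the training text file and step count are placeholders.

```python
# Based on the upstream gpt-2-simple README: download, fine-tune, generate.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")          # fetch the small GPT-2 checkpoint

sess = gpt2.start_tf_sess()
gpt2.finetune(sess, "my_texts.txt", model_name="124M", steps=200)  # placeholder corpus file

gpt2.generate(sess)
```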

onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Language: C++ License: MIT Stargazers: 0 Issues: 1 Issues: 0
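
A minimal inference call with the Python API looks like the sketch below; the model file and input shape are placeholders.

```python
# Minimal ONNX Runtime inference sketch; "model.onnx" and the input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape depends on the exported model
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```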

try-gptj-generation

A wrapper to simply load GPT-J and use it for generation. Uses DeepSpeed ZeRO stage 2 or 3 for inference, as it reduces GPU memory usage.

Language: Python Stargazers: 0 Issues: 1 Issues: 0
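
As a rough sketch of DeepSpeed-accelerated generation, the snippet below wraps GPT-J with DeepSpeed's inference engine; note this uses kernel injection rather than the ZeRO stage 2/3 offload path the repo refers to, and the prompt is a placeholder.

```python
# Hypothetical sketch: wrap GPT-J with DeepSpeed's inference engine for generation.
# Note: kernel injection, not the ZeRO stage 2/3 offload mentioned above.
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16)

engine = deepspeed.init_inference(model, dtype=torch.float16, replace_with_kernel_inject=True)

inputs = tokenizer("DeepSpeed makes GPT-J generation", return_tensors="pt").to("cuda")
print(tokenizer.decode(engine.module.generate(**inputs, max_new_tokens=40)[0]))
```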