codingchild (codingchild2424)

Company: AI Developer at Enuma, Inc.

Location: Seoul

codingchild's repositories

debate_bot

debate_bot

Language: Python · Stargazers: 2 · Issues: 0

Deep_knowledge_tracing_baseline

Deep_knowledge_tracing_baseline

Language: Python · Stargazers: 1 · Issues: 0
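For context only: the repository name points at Deep Knowledge Tracing (Piech et al., 2015). The sketch below is a generic, minimal DKT model in PyTorch, not this repository's code; it assumes each interaction is encoded as a one-hot (skill, correctness) pair.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Minimal Deep Knowledge Tracing model (generic sketch).

    Input per step: one-hot encoding of (skill, correctness), size 2 * num_skills.
    Output per step: predicted probability of a correct answer for every skill.
    """
    def __init__(self, num_skills: int, hidden_size: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(2 * num_skills, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_skills)

    def forward(self, x):                   # x: (batch, seq_len, 2 * num_skills)
        h, _ = self.lstm(x)                 # (batch, seq_len, hidden_size)
        return torch.sigmoid(self.out(h))   # (batch, seq_len, num_skills)

# toy forward pass
model = DKT(num_skills=100)
x = torch.zeros(4, 20, 200)                 # 4 learners, 20 interactions each
probs = model(x)                            # (4, 20, 100)
```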

cl_bert_kt

cl_bert_kt

Language: Python · Stargazers: 0 · Issues: 0

lm-trainer-v2

lm-trainer-v2

Language: Python · Stargazers: 0 · Issues: 0

lm-trainer-v3

lm-trainer-v3

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

alpaca-lora

Code for reproducing the Stanford Alpaca InstructLLaMA result on consumer hardware

License: Apache-2.0 · Stargazers: 0 · Issues: 0

auto_gpt_stable

auto_gpt_stable

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

bitsandbytes

8-bit CUDA functions for PyTorch

License: MIT · Stargazers: 0 · Issues: 0
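bitsandbytes is the library behind 8-bit model loading in the Hugging Face stack. A minimal sketch of using it through transformers' BitsAndBytesConfig follows; the model name is only a placeholder, and it assumes a CUDA GPU with bitsandbytes and accelerate installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "facebook/opt-1.3b"                      # placeholder causal LM
quant_config = BitsAndBytesConfig(load_in_8bit=True)  # 8-bit linear layers via bitsandbytes

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",                                # requires accelerate
)

inputs = tokenizer("Knowledge tracing is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```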

ddpm_practice

ddpm_practice

Language: Python · Stargazers: 0 · Issues: 0
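As background on what a DDPM practice repository usually implements, here is a minimal sketch of the forward (noising) process q(x_t | x_0); the schedule values are illustrative and this is not this repository's code.

```python
import torch

# Forward (noising) process of a DDPM, sampled in closed form from x_0.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (illustrative)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    if noise is None:
        noise = torch.randn_like(x0)
    a = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

x0 = torch.randn(8, 3, 32, 32)                  # batch of "clean" images
t = torch.randint(0, T, (8,))                   # random timestep per sample
xt = q_sample(x0, t)                            # noised inputs used to train the denoiser
```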

gpt-4-vision-for-eval

gpt-4-vision-for-eval

Language: Python · Stargazers: 0 · Issues: 0

KoAlpaca

KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)

License: Apache-2.0 · Stargazers: 0 · Issues: 0

lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

LOMO

LOMO: LOw-Memory Optimization

License: MIT · Stargazers: 0 · Issues: 0

math_scoring_with_gpt

math_scoring_with_gpt

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

MEGABYTE-pytorch

Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch

License: MIT · Stargazers: 0 · Issues: 0

Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

License: NOASSERTION · Stargazers: 0 · Issues: 0

mlm-trainer

mlm-trainer

Language: Python · Stargazers: 0 · Issues: 0

nebullvm

Plug and play modules to optimize the performance of your AI systems 🚀

License: Apache-2.0 · Stargazers: 0 · Issues: 0

Open-Llama

The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.

License: MIT · Stargazers: 0 · Issues: 0

oslo-1

OSLO: Open Source for Large-scale Optimization

Stargazers: 0 · Issues: 0

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
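A minimal sketch of PEFT's LoRA workflow, since parameter-efficient fine-tuning is what this fork is about; the base model name and hyperparameters are placeholders, not settings from this repository.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # rank of the LoRA update matrices (illustrative)
    lora_alpha=16,      # scaling factor
    lora_dropout=0.05,
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```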

phoenix

ML Observability in a Notebook - Uncover Insights, Surface Problems, Monitor, and Fine Tune your Generative LLM, CV and Tabular Models

License: NOASSERTION · Stargazers: 0 · Issues: 0

pretraining-with-human-feedback

Code accompanying the paper Pretraining Language Models with Human Preferences

License: MIT · Stargazers: 0 · Issues: 0

self-instruct

Aligning pretrained language models with instruction data generated by themselves.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
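A minimal sketch of the transformers pipeline API, the usual entry point to this library; the task and example text are arbitrary, and the default English sentiment model is downloaded on first use.

```python
from transformers import pipeline

# A ready-made inference pipeline; swap the task or pass model=... for other checkpoints.
classifier = pipeline("sentiment-analysis")
print(classifier("This repository list is easy to browse."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```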

trl

Train transformer language models with reinforcement learning.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

vision

Clean, reproducible, boilerplate-free deep learning project template.

Stargazers: 0 · Issues: 0

whisper-diarization

Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper

License: BSD-2-Clause · Stargazers: 0 · Issues: 0
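For context, a minimal transcription sketch with the openai-whisper package; the speaker-diarization step this repository layers on top (assigning "who spoke when" to the segments) is not shown here, and the audio filename is hypothetical.

```python
import whisper  # the openai-whisper package

# Transcribe an audio file and print timestamped segments.
# A diarization backend would then label each segment with a speaker id.
model = whisper.load_model("base")
result = model.transcribe("meeting.wav")   # hypothetical audio file
for seg in result["segments"]:
    print(f"[{seg['start']:.1f}-{seg['end']:.1f}] {seg['text']}")
```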