faseehahmed's starred repositories

NLP-progress

Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.

Language: Python · License: MIT · Stars: 22443

fastapi

FastAPI tutorials and deployment methods for cloud and on-prem infrastructure

Language: Python · License: CC0-1.0 · Stars: 271
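
To make the tutorials' subject concrete, here is a minimal FastAPI app; this is a generic sketch, not code from the repository, and the route and payload are illustrative:

```python
# Minimal FastAPI app; run with: uvicorn main:app --reload
# (route path and response body are illustrative examples)
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health_check():
    # FastAPI serializes the returned dict to JSON automatically
    return {"status": "ok"}
```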

phasellm

Large language model evaluation and workflow framework from Phase AI.

Language: Python · License: MIT · Stars: 447

petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Language: Python · License: MIT · Stars: 8935
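
Petals exposes a Transformers-style Python API for distributed inference; a minimal sketch, assuming the `AutoDistributedModelForCausalLM` class and example model name from the project's README (verify both against the current version):

```python
# Sketch: layers of the model are served by volunteers over the network,
# BitTorrent-style, while generate() is called as with a local model.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM  # assumed from the Petals README

model_name = "petals-team/StableBeluga2"  # example; any Petals-supported LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```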

LocalAI

🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware, with no GPU required. Runs gguf, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice cloning capabilities.

Language: C++ · License: MIT · Stars: 21918
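
Because LocalAI mimics the OpenAI API, the stock `openai` Python client can simply be pointed at a local instance; a sketch in which the base URL, port, and model name are assumptions for illustration:

```python
# Talk to a self-hosted LocalAI instance through its OpenAI-compatible API.
from openai import OpenAI

# base_url/port are assumptions; LocalAI does not require a real API key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="ggml-gpt4all-j",  # whichever model the local server has loaded
    messages=[{"role": "user", "content": "Hello from LocalAI"}],
)
print(response.choices[0].message.content)
```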

text-generation-inference

Large Language Model Text Generation Inference

Language: Python · License: Apache-2.0 · Stars: 8412
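
A running TGI server is queried over HTTP; a hedged sketch of its generate endpoint, with the URL and parameters assumed from typical deployments (check them against the version you run):

```python
# Query a text-generation-inference server's REST API.
import requests

resp = requests.post(
    "http://localhost:8080/generate",          # assumed local deployment
    json={
        "inputs": "What is DeepSpeed?",
        "parameters": {"max_new_tokens": 50},  # generation controls
    },
    timeout=60,
)
print(resp.json()["generated_text"])
```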

data-science-on-aws

AI and Machine Learning with Kubeflow, Amazon EKS, and SageMaker

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 3320

vertex-ai-mlops

Google Cloud Platform Vertex AI end-to-end workflows for machine learning operations

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 455

transpeeder

Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism

Language: Python · License: Apache-2.0 · Stars: 206

llama-tune

LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers

Language: Python · Stars: 52

LLM-Pretrain-FineTune

DeepSpeed, LLMs, Medical_Dialogue: pre-training and fine-tuning of large medical-domain language models

Language: Python · Stars: 218

intel-extension-for-deepspeed

Intel® Extension for DeepSpeed* is an extension to DeepSpeed that adds feature support via SYCL kernels on Intel GPU (XPU) devices. Note that XPU is already supported by stock DeepSpeed.

Language: C++ · License: MIT · Stars: 54

finetune-gpt2xl

Guide: finetune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7 billion parameters) on a single GPU with Hugging Face Transformers using DeepSpeed

Language: Python · License: MIT · Stars: 426
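
The guide's central mechanism is handing Hugging Face's Trainer a DeepSpeed JSON config, which enables ZeRO sharding and CPU offload so a 1.5B-parameter model fits on one GPU; a minimal sketch with placeholder paths and hyperparameters:

```python
# Sketch: DeepSpeed is switched on via a single TrainingArguments field;
# values below are illustrative, and ZeRO/offload settings live in the JSON.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    deepspeed="ds_config.json",  # placeholder path to the DeepSpeed config
)
# A Trainer built with these args runs its forward/backward passes under
# DeepSpeed; the rest of the training loop is unchanged.
```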

DeepSpeed-MII

MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.

Language: Python · License: Apache-2.0 · Stars: 1767
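
As a rough illustration of MII's pipeline-style interface (the exact API surface and supported model names should be verified against the MII README):

```python
# Sketch: a local low-latency inference pipeline powered by DeepSpeed.
import mii

pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")  # assumed example model
response = pipe(["DeepSpeed is"], max_new_tokens=32)
print(response)
```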

Chatglm_lora_multi-gpu

Multi-GPU ChatGLM training with DeepSpeed and …

Language: Python · Stars: 390

Megatron-DeepSpeed

Ongoing research on training transformer language models at scale, including BERT and GPT-2

Language: Python · License: NOASSERTION · Stars: 1283

gpt-neox

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Language: Python · License: Apache-2.0 · Stars: 6717

DeepSpeedExamples

Example models using DeepSpeed

Language: Python · License: Apache-2.0 · Stars: 5863

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python · License: Apache-2.0 · Stars: 33913
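
A minimal sketch of wrapping a PyTorch model with DeepSpeed; the config values are illustrative, and the script is meant to be launched with the `deepspeed` launcher:

```python
import torch
import deepspeed

model = torch.nn.Linear(10, 2)  # stand-in for a real network
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # shard optimizer state and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns an engine that handles optimizer steps,
# gradient accumulation, and ZeRO partitioning internally.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```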

ChatGPT-Like-Bot-On-Google-Collab

One-click run of a ChatGPT-like bot

Language: Jupyter Notebook · Stars: 8

core

The stable core is your personal server for AI rendering, powered by community plugins

Language: Python · License: MIT · Stars: 20

galai

Model API for GALACTICA

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 2667
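
A sketch of the model API as shown in the repo's README (checkpoint names such as "mini" are assumptions to verify there):

```python
import galai as gal

model = gal.load_model("mini")  # smallest GALACTICA checkpoint, for illustration
print(model.generate("The Transformer architecture"))
```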

ChatRWKV

ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.

Language: Python · License: Apache-2.0 · Stars: 9340

gptq

Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".

Language: Python · License: Apache-2.0 · Stars: 1796
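
A toy sketch of GPTQ's core idea: quantize weights one column at a time and fold each column's rounding error back into the not-yet-quantized columns, so later columns compensate for earlier error. The real algorithm weights this update by the inverse Hessian of the layer inputs; here a crude input-correlation term stands in for that step:

```python
import numpy as np

def quantize(w, scale):
    # round-to-grid quantizer, the simplest possible stand-in
    return scale * np.round(w / scale)

def gptq_like(W, X, scale=0.1):
    """W: (out, in) weights; X: (in, n) calibration inputs. Returns quantized Q."""
    W = W.copy()
    Q = np.zeros_like(W)
    for j in range(W.shape[1]):
        Q[:, j] = quantize(W[:, j], scale)
        err = W[:, j] - Q[:, j]
        if j + 1 < W.shape[1]:
            # Push the error onto remaining columns, weighted by how
            # correlated their calibration inputs are with column j
            # (a crude proxy for the paper's inverse-Hessian update).
            corr = X[j + 1:] @ X[j] / (X[j] @ X[j] + 1e-8)
            W[:, j + 1:] += np.outer(err, corr)
    return Q

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
X = rng.normal(size=(8, 64))
Q = gptq_like(W, X)
print(np.linalg.norm(W @ X - Q @ X))  # reconstruction error on calibration data
```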

evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Language: Python · License: NOASSERTION · Stars: 14397

dalai

The simplest way to run LLaMA on your local machine

Language: CSS · Stars: 13101

stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.

Language: Python · License: Apache-2.0 · Stars: 29170
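
The generated data follows a simple instruction/input/output schema rendered into a fixed prompt template; a sketch reconstructed from the repo's documented format (check the repo for the exact wording):

```python
# One Alpaca-style record and the prompt it is rendered into for training.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

record = {  # illustrative example, not from the actual dataset
    "instruction": "Summarize the following text.",
    "input": "Alpaca is a 7B model fine-tuned from LLaMA.",
    "output": "Alpaca is a LLaMA-based 7B instruction-following model.",
}
print(PROMPT_WITH_INPUT.format(**record) + record["output"])
```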

CheatSheet-LLM

The LLM (Large Language Model) Cheatsheet is a quick reference guide providing an overview of key concepts and techniques in natural language processing (NLP) and language modeling. It is designed to be a helpful tool for both beginners and advanced practitioners in NLP.

Stars: 5

stable-diffusion-webui

Stable Diffusion web UI

Language: Python · License: AGPL-3.0 · Stars: 136221
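
When launched with the --api flag, the web UI also exposes a REST API; a sketch posting a txt2img request to a local instance (endpoint path and payload keys follow the commonly documented API, but verify them against your version):

```python
import base64
import requests

payload = {"prompt": "a watercolor fox", "steps": 20}  # illustrative values
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
                     json=payload, timeout=300)
image_b64 = resp.json()["images"][0]  # base64-encoded PNG
with open("out.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```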