Apoorv Agnihotri (apoorvagnihotri)

Company: Rephrase AI

Location: Tübingen, Germany

Home Page: https://apoorvagnihotri.github.io

Twitter: @ApoorvAgnihotr2

Apoorv Agnihotri's starred repositories

llm-course

Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 38777 · Issues: 406 · Issues: 67

llama_index

LlamaIndex is a data framework for your LLM applications

Language: Python · License: MIT · Stargazers: 36525 · Issues: 245 · Issues: 5433
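
For a sense of the API, here is a minimal retrieval-augmented-query sketch, assuming a recent llama-index release (the `llama_index.core` namespace), an OpenAI API key in the environment, and a local `data/` folder of documents (both placeholders):

```python
# Minimal LlamaIndex RAG sketch (assumes llama-index >= 0.10 and an
# OPENAI_API_KEY in the environment; "data" is a placeholder folder).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # read local files
index = VectorStoreIndex.from_documents(documents)     # embed and index them
query_engine = index.as_query_engine()                 # default retriever + LLM
print(query_engine.query("Summarize these documents."))
```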

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python · License: Apache-2.0 · Stargazers: 29647 · Issues: 242 · Issues: 5140
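
As an illustration of offline batched inference, a small sketch; the model id is only an example, and any Hugging Face causal LM id should work:

```python
# Minimal vLLM offline-inference sketch; the model id is just an example.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                    # loads weights once
params = SamplingParams(temperature=0.8, max_tokens=64)
for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)                          # generated continuation
```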

ChatDev

Create customized software from a natural-language idea, through LLM-powered multi-agent collaboration

Language: Shell · License: Apache-2.0 · Stargazers: 25534 · Issues: 311 · Issues: 262

LocalAI

🤖 The free, open-source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: text, audio, video and image generation, voice cloning, and distributed inference
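
Because the server is OpenAI-compatible, the standard OpenAI Python client can talk to it. A sketch, assuming a LocalAI instance running locally on its default port 8080 with a model aliased as "gpt-4" in its configuration (both assumptions; adjust to your setup):

```python
# Sketch of calling a LocalAI server through the OpenAI-compatible API.
# base_url and the "gpt-4" model alias are assumptions about the local setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello from LocalAI"}],
)
print(resp.choices[0].message.content)
```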

recommenders

Best Practices on Recommendation Systems

Language: Python · License: MIT · Stargazers: 19202 · Issues: 275 · Issues: 868

flash-attention

Fast and memory-efficient exact attention

Language: Python · License: BSD-3-Clause · Stargazers: 14061 · Issues: 120 · Issues: 1101
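
A sketch of calling the fused kernel directly through `flash_attn_func`, assuming a CUDA GPU and the `flash-attn` package installed; tensors use the (batch, seq, heads, head_dim) layout in fp16:

```python
# FlashAttention sketch: exact attention computed by the fused CUDA kernel.
import torch
from flash_attn import flash_attn_func

b, s, h, d = 2, 1024, 8, 64                      # batch, seq len, heads, head dim
q = torch.randn(b, s, h, d, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)
out = flash_attn_func(q, k, v, causal=True)      # output shape (b, s, h, d)
```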

triton

Development repository for the Triton language and compiler
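
The canonical starting point is a vector-add kernel; a sketch along the lines of the official tutorial, assuming a CUDA GPU:

```python
# Triton vector-add sketch: each program instance handles one BLOCK_SIZE chunk.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                  # guard the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(9876, device="cuda")
y = torch.rand(9876, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```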

tvm

Open deep learning compiler stack for CPUs, GPUs and specialized accelerators

Language: Python · License: Apache-2.0 · Stargazers: 11762 · Issues: 376 · Issues: 3401
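
A small sketch of the classic tensor-expression workflow, compiling a vector add for CPU; newer TVM releases push toward TensorIR, so treat this as the legacy `te` path:

```python
# TVM tensor-expression sketch: define, schedule and compile a vector add.
import numpy as np
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")
s = te.create_schedule(C.op)                     # legacy schedule API
fadd = tvm.build(s, [A, B, C], target="llvm")    # compile for the host CPU

dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
fadd(a, b, c)                                    # c now holds a + b
```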

open-llms

📋 A list of open LLMs available for commercial use.

litgpt

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

Language: Python · License: Apache-2.0 · Stargazers: 10565 · Issues: 93 · Issues: 779

accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support

Language: Python · License: Apache-2.0 · Stargazers: 7905 · Issues: 97 · Issues: 1624
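
The core pattern is to wrap model, optimizer and dataloader in `accelerator.prepare` and route the backward pass through the accelerator; a toy sketch with placeholder data:

```python
# Accelerate sketch: the same loop runs on CPU, single GPU or multi-GPU setups.
import torch
from accelerate import Accelerator

accelerator = Accelerator()                      # device/DDP/AMP picked from config
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=8,
)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)                   # replaces loss.backward()
    optimizer.step()
```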

awesome-NeRF

A curated list of awesome neural radiance fields papers

Language: TeX · License: MIT · Stargazers: 6499 · Issues: 237 · Issues: 19

Pluto.jl

🎈 Simple reactive notebooks for Julia

Language: JavaScript · License: MIT · Stargazers: 4996 · Issues: 37 · Issues: 1666

ignite

High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

Language: Python · License: BSD-3-Clause · Stargazers: 4528 · Issues: 59 · Issues: 1380
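
A toy sketch of the engine/metrics pattern, with a trainer and an evaluator running on random placeholder data:

```python
# Ignite sketch: supervised trainer plus evaluator with attached metrics.
import torch
from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
from ignite.metrics import Accuracy, Loss

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=8,
)

trainer = create_supervised_trainer(model, optimizer, criterion)
evaluator = create_supervised_evaluator(
    model, metrics={"acc": Accuracy(), "loss": Loss(criterion)}
)

@trainer.on(Events.EPOCH_COMPLETED)
def log_results(engine):
    evaluator.run(loader)
    print(engine.state.epoch, evaluator.state.metrics)

trainer.run(loader, max_epochs=2)
```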

AutoGPTQ

An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.

Language: Python · License: MIT · Stargazers: 4453 · Issues: 31 · Issues: 460
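
A sketch of loading an already-quantized GPTQ checkpoint and generating from it; the model id is only an example, and a CUDA GPU is assumed:

```python
# AutoGPTQ sketch: load a pre-quantized 4-bit checkpoint and generate.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/Llama-2-7B-GPTQ"            # example quantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

inputs = tokenizer("Quantization lets you", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```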

HIP

HIP: C++ Heterogeneous-Compute Interface for Portability

exllamav2

A fast inference library for running LLMs locally on modern consumer-class GPUs

Language: Python · License: MIT · Stargazers: 3627 · Issues: 34 · Issues: 451

learn2learn

A PyTorch Library for Meta-learning Research

Language: Python · License: MIT · Stargazers: 2658 · Issues: 32 · Issues: 259
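
A minimal MAML sketch, with a single toy batch standing in for a real task distribution:

```python
# learn2learn MAML sketch: clone the model, adapt on a task, backprop the meta-loss.
import torch
import learn2learn as l2l

model = torch.nn.Linear(10, 2)
maml = l2l.algorithms.MAML(model, lr=0.1)        # inner-loop learning rate
opt = torch.optim.Adam(maml.parameters(), lr=1e-3)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))   # toy "task"
learner = maml.clone()                                   # task-specific copy
learner.adapt(torch.nn.functional.cross_entropy(learner(x), y))  # inner step
meta_loss = torch.nn.functional.cross_entropy(learner(x), y)     # outer loss

opt.zero_grad()
meta_loss.backward()                             # gradients flow to maml.parameters()
opt.step()
```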

Makie.jl

Interactive data visualizations and plotting in Julia

Language: Julia · License: MIT · Stargazers: 2415 · Issues: 25 · Issues: 2640

course

The Hugging Face course on Transformers

Language: MDX · License: Apache-2.0 · Stargazers: 2225 · Issues: 50 · Issues: 153
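
In the spirit of the course's opening chapters, a one-liner with the `pipeline` API; this downloads the default sentiment-analysis checkpoint on first use:

```python
# Transformers pipeline sketch, as introduced early in the Hugging Face course.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I've been waiting for a HuggingFace course my whole life."))
```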

SqueezeNet

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters

smplx

SMPL-X: an expressive 3D parametric model of the human body with articulated hands and face

Language: Python · License: NOASSERTION · Stargazers: 1850 · Issues: 30 · Issues: 198
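
A sketch of instantiating the body model; the SMPL-X model files must be downloaded separately after registration, and the `"models"` path here is only a placeholder:

```python
# SMPL-X sketch: build the body model and run a forward pass with zero shape params.
import torch
import smplx

model = smplx.create("models", model_type="smplx", gender="neutral")  # path is a placeholder
output = model(betas=torch.zeros(1, 10), return_verts=True)
print(output.vertices.shape)                     # (1, num_vertices, 3)
```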

arviz

Exploratory analysis of Bayesian models with Python

Language: Python · License: Apache-2.0 · Stargazers: 1604 · Issues: 48 · Issues: 860
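
A sketch of the exploratory workflow on one of the example traces bundled with ArviZ:

```python
# ArviZ sketch: summarize and plot a bundled example InferenceData object.
import arviz as az

idata = az.load_arviz_data("centered_eight")     # example posterior shipped with ArviZ
print(az.summary(idata, var_names=["mu", "tau"]))
az.plot_trace(idata, var_names=["mu", "tau"])
az.plot_posterior(idata, var_names=["mu"])
```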

minillm

MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs

Language: Python · License: MIT · Stargazers: 864 · Issues: 15 · Issues: 13

meltingpot

A suite of test scenarios for multi-agent reinforcement learning.

Language: Python · License: Apache-2.0 · Stargazers: 614 · Issues: 16 · Issues: 109

marlin

FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.

Language: Python · License: Apache-2.0 · Stargazers: 608 · Issues: 15 · Issues: 29

qserve

QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving

Language: Python · License: Apache-2.0 · Stargazers: 431 · Issues: 9 · Issues: 30

siamese-pytorch

PyTorch implementation of Siamese networks for one-shot image learning, trained and tested on the Omniglot dataset

ArviZ.jl

Exploratory analysis of Bayesian models with Julia

Language: Julia · License: NOASSERTION · Stargazers: 105 · Issues: 7 · Issues: 67