Peter Yeh (petrex)

Location: Mountain View, California

Peter Yeh's repositories

Auto-GPT

An experimental open-source attempt to make GPT-4 fully autonomous.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

FasterTransformer

Transformer-related optimizations, including BERT and GPT.

Language: C++ | License: Apache-2.0 | Stargazers: 0 | Issues: 0

flash-attention

Fast and memory-efficient exact attention

Language: Python | License: BSD-3-Clause | Stargazers: 0 | Issues: 0
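
A rough sketch of the interface, assuming the flash_attn Python package and a CUDA GPU; the shapes are illustrative only.

import torch
from flash_attn import flash_attn_func  # import path as documented by the flash-attn package

# q, k, v are (batch, seqlen, num_heads, head_dim); fp16/bf16 on GPU is required
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v, causal=True)  # exact attention without materializing the full score matrix
print(out.shape)  # torch.Size([2, 1024, 8, 64])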

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 3
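
A minimal sketch of the dynamic autograd this describes; it assumes only a stock torch install (CPU is enough).

import torch

x = torch.randn(4, 3, requires_grad=True)  # tensors that track gradients
w = torch.randn(3, 2, requires_grad=True)
loss = (x @ w).relu().sum()                # the graph is built on the fly as ops execute
loss.backward()                            # reverse-mode autodiff
print(x.grad.shape, w.grad.shape)          # torch.Size([4, 3]) torch.Size([3, 2])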

triton

Development repository for the Triton language and compiler

Language: C++ | License: MIT | Stargazers: 0 | Issues: 0
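
A minimal sketch of a Triton kernel (an element-wise add), assuming the triton package and a CUDA GPU; block size and shapes are illustrative.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)                 # one program instance per block
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements                 # guard the tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)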

tvm

Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1
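
A minimal sketch of compiling a tiny graph with TVM's Relay frontend for a CPU target; it assumes a TVM release where Relay is the primary IR (newer releases are moving to Relax), so treat the exact entry points as assumptions.

import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

x = relay.var("x", shape=(1, 3), dtype="float32")
func = relay.Function([x], relay.nn.relu(x))          # a one-op graph
mod = tvm.IRModule.from_expr(func)

lib = relay.build(mod, target="llvm")                 # compile for the local CPU
dev = tvm.cpu()
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("x", np.array([[-1.0, 0.0, 2.0]], dtype="float32"))
rt.run()
print(rt.get_output(0))                               # expected values: [[0. 0. 2.]]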

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0
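
A minimal sketch of vLLM's offline generation API; it assumes the vllm package and a GPU, and the small model name is used purely as an illustration.

from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                       # the engine handles batching and paged KV-cache memory
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Explain KV-cache paging in one sentence."], params)
print(outputs[0].outputs[0].text)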

ChatDev

Create customized software from a natural-language idea (through LLM-powered multi-agent collaboration).

License: Apache-2.0 | Stargazers: 0 | Issues: 0

composable_kernel

Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators

Language: C++ | License: NOASSERTION | Stargazers: 0 | Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0
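
A minimal sketch of wrapping a model with deepspeed.initialize; the config dict is illustrative, and in practice this is launched with the deepspeed launcher for multi-GPU training.

import torch
import deepspeed

model = torch.nn.Linear(10, 2)
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 1},   # ZeRO partitioning of optimizer state
}

# Returns an engine that owns the optimizer, gradient synchronization, and checkpointing.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)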

gpt-fast

Simple and efficient PyTorch-native transformer text generation in under 1,000 lines of Python.

License: BSD-3-Clause | Stargazers: 0 | Issues: 0

gpt-researcher

GPT-based autonomous agent that does comprehensive online research on any given topic.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

human-eval

Code for the paper "Evaluating Large Language Models Trained on Code"

Language: Python | License: MIT | Stargazers: 0 | Issues: 0
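
A minimal sketch of writing a samples file in the format this repo scores; read_problems and write_jsonl come from its human_eval package, and the placeholder completion is obviously not a real solution.

from human_eval.data import read_problems, write_jsonl

problems = read_problems()                                    # dict keyed by task_id, each with a "prompt"
samples = [
    dict(task_id=task_id, completion="    return None\n")    # placeholder model output
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)
# Scoring then runs the repo's evaluate_functional_correctness entry point on samples.jsonl.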

jax

Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0
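
A minimal sketch of the composable transformations the description lists: grad to differentiate, vmap to vectorize, and jit to compile; only a stock jax install is assumed.

import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.jit(jax.grad(loss))           # differentiate, then JIT-compile via XLA
batched = jax.vmap(loss, in_axes=(None, 0))   # vectorize over a leading batch axis of x

w = jnp.ones((3, 2))
x = jnp.ones((4, 3))
print(grad_loss(w, x).shape, batched(w, x).shape)  # (3, 2) (4,)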

llama

Inference code for LLaMA models

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0

llama-recipes

Examples and recipes for the Llama 2 model.

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0

llama.cpp

Port of Facebook's LLaMA model in C/C++

License: MIT | Stargazers: 0 | Issues: 0

llm.c

LLM training in simple, raw C/CUDA

License: MIT | Stargazers: 0 | Issues: 0

llvm-project

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Note: the repository does not accept GitHub pull requests at this moment. Please submit patches at http://reviews.llvm.org.

Stargazers: 0 | Issues: 0

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

onnx

Open Neural Network Exchange

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1
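
A minimal sketch of round-tripping a small PyTorch module through the ONNX format; it assumes both torch and onnx are installed, and the file name is arbitrary.

import torch
import onnx

model = torch.nn.Linear(4, 2)
torch.onnx.export(model, torch.randn(1, 4), "linear.onnx")   # serialize to the exchange format

proto = onnx.load("linear.onnx")
onnx.checker.check_model(proto)                              # validate against the ONNX spec
print(onnx.helper.printable_graph(proto.graph))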

SpaceVim

A community-driven modular Vim distribution: the ultimate Vim configuration.

Language: Vim script | License: GPL-3.0 | Stargazers: 0 | Issues: 1

stable-diffusion

A latent text-to-image diffusion model

Language: Jupyter Notebook | License: NOASSERTION | Stargazers: 0 | Issues: 0

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ | License: Apache-2.0 | Stargazers: 0 | Issues: 0

torchtune

A Native-PyTorch Library for LLM Fine-tuning

Language: Python | License: BSD-3-Clause | Stargazers: 0 | Issues: 0

training

Reference implementations of training benchmarks

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 1

transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0
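
A minimal sketch of the pipeline API; it assumes transformers plus a PyTorch backend, and the default checkpoint for the task is downloaded on first use.

from transformers import pipeline

classifier = pipeline("sentiment-analysis")    # picks a default checkpoint for the task
print(classifier("FlashAttention makes long contexts affordable."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]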

tvm-1

TVM integration into PyTorch

Language: C++ | Stargazers: 0 | Issues: 1