lsy105's starred repositories

LLM101n

LLM101n: Let's build a Storyteller

Stars: 19,832 · Issues: 0

cuda-repo

From zero to hero: CUDA for accelerating math and machine learning on the GPU.

Language: Cuda · License: MIT · Stars: 156 · Issues: 0

AutoGPTQ

An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.

Language: Python · License: MIT · Stars: 4,121 · Issues: 0
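GPTQ itself quantizes each layer by solving a least-squares reconstruction problem; the baseline it improves on is plain round-to-nearest group quantization. A minimal pure-Python sketch of that baseline (not the GPTQ algorithm or the AutoGPTQ API) shows what a "4-bit weight" actually stores: integer codes plus a per-group scale and zero point.

```python
# Simplified round-to-nearest 4-bit group quantization -- the naive baseline
# that GPTQ improves on, not the GPTQ algorithm itself.
def quantize_group(weights, bits=4):
    """Asymmetric uniform quantization of one weight group.
    Returns integer codes plus the (scale, zero_point) needed to dequantize."""
    qmax = (1 << bits) - 1                   # 15 for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0          # fall back to 1.0 for a flat group
    zero_point = lo
    codes = [round((w - zero_point) / scale) for w in weights]
    return codes, scale, zero_point

def dequantize_group(codes, scale, zero_point):
    return [c * scale + zero_point for c in codes]

w = [0.12, -0.50, 0.31, 0.02]
codes, s, z = quantize_group(w)
w_hat = dequantize_group(codes, s, z)
# reconstruction error per weight is at most half a quantization step (scale / 2)
```

Real int4 packers additionally pack two 4-bit codes per byte and store one scale per group (e.g. 128 weights); this sketch keeps only the arithmetic.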

InferLLM

A lightweight LLM inference framework.

Language: C++ · License: Apache-2.0 · Stars: 661 · Issues: 0

gpusimilarity

A Cuda/Thrust implementation of fingerprint similarity searching

Language: C++ · License: BSD-3-Clause · Stars: 94 · Issues: 0
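The metric gpusimilarity accelerates is Tanimoto similarity over bit-vector fingerprints: the number of bits set in both fingerprints divided by the number set in either. A CPU sketch in plain Python (not the repo's CUDA/Thrust code) makes the formula concrete:

```python
# CPU sketch of Tanimoto similarity over bit-vector fingerprints -- the metric
# gpusimilarity computes on GPU (this is not the repo's CUDA/Thrust code).
def tanimoto(a: int, b: int) -> float:
    """Tanimoto coefficient of two fingerprints stored as Python int bitmasks."""
    common = bin(a & b).count("1")   # bits set in both fingerprints
    total = bin(a | b).count("1")    # bits set in either fingerprint
    return common / total if total else 1.0

# Toy 8-bit fingerprints:
fp1 = 0b1011_0010
fp2 = 0b1010_0110
print(tanimoto(fp1, fp2))  # 3 common bits / 5 union bits = 0.6
```

Production fingerprints are typically 1024-4096 bits; a GPU implementation evaluates this popcount ratio for one query against millions of database fingerprints in parallel.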

IPSC

Introduction to parallel scientific computing

Language: C++ · Stars: 1 · Issues: 0

ORDerly

Chemical reaction data & benchmarks: extraction and cleaning of data from the Open Reaction Database (ORD).

Language: Python · License: MIT · Stars: 61 · Issues: 0

crem

CReM: a framework for chemically reasonable mutations.

Language: Jupyter Notebook · License: BSD-3-Clause · Stars: 190 · Issues: 0

llm.c

LLM training in simple, raw C/CUDA

Language: Cuda · License: MIT · Stars: 21,970 · Issues: 0

MoleculeSTM

Multi-modal Molecule Structure-text Model for Text-based Editing and Retrieval, Nat Mach Intell 2023 (https://www.nature.com/articles/s42256-023-00759-6)

Language: Python · License: NOASSERTION · Stars: 188 · Issues: 0

tiny-training

On-Device Training Under 256KB Memory [NeurIPS'22]

Language: Python · License: MIT · Stars: 417 · Issues: 0

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Language: Python · License: MIT · Stars: 9,826 · Issues: 0
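The idea behind LoRA is to freeze the pretrained weight W and learn only a low-rank update ΔW = BA, scaled by α/r, so the adapted forward pass is y = x(W + (α/r)·BA). A minimal pure-Python sketch of that forward pass (the paper's formulation, not the loralib implementation; the toy shapes and values are made up):

```python
# Minimal LoRA forward pass: y = x @ (W + (alpha/r) * B @ A).
# Pure-Python sketch of the paper's idea, not the loralib implementation.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d, k, r = 3, 3, 1          # frozen weight is d x k; adapters have rank r << min(d, k)
alpha = 2.0                # scaling factor; the effective update is (alpha / r) * B @ A

W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # frozen pretrained weight
B = [[0.5], [0.0], [0.0]]  # d x r (zero-initialized in the paper; nonzero here for effect)
A = [[0.0, 1.0, 0.0]]      # r x k

def lora_forward(x):
    base = matmul([x], W)[0]                 # frozen path, never updated
    delta = matmul(matmul([x], B), A)[0]     # low-rank path; only B and A are trained
    return [b + (alpha / r) * d_ for b, d_ in zip(base, delta)]

print(lora_forward([1.0, 2.0, 3.0]))  # base [1, 2, 3] + 2.0 * [0, 0.5, 0] = [1.0, 3.0, 3.0]
```

Because B starts at zero, training begins exactly at the pretrained model, and only d·r + r·k parameters per layer are updated instead of d·k.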

gpt-fast

Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.

Language: Python · License: BSD-3-Clause · Stars: 5,370 · Issues: 0

TinyChatEngine

TinyChatEngine: On-Device LLM Inference Library

Language: C++ · License: MIT · Stars: 637 · Issues: 0

llama_cu_awq

LLaMA INT4 CUDA inference with AWQ.

Language: Cuda · License: MIT · Stars: 42 · Issues: 0

llamol

Official repository for the paper "Llamol: a dynamic multi-conditional generative transformer for de novo molecular design" (https://doi.org/10.1186/s13321-024-00863-8).

Language: Python · License: NOASSERTION · Stars: 17 · Issues: 0

CS149-parallel-computing

Learning materials for Stanford CS149: Parallel Computing.

Language: C · Stars: 136 · Issues: 0

CS149-asst2

Parallel Computing Assignment 2: Scheduling Task Graphs on a Multi-Core CPU.

Language: C++ · Stars: 5 · Issues: 0

AI-System

Educational resources on systems for AI.

Language: Python · License: CC-BY-4.0 · Stars: 3,206 · Issues: 0

CMU10-714

Learning material for CMU 10-714: Deep Learning Systems.

Language: Jupyter Notebook · Stars: 177 · Issues: 0

Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks

Language: Python · Stars: 87 · Issues: 0
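The parametric LIF (PLIF) idea is to make the membrane time constant τ a learnable parameter instead of a fixed hyperparameter. A minimal discrete-time LIF simulation (a sketch of the standard neuron model, not the repo's code; here τ is just a plain float rather than a trained parameter) shows the dynamics being parameterized:

```python
# Discrete-time leaky integrate-and-fire neuron. In the PLIF paper the membrane
# time constant tau is learned by backprop; here it is just a plain float.
def lif_run(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron over a train of input currents.
    Membrane update: V <- V + (X - (V - v_reset)) / tau; spike when V >= threshold."""
    v = v_reset
    spikes = []
    for x in inputs:
        v = v + (x - (v - v_reset)) / tau   # leaky integration toward the input
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset                      # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

print(lif_run([1.5, 1.5, 1.5, 1.5]))  # -> [0, 1, 0, 1]: integrates for two steps per spike
```

A smaller τ integrates input faster (more spikes), a larger τ leaks more slowly per step, which is why letting each layer learn its own τ can help.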

SNN_Calibration

PyTorch implementation of Spiking Neural Networks Calibration, ICML 2021.

Language: Python · License: MIT · Stars: 79 · Issues: 0

snn_optimal_conversion_pipeline

Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks

Language: Python · Stars: 33 · Issues: 0

hybrid-snn-conversion

Training spiking networks with hybrid ANN-SNN conversion and spike-based backpropagation.

Language: Python · Stars: 93 · Issues: 0

moses

Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models

Language: Python · License: MIT · Stars: 801 · Issues: 0

DIG

A library for graph deep learning research

Language: Python · License: GPL-3.0 · Stars: 1,816 · Issues: 0

bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.

Language: Python · License: MIT · Stars: 5,779 · Issues: 0
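The 8-bit path in methods like LLM.int8() builds on symmetric absmax quantization: scale each row so its largest magnitude maps to ±127, then store int8 codes plus one float scale per row. A pure-Python sketch of that scheme (an illustration of the idea, not the bitsandbytes API; unlike the asymmetric 4-bit scheme above, this one is symmetric around zero):

```python
# Absmax int8 quantization of one tensor row -- the symmetric per-row scheme
# that 8-bit inference methods build on (pure-Python sketch, not the
# bitsandbytes API).
def absmax_quantize(row):
    scale = max(abs(x) for x in row) / 127 or 1.0  # largest magnitude maps to +/-127
    codes = [round(x / scale) for x in row]        # int8 codes in [-127, 127]
    return codes, scale

def absmax_dequantize(codes, scale):
    return [c * scale for c in codes]

row = [0.5, -1.27, 0.02]
codes, scale = absmax_quantize(row)
print(codes)  # [50, -127, 2]
```

The known weakness of this scheme is outliers: one large value inflates the scale and crushes the rest of the row into a few codes, which is why mixed-precision outlier handling and k-bit variants exist.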