ZZK (MARD1NO)

Company: SiliconFlow

Location: Neverland

Home Page: https://mard1no.github.io/

ZZK's repositories

faster-nougat

An implementation of Nougat focused on processing PDFs locally.

Stargazers: 0 · Issues: 0

tiny-gpu

A minimal GPU design in Verilog to learn how GPUs work from the ground up

Stargazers: 0 · Issues: 0

EETQ

Easy and Efficient Quantization for Transformers

Stargazers: 0 · Issues: 0

BitBLAS

BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment.

License: MIT · Stargazers: 0 · Issues: 0

lightning-thunder

Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
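
A minimal usage sketch, assuming Thunder's thunder.jit entry point as documented in the project README; the toy module and tensor shapes here are placeholders.

import torch
import thunder

# Toy module; any nn.Module or function over tensors can be jitted.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.GELU(),
    torch.nn.Linear(512, 512),
)

thunder_model = thunder.jit(model)   # returns a compiled callable with the same signature
x = torch.randn(8, 512)
out = thunder_model(x)

# Inspect the final transformed trace that Thunder will execute.
print(thunder.last_traces(thunder_model)[-1])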

open-gpu-kernel-modules

NVIDIA Linux open GPU kernel modules with P2P support.

License: NOASSERTION · Stargazers: 0 · Issues: 0

quanto

A PyTorch quantization toolkit.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
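
For orientation, a minimal usage sketch assuming the quantize/freeze workflow shown in the quanto README; the toy model and the qint8 choice are illustrative.

import torch
from quanto import quantize, freeze, qint8  # newer releases publish this as optimum-quanto

# Placeholder model; quanto replaces supported modules with quantized equivalents.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# Quantize weights to int8; activations are left in full precision in this sketch.
quantize(model, weights=qint8)
freeze(model)  # materialize the quantized weights

with torch.no_grad():
    out = model(torch.randn(1, 256))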

triton

Development repository for the Triton language and compiler

Language: C++ · License: MIT · Stargazers: 0 · Issues: 0
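
To give a sense of what Triton code looks like, here is a small vector-addition kernel written against the public triton / triton.language API; the block size and helper names are illustrative.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x, y):
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

a = torch.randn(4096, device="cuda")
b = torch.randn(4096, device="cuda")
assert torch.allclose(add(a, b), a + b)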

auto-round

SOTA Weight-only Quantization Algorithm for LLMs

License: Apache-2.0 · Stargazers: 0 · Issues: 0

attorch

A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.

License: MIT · Stargazers: 0 · Issues: 0

qllm-eval

Code repository for "Evaluating Quantized Large Language Models".

License: MIT · Stargazers: 0 · Issues: 0

cutlass_master

CUDA Templates for Linear Algebra Subroutines

Language: C++ · License: NOASSERTION · Stargazers: 0 · Issues: 0

cccl

CUDA C++ Core Libraries

License: NOASSERTION · Stargazers: 0 · Issues: 0

cudnn-frontend

cudnn_frontend provides a C++ wrapper for the cuDNN backend API, along with samples showing how to use it.

License: MIT · Stargazers: 0 · Issues: 0

Triton-Puzzles

Puzzles for learning Triton

License: Apache-2.0 · Stargazers: 0 · Issues: 0

GPUSorting

OneSweep, implemented in CUDA, D3D12, and Unity-style compute shaders. Theoretically portable to all wave/warp/subgroup sizes.

License: NOASSERTION · Stargazers: 0 · Issues: 0

APPy

APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to OpenMP, and automatically compiles the annotated code to GPU kernels.

License: MIT · Stargazers: 0 · Issues: 0

KVQuant

KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

Stargazers: 0 · Issues: 0

LLMRoofline

Compare different hardware platforms via the Roofline Model for LLM inference tasks.

Stargazers: 0 · Issues: 0
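
For context, a tiny sketch of the roofline formula this kind of comparison relies on: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. The hardware numbers below are rough, illustrative figures, not outputs of this repository.

def roofline_flops(peak_flops, mem_bw_bytes_per_s, arithmetic_intensity):
    # Attainable FLOP/s = min(peak compute, memory bandwidth * FLOPs per byte).
    return min(peak_flops, mem_bw_bytes_per_s * arithmetic_intensity)

# Rough, illustrative A100-class numbers: 312 TFLOP/s FP16 peak, 2.0 TB/s HBM bandwidth.
peak = 312e12
bandwidth = 2.0e12

# Decode-phase GEMV with FP16 weights: ~2 FLOPs per parameter, 2 bytes read per parameter.
decode_intensity = 2 / 2  # FLOPs per byte
print(f"decode-bound: {roofline_flops(peak, bandwidth, decode_intensity):.3e} FLOP/s")

# Large-batch prefill GEMMs reuse weights, so intensity is far higher and compute becomes the limit.
prefill_intensity = 300.0
print(f"prefill-bound: {roofline_flops(peak, bandwidth, prefill_intensity):.3e} FLOP/s")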

KIVI

KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache

License: MIT · Stargazers: 0 · Issues: 0
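
A minimal sketch of asymmetric round-to-nearest quantization applied to a KV-cache tensor in plain PyTorch; it illustrates the general idea only, not KIVI's per-channel/per-token grouping or its fused kernels.

import torch

def asym_quantize(x, n_bits=2, dim=-1):
    # Per-slice asymmetric quantization: map [min, max] onto integer levels [0, 2^n - 1].
    qmax = 2 ** n_bits - 1
    x_min = x.amin(dim=dim, keepdim=True)
    x_max = x.amax(dim=dim, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    zero_point = (-x_min / scale).round()
    q = (x / scale + zero_point).round().clamp(0, qmax)
    return q.to(torch.uint8), scale, zero_point

def asym_dequantize(q, scale, zero_point):
    return (q.float() - zero_point) * scale

k = torch.randn(1, 8, 128, 64)   # (batch, heads, seq, head_dim) key cache, illustrative shape
q, s, z = asym_quantize(k, n_bits=2)
k_hat = asym_dequantize(q, s, z)
print((k - k_hat).abs().mean())  # mean quantization error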

fp6_llm

Efficient GPU support for LLM inference with 6-bit quantization (FP6).

License: Apache-2.0 · Stargazers: 0 · Issues: 0

gpt-fast

Simple and efficient PyTorch-native transformer text generation in under 1,000 lines of Python.

License: BSD-3-Clause · Stargazers: 1 · Issues: 0
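
As a reference point, a minimal greedy decoding loop in plain PyTorch with a stand-in model; gpt-fast itself builds on this basic pattern with KV caching, torch.compile, and quantization.

import torch

class TinyLM(torch.nn.Module):
    # Stand-in language model (embedding + linear head), just to make the loop runnable.
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.head = torch.nn.Linear(dim, vocab)

    def forward(self, ids):
        return self.head(self.emb(ids))  # (batch, seq, vocab)

@torch.no_grad()
def greedy_generate(model, input_ids, max_new_tokens=16):
    # Greedy decoding without a KV cache: re-run the full prefix each step.
    tokens = input_ids
    for _ in range(max_new_tokens):
        logits = model(tokens)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=-1)
    return tokens

out = greedy_generate(TinyLM(), torch.tensor([[1, 2, 3]]))
print(out.shape)  # torch.Size([1, 19])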

marlin

An FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
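
To illustrate why FP16xINT4 kernels save memory bandwidth, here is a small sketch of packing two 4-bit weight values per byte in PyTorch; marlin's actual weight layout and dequantization kernels are considerably more involved.

import torch

def pack_int4(q):
    # q: uint8 tensor of values in [0, 15]; pack adjacent pairs along the last dim into one byte.
    assert q.shape[-1] % 2 == 0
    lo = q[..., 0::2]
    hi = q[..., 1::2]
    return lo | (hi << 4)

def unpack_int4(packed):
    lo = packed & 0x0F
    hi = (packed >> 4) & 0x0F
    return torch.stack((lo, hi), dim=-1).flatten(-2)

w = torch.randint(0, 16, (128, 256), dtype=torch.uint8)
packed = pack_int4(w)                     # half the bytes of the unpacked tensor
assert torch.equal(unpack_int4(packed), w)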

gemma_pytorch

The official PyTorch implementation of Google's Gemma models

License: Apache-2.0 · Stargazers: 0 · Issues: 0