fuochii

fuochii's starred repositories

dspy

DSPy: The framework for programming—not prompting—foundation models

Language: Python · License: MIT · Stars: 15017 · Issues: 0
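
As a taste of DSPy's programming model, a minimal sketch assuming an OpenAI-backed LM; the model name, signature string, and question are illustrative, not taken from this listing:

```python
import dspy

# Configure a language model backend (model name is an illustrative assumption).
lm = dspy.OpenAI(model="gpt-3.5-turbo", max_tokens=250)
dspy.settings.configure(lm=lm)

# Declare *what* the module should do via a signature, rather than
# hand-writing a prompt; DSPy generates the prompt at call time.
qa = dspy.ChainOfThought("question -> answer")

pred = qa(question="What dialect does Torch-MLIR lower PyTorch programs into?")
print(pred.answer)
```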

LLMCompiler

[ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling

Language: Python · License: MIT · Stars: 1295 · Issues: 0

openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

Language: C++ · License: Apache-2.0 · Stars: 6581 · Issues: 0
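
A minimal inference sketch with the OpenVINO Python API; the model path, device name, and input shape are illustrative assumptions:

```python
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU']

# Read an IR model from disk and compile it for a target device
# (the path and device name here are placeholders).
model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="CPU")

# Run inference on a dummy input (shape assumed for illustration).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([x])[compiled.outputs[0]]
print(result.shape)
```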

nncf

Neural Network Compression Framework for enhanced OpenVINO™ inference

Language: Python · License: Apache-2.0 · Stars: 878 · Issues: 0
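
A sketch of NNCF's post-training INT8 quantization flow on an OpenVINO model; the model path, calibration data, and output path are illustrative, and a real workflow would calibrate on representative samples rather than random tensors:

```python
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path

# Tiny stand-in calibration set; in practice this is a real dataloader.
calibration_items = [
    np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(16)
]

def transform_fn(item):
    # Map a dataset item to the model's input format; identity here.
    return item

calibration_dataset = nncf.Dataset(calibration_items, transform_fn)

# Post-training quantization driven by the calibration data.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```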

mlir-aie

An MLIR-based toolchain for AMD AI Engine-enabled devices.

Language: MLIR · License: NOASSERTION · Stars: 267 · Issues: 0

Enzyme

High-performance automatic differentiation of LLVM and MLIR.

Language: LLVM · License: NOASSERTION · Stars: 1211 · Issues: 0

mlir-tutorial

MLIR for Beginners: a tutorial series

Language: C++ · Stars: 664 · Issues: 0

iree

A retargetable MLIR-based machine learning compiler and runtime toolkit.

Language: C++ · License: Apache-2.0 · Stars: 2525 · Issues: 0
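
A minimal sketch of IREE's Python compiler bindings; the MLIR function and the llvm-cpu backend choice are illustrative assumptions, and the resulting flatbuffer would then be loaded and executed via iree.runtime:

```python
import iree.compiler as ireec

# A small MLIR module to compile (illustrative; any core-dialect
# function IREE accepts would do).
MLIR_SRC = """
func.func @abs(%x: f32) -> f32 {
  %0 = math.absf %x : f32
  return %0 : f32
}
"""

# Compile to an IREE VM flatbuffer targeting the CPU backend.
binary = ireec.compile_str(MLIR_SRC, target_backends=["llvm-cpu"])
print(f"compiled {len(binary)} bytes")
```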

llvm-project

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.

Language: LLVM · License: NOASSERTION · Stars: 27225 · Issues: 0

torch-mlir

The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem.

Language: C++ · License: NOASSERTION · Stars: 1258 · Issues: 0
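
A sketch of the torch_mlir.compile entry point as it appeared in releases of this era; the module, input shape, and output_type are illustrative, and newer versions have reworked this API around the FX importer:

```python
import torch
import torch_mlir

class Scale(torch.nn.Module):
    def forward(self, x):
        return 2.0 * x + 1.0

# Trace the module and emit MLIR at the requested dialect level
# (shape and output_type chosen for illustration).
module = torch_mlir.compile(
    Scale(),
    torch.ones(3, 4),
    output_type="linalg-on-tensors",
)
print(module)
```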

mlir-extensions

Intel® Extension for MLIR. A staging ground for MLIR dialects and tools for Intel devices using the MLIR toolchain.

Language: MLIR · License: NOASSERTION · Stars: 112 · Issues: 0

awesome-tensor-compilers

A list of awesome compiler projects and papers for tensor computation and deep learning.

Stars: 2279 · Issues: 0

flashinfer

FlashInfer: Kernel Library for LLM Serving

Language: Cuda · License: Apache-2.0 · Stars: 908 · Issues: 0
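
A decode-attention sketch along the lines of FlashInfer's Python API; the head counts, head dimension, and cache length are illustrative, and a CUDA GPU is assumed:

```python
import torch
import flashinfer

# Illustrative shapes for a single-request decode step.
num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 32, 128, 2048

q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")

# Fused decode kernel: attends one query position over the whole KV cache.
o = flashinfer.single_decode_with_kv_cache(q, k, v)
print(o.shape)  # (num_qo_heads, head_dim)
```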

KVQuant

KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

Language: Python · Stars: 246 · Issues: 0

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python · License: Apache-2.0 · Stars: 23985 · Issues: 0
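
A minimal offline-generation sketch using vLLM's documented entry points; the model name, prompts, and sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# Load a model for offline batched inference (model name is a placeholder).
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The key idea behind PagedAttention is", "MLIR is"]
outputs = llm.generate(prompts, sampling_params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```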