Peng Tao (bergwolf)

Company: Ant Group

Location: China

Home Page: http://bergwolf.github.io/

Organizations
alipay
confidential-containers
dragonflyoss
hyperhq
intelligent-machine-learning
kata-containers
kcl-lang

Peng Tao's repositories

linux

Linux kernel source tree -- originally forked to track pNFS block client commits, later used for my Lustre kernel client cleanup patches, then for my NFS patches again. Now it tracks my random stuff.

Language: C · License: NOASSERTION · Stargazers: 3 · Issues: 3 · Issues: 0

libvirt

Automatic read-only mirror of http://libvirt.org/git/?p=libvirt.git;a=summary

Language: C · License: LGPL-2.1 · Stargazers: 1 · Issues: 1 · Issues: 0

kata-containers

Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. https://katacontainers.io/

Language: Rust · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

nydus

Dragonfly image service, providing fast, secure and easy access to container images.

Language: Rust · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

Auto-GPT

An experimental open-source attempt to make GPT-4 fully autonomous.

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

cutlass

CUDA Templates for Linear Algebra Subroutines

License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

DeepLearningSystem

Deep Learning System core principles introduction.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

diod

Distributed I/O Daemon - a 9P file server

Language: C · License: GPL-2.0 · Stargazers: 0 · Issues: 2 · Issues: 0

firecracker

Secure and fast microVMs for serverless computing.

Language: Rust · License: Apache-2.0 · Stargazers: 0 · Issues: 2 · Issues: 0

fuse-backend-rs

Rust crate for implementing FUSE backends

Language: Rust · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

grok-1

Grok open release

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

iree

A retargetable MLIR-based machine learning compiler and runtime toolkit.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

libfuse

The reference implementation of the Linux FUSE (Filesystem in Userspace) interface

Language: C · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

llama.cpp

Port of Facebook's LLaMA model in C/C++

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

llm-inference-solutions

A collection of available inference solutions for LLMs

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

logai

LogAI - An open-source library for log analytics and intelligence

License: BSD-3-Clause · Stargazers: 0 · Issues: 0 · Issues: 0

moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

Language: Go · License: Apache-2.0 · Stargazers: 0 · Issues: 2 · Issues: 0

ollama

Get up and running with Llama 2 and other large language models locally

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

open-gpu-kernel-modules

NVIDIA Linux open GPU kernel module source

License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

publish-crates

GitHub Action for easy publishing of Rust crates

Language: TypeScript · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

qemu

Official QEMU mirror. Please see http://wiki.qemu.org/Contribute/SubmitAPatch for how to submit changes to QEMU. Pull Requests are ignored.

Language: C · License: NOASSERTION · Stargazers: 0 · Issues: 2 · Issues: 0

stablehlo

Backward compatible ML compute opset inspired by HLO/MHLO

Language: MLIR · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

triton

Development repository for the Triton language and compiler

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

xla

A machine learning compiler for GPUs, CPUs, and ML accelerators

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0