hudengjun (hudengjunai)

Location: Hangzhou, Zhejiang, China

hudengjun's repositories

WorkAccelerate

Methods for speeding up day-to-day work

Language: Dockerfile · Stargazers: 3 · Issues: 2 · Issues: 0
Language: Python · Stargazers: 1 · Issues: 1 · Issues: 0

easyprofiler

A core library extracted from easy_profiler

Language: C++ · License: Apache-2.0 · Stargazers: 1 · Issues: 2 · Issues: 0

btrace

BTrace - a safe, dynamic tracing tool for the Java platform

Language: Java · Stargazers: 0 · Issues: 1 · Issues: 0

ChatGLM-Tuning

An affordable ChatGPT alternative, based on ChatGLM-6B + LoRA

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

click

Python composable command line interface toolkit

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 1 · Issues: 0

cmake-init

The missing CMake project initializer

License: GPL-3.0 · Stargazers: 0 · Issues: 0 · Issues: 0

Colab_notebooks

Google Colab notebooks for learning

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2 · Issues: 0

DeepSpeedExamples

Example models using DeepSpeed

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0
Language: Vim Script · Stargazers: 0 · Issues: 2 · Issues: 0

EnergonAI

Large-scale model inference.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

FasterTransformer

Transformer-related optimizations, including BERT and GPT

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

kubernetes-cloud

Getting Started with the CoreWeave Kubernetes GPU Cloud

Language: Python · Stargazers: 0 · Issues: 0 · Issues: 0

Learn-Vim

Learning Vim and Vimscript doesn't have to be hard. This is the guide that you're looking for.

License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

lightllm

LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

llama-deepspeed

Train LLaMA-30B on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

mcveil

An aspect-oriented programming library

Language: C++ · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

nginx_cmake

CMake files for nginx, for quickly setting up clangd-based code navigation and debugging with CMake and Vim

Language: CMake · Stargazers: 0 · Issues: 2 · Issues: 0

nvim-lspconfig

Quickstart configurations for the Nvim LSP client

Language: Lua · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

rtdsync

A C++ library implementing channel, timer, and wait group primitives for multi-thread synchronization, inspired by Go's design.

Language: C++ · Stargazers: 0 · Issues: 1 · Issues: 0

scanner

Efficient video analysis at scale

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

seastar

High performance server-side application framework

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

serving

A flexible, high-performance serving system for machine learning models

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

simple_vim

A minimal Vim config for use in online Docker environments

Stargazers: 0 · Issues: 1 · Issues: 0

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

thread-pool

A C++17 thread pool for high-performance scientific computing.

Language: C++ · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0
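The C++17 thread-pool pattern this repository's description names can be sketched in a few dozen lines: worker threads pull tasks from a shared queue, and submission returns a `std::future` for the result. This is a generic design sketch, not this library's actual API; the `ThreadPool` class and `submit` method are illustrative names.

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal C++17 thread-pool sketch (hypothetical, not this library's API).
// Workers block on a condition variable and pull tasks from a shared queue;
// submit() wraps the callable in a packaged_task and returns its future.
class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lk(mu_);
            stop_ = true;
        }
        cv_.notify_all();           // wake every worker so it can exit
        for (auto& w : workers_) w.join();
    }
    template <class F>
    auto submit(F f) -> std::future<decltype(f())> {
        auto task = std::make_shared<std::packaged_task<decltype(f())()>>(std::move(f));
        auto fut = task->get_future();
        {
            std::lock_guard<std::mutex> lk(mu_);
            tasks_.push([task] { (*task)(); });
        }
        cv_.notify_one();
        return fut;
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(mu_);
                cv_.wait(lk, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty()) return;  // drain before exiting
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // run outside the lock so other workers can proceed
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mu_;
    std::condition_variable cv_;
    bool stop_ = false;
};
```

Running the task outside the lock is the key design point: holding the mutex during execution would serialize the pool.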

vcpkg

C++ Library Manager for Windows, Linux, and MacOS

Language: CMake · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

vcpkg_libs

Some useful vcpkg libraries not yet included in upstream vcpkg

Language: CMake · Stargazers: 0 · Issues: 2 · Issues: 0

yhosts

AD hosts enthusiast group, group number: 201973909; updates suspended indefinitely. Farewell verses: "I urge you to drink one more cup of wine, for west of Yang Pass you will meet no old friends. Do not worry that the road ahead holds no friends; who in all the world does not know you?"

Language: Batchfile · Stargazers: 0 · Issues: 1 · Issues: 0