Bing Han (BrightXiaoHan)



Company: Ifun Game

Location: China



Organizations
PartTimeWorkers

Bing Han's repositories

CMakeTutorial

A hands-on CMake tutorial in Chinese.

Language: C++ | License: MIT | Stargazers: 1348 | Issues: 23 | Issues: 5

optimum-ascend

Optimized inference with Ascend and Hugging Face

Language: Python | License: Apache-2.0 | Stargazers: 7 | Issues: 1 | Issues: 0

fast-chatglm

Faster ChatGLM-6B with CTranslate2

Language: Python | License: MIT | Stargazers: 5 | Issues: 2 | Issues: 3

Ascend-text-generation-inference

huggingface/text-generation-inference adapted for Ascend NPUs

Language: Python | License: NOASSERTION | Stargazers: 4 | Issues: 0 | Issues: 1

HOME

My Personal Home Directory.

Language: Shell | License: MIT | Stargazers: 3 | Issues: 2 | Issues: 57

pytorch-npu

Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch

Language: Python | License: BSD-3-Clause | Stargazers: 2 | Issues: 0 | Issues: 0

Blogs

My personal blog.

License: MIT | Stargazers: 1 | Issues: 1 | Issues: 0

elasticsearch-jieba-plugin

jieba analysis plugin for elasticsearch 7.0.0, 6.4.0, 6.0.0, 5.4.0, 5.3.0, 5.2.2, 5.2.1, 5.2, 5.1.2, 5.1.1

Language: Java | License: MIT | Stargazers: 1 | Issues: 0 | Issues: 0

ChatGLM-Efficient-Tuning

Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1 | Issues: 0

CTranslate2

Fast inference engine for Transformer models

Language: C++ | License: MIT | Stargazers: 0 | Issues: 1 | Issues: 0
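For context on what this engine looks like in use, here is a minimal Python sketch of the kind of offline translation call CTranslate2 exposes; the model directory and tokens below are placeholders, not files from any repository in this listing.

```python
import ctranslate2

# Load a converted translation model; "ende_ctranslate2/" is a placeholder path
# for a directory produced by the CTranslate2 conversion tools.
translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")

# CTranslate2 operates on pre-tokenized input; these SentencePiece-style tokens
# are illustrative only.
results = translator.translate_batch([["▁Hello", "▁world", "!"]])
print(results[0].hypotheses[0])  # best hypothesis as a list of target tokens
```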

faster-whisper

Faster Whisper transcription with CTranslate2

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
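A minimal sketch of typical faster-whisper usage, assuming the WhisperModel API from its documentation; the model size, device, and audio path are illustrative placeholders.

```python
from faster_whisper import WhisperModel

# Model size, device, and compute type are illustrative choices.
model = WhisperModel("small", device="cpu", compute_type="int8")

# "audio.wav" is a placeholder path; transcribe() returns segments plus info.
segments, info = model.transcribe("audio.wav", beam_size=5)
print("Detected language:", info.language)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```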

fastllm

A pure-C++ LLM acceleration library for all platforms, with Python bindings; ChatGLM-6B-class models can reach 10,000+ tokens/s on a single GPU; supports GLM, LLaMA, and MOSS base models and runs smoothly on mobile devices.

Language: C++ | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

langchain

🦜🔗 Build context-aware reasoning applications

License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

langchain-ChatGLM

langchain-ChatGLM: local-knowledge-based ChatGLM question answering with LangChain

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1 | Issues: 0

lightllm

LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

nanoRWKV

The nanoGPT-style implementation of RWKV Language Model - an RNN with GPT-level LLM performance.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

NvChad

Blazing fast Neovim config providing solid defaults and a beautiful UI, enhancing your neovim experience.

Language: Lua | License: GPL-3.0 | Stargazers: 0 | Issues: 0 | Issues: 0

nvchad-starter

Starter config for NvChad

Language: Lua | License: GPL-3.0 | Stargazers: 0 | Issues: 0 | Issues: 0

optimum

🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

ragflow

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

rwkv.c

Inference Llama 2 in one file of pure C

Language: C | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

sacrebleu

Reference BLEU implementation that auto-downloads test sets and reports a version string to facilitate cross-lab comparisons

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1 | Issues: 0
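A small sketch of how sacrebleu is commonly called from Python; the hypothesis and reference strings are toy data, not results from any of these projects.

```python
import sacrebleu

# Toy data: one hypothesis stream and one reference stream (hence the nested list).
hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU score as a float
print(bleu)        # formatted summary string
```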

speaker-verification

Speaker verification using pyannote.

Language: Python | Stargazers: 0 | Issues: 2 | Issues: 0

ssr-command-client

✈️ The command-line client for SSR, based on Python 3

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

SwissArmyTransformer

SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1 | Issues: 0

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
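A minimal offline-inference sketch in the style of vLLM's quickstart, assuming the LLM and SamplingParams entry points; the model name and prompt are illustrative only.

```python
from vllm import LLM, SamplingParams

# Model name and sampling settings are illustrative; any supported HF model works.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=32)

# Generate completions for a single prompt and print the first sampled output.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```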