Xidong Wang (wangxidong06)


Company: PhD @ The Chinese University of Hong Kong, Shenzhen; BA @ Beijing Institute of Technology

Email: xidongwang1@link.cuhk.edu.cn

Home Page: https://scholar.google.com/citations?user=WJeSzQMAAAAJ&hl=en


Xidong Wang's repositories

Notes-and-Assigns-for-CS224N

Homework and notes for CS224N

Language: JavaScript | Stargazers: 9 | Issues: 2 | Issues: 0

BLAS_testbench

A testbench for Basic Linear Algebra Subprograms (BLAS)

Language: C | Stargazers: 1 | Issues: 1 | Issues: 0
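A BLAS testbench typically times an optimized GEMM call against a reference and checks the result. The repo itself is C; below is a minimal sketch of the same idea in Python, assuming SciPy's BLAS wrappers (matrix sizes are arbitrary):

    # Minimal GEMM timing sketch (assumes SciPy's CBLAS-backed sgemm wrapper).
    import time
    import numpy as np
    from scipy.linalg.blas import sgemm

    n = 512
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    start = time.perf_counter()
    c = sgemm(1.0, a, b)           # C = 1.0 * A @ B via BLAS sgemm
    blas_s = time.perf_counter() - start

    start = time.perf_counter()
    c_ref = a @ b                  # NumPy reference (also BLAS-backed), used as a sanity check
    ref_s = time.perf_counter() - start

    print(f"sgemm: {blas_s:.4f}s  numpy: {ref_s:.4f}s  max err: {np.abs(c - c_ref).max():.2e}")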

Optimized-LLM.cpp

Optimized LLM.cpp code (llama.cpp, bloomz.cpp, whisper.cpp) with matrix multiplication implemented via BLIS

Language: C | Stargazers: 1 | Issues: 1 | Issues: 0

acl-2023

Repository for the ACL 2023 conference website

Language: JavaScript | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

DoLa

Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"

Language: Python | Stargazers: 0 | Issues: 0 | Issues: 0
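DoLa's core move is contrasting the next-token distribution read off a mature (final) layer with one read off a premature (early) layer. A minimal sketch of that idea, assuming a Hugging Face GPT-2 checkpoint; the official implementation adds dynamic premature-layer selection and a plausibility constraint omitted here:

    # Sketch of DoLa-style layer contrast (assumes transformers + a GPT-2 checkpoint;
    # the choice of layer 6 as the "premature" layer is arbitrary).
    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states  # embeddings + one per layer

    mature = model.lm_head(hs[-1][:, -1])  # final-layer logits (already normed)
    # Early exit: apply the final layer norm, then the shared head.
    premature = model.lm_head(model.transformer.ln_f(hs[6][:, -1]))

    # Contrast: log p_mature - log p_premature favors tokens the late layers "add".
    scores = F.log_softmax(mature, dim=-1) - F.log_softmax(premature, dim=-1)
    print(tok.decode(scores.argmax(-1)))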

emnlp-2023

Repository containing the website for the EMNLP 2023 conference

Language: HTML | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

EasyContext

Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
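One common ingredient in context-extension recipes of this kind is RoPE position interpolation. A minimal sketch, assuming Hugging Face transformers' rope_scaling option for Llama-style models; EasyContext's actual recipes (e.g. sequence-parallel training) go well beyond this:

    # Sketch: stretching a Llama-style model's usable context via RoPE
    # position interpolation (assumes transformers' `rope_scaling` kwarg).
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",                      # assumed checkpoint
        rope_scaling={"type": "linear", "factor": 8.0},  # 4k positions -> ~32k
    )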

Firefly

Firefly (流萤): a Chinese conversational large language model stack (full-parameter fine-tuning + QLoRA), supporting fine-tuning of large models such as Baichuan2, CodeLlama, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya, and Bloom

Language: Python | Stargazers: 0 | Issues: 0 | Issues: 0
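A minimal sketch of the QLoRA setup such a stack builds on, assuming the transformers, bitsandbytes, and peft libraries; the checkpoint name and target module are assumptions, and Firefly wraps all of this with its own data and trainer code:

    # Sketch of a QLoRA setup (assumes bitsandbytes + peft; checkpoint and
    # target_modules are assumptions, not Firefly's exact configuration).
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "baichuan-inc/Baichuan2-7B-Base",
        quantization_config=bnb,
        trust_remote_code=True,
    )
    lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["W_pack"],  # Baichuan2's fused attention proj (assumption)
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()           # only LoRA adapters train; base stays 4-bit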

flash-attention

Fast and memory-efficient exact attention

Language: Python | License: BSD-3-Clause | Stargazers: 0 | Issues: 0 | Issues: 0
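A minimal usage sketch, assuming the flash_attn package's flash_attn_func entry point; it requires fp16/bf16 tensors on a CUDA device:

    # Sketch: calling FlashAttention directly (assumes the flash-attn package and a GPU).
    import torch
    from flash_attn import flash_attn_func

    b, s, h, d = 2, 1024, 16, 64                 # batch, seqlen, heads, head_dim
    q = torch.randn(b, s, h, d, dtype=torch.float16, device="cuda")
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    out = flash_attn_func(q, k, v, causal=True)  # exact attention without materializing s x s scores
    print(out.shape)                             # (2, 1024, 16, 64)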

llama-mistral

Inference code for Mistral and Mixtral hacked into the original Llama implementation

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

llama.cpp

Port of Facebook's LLaMA model in C/C++

Language: C | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
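A minimal sketch of driving it from Python, assuming the llama-cpp-python bindings and a GGUF model file at a hypothetical path:

    # Sketch: running a llama.cpp model via the llama-cpp-python bindings
    # (the model path is a placeholder for a real GGUF file).
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf")
    out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
    print(out["choices"][0]["text"])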

LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

LLMSFT_template

Scripts and code for various SFT (supervised fine-tuning) acceleration frameworks

Language: Python | Stargazers: 0 | Issues: 1 | Issues: 0

Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

Megatron-LLaMA

Best practice for training LLaMA models in Megatron-LM

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

neurips_llm_efficiency_challenge

NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day

Language: Python | Stargazers: 0 | Issues: 0 | Issues: 0

OpenAIAPI

Use the OpenAI API stably and quickly

Language: Python | Stargazers: 0 | Issues: 1 | Issues: 0
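This page doesn't show the repo's actual interface; below is a generic sketch of the "stable" part, an exponential-backoff retry wrapper (the wrapper name is hypothetical) around the official openai client:

    # Generic retry-with-backoff sketch around the official openai client
    # (not this repo's actual API; stable_chat is a hypothetical helper).
    import time
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def stable_chat(messages, model="gpt-3.5-turbo", retries=5):
        for attempt in range(retries):
            try:
                resp = client.chat.completions.create(model=model, messages=messages)
                return resp.choices[0].message.content
            except Exception:
                time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s, ...
        raise RuntimeError("OpenAI API call failed after all retries")

    print(stable_chat([{"role": "user", "content": "Say hi"}]))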

opencompass

OpenCompass is an LLM evaluation platform supporting a wide range of models (LLaMA, LLaMA2, ChatGLM2, ChatGPT, Claude, etc.) over 50+ datasets.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

OpenRLHF

A Ray-based high-performance RLHF framework (supports 7B models on an RTX 4090 and 34B models on an A100)

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

PromethAI-Memory

Memory management for AI applications and AI agents

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
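As a toy illustration of the concept only (the project's real design is far richer), a bounded conversation memory that evicts the oldest turns:

    # Toy sketch of agent memory management: a bounded buffer that evicts old turns.
    # (Illustrates the concept only; not PromethAI-Memory's actual classes.)
    from collections import deque

    class ConversationMemory:
        def __init__(self, max_turns: int = 8):
            self.turns = deque(maxlen=max_turns)  # oldest turns drop automatically

        def add(self, role: str, content: str) -> None:
            self.turns.append({"role": role, "content": content})

        def context(self) -> list[dict]:
            return list(self.turns)               # what gets fed back to the model

    mem = ConversationMemory(max_turns=2)
    mem.add("user", "Hello")
    mem.add("assistant", "Hi!")
    mem.add("user", "What did I just say?")       # evicts "Hello"
    print(mem.context())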

TensorRT

NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
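A minimal sketch of the typical workflow, building a serialized engine from an ONNX file with the TensorRT 8.x Python API; the file paths are placeholders and details vary across versions:

    # Sketch: building a TensorRT engine from an ONNX model (TensorRT 8.x-style API;
    # "model.onnx" and "model.plan" are placeholder paths).
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:
        assert parser.parse(f.read()), "ONNX parse failed"

    config = builder.create_builder_config()
    engine = builder.build_serialized_network(network, config)  # serialized plan
    with open("model.plan", "wb") as f:
        f.write(engine)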

UltraFastBERT

Code for the UltraFastBERT paper

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
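UltraFastBERT's speedup comes from fast-feedforward (FFF) layers, which route each token down a binary tree so only O(log W) of a layer's W neurons execute. A simplified conceptual sketch; the parameter shapes and hard sign-based routing are illustrative, not the paper's exact formulation:

    # Conceptual sketch of a fast-feedforward (FFF) layer: a binary tree routes
    # each input to one leaf neuron, so only O(depth) neurons run per token.
    import torch
    import torch.nn as nn

    class FastFeedforward(nn.Module):
        def __init__(self, dim: int, depth: int = 3):
            super().__init__()
            self.depth = depth
            self.nodes = nn.Parameter(torch.randn(2 ** depth - 1, dim))  # routing vectors
            self.leaf_in = nn.Parameter(torch.randn(2 ** depth, dim))    # leaf neurons
            self.leaf_out = nn.Parameter(torch.randn(2 ** depth, dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
            idx = torch.zeros(x.shape[0], dtype=torch.long)
            for _ in range(self.depth):                       # descend the tree
                score = (x * self.nodes[idx]).sum(-1)
                idx = 2 * idx + 1 + (score > 0).long()        # go left/right by sign
            leaf = idx - (2 ** self.depth - 1)                # map tree index to leaf index
            act = torch.relu((x * self.leaf_in[leaf]).sum(-1, keepdim=True))
            return act * self.leaf_out[leaf]

    layer = FastFeedforward(dim=64)
    print(layer(torch.randn(4, 64)).shape)                    # (4, 64)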