Gehao Zhang's repositories
fuck_water_sort
A solver for the water sort puzzle game; just run the program and it reads a screenshot.
wedding_cards
nothing special
ByteTransformer
Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052
coAST
Universal and language-independent abstract syntax tree
CUP
Just-In-Time Comment UPdater
defects4j
A Database of Real Faults and an Experimental Infrastructure to Enable Controlled Experiments in Software Engineering Research
dupl
A tool for code clone detection
FasterTransformer
Transformer-related optimizations, including BERT and GPT
funcom
Funcom Source Code Summarization Tool - Public Release
gpt-code-clippy
Full description can be found here: https://discuss.huggingface.co/t/pretrain-gpt-neo-for-open-source-github-copilot-model/7678?u=ncoop57
invase-gnn
Extension of INVASE to GNNs.
LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
research-method
Materials on academic paper writing and shared resources
snake-pygame
:snake: A snake game written in Python using the Pygame library
tabby
Self-hosted AI coding assistant
TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
ThesisNotesTemplate
Try using LaTeX to annotate PDF files!
torch2trt
An easy-to-use PyTorch-to-TensorRT converter
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs