hxer7963's repositories
FacialExpressionRecognition
Compares multiple facial expression recognition experiments and finds that replacing the softmax layer with an SVM yields better classification results (65.47% accuracy on the FER2013 dataset).
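The idea above can be sketched as follows: train both a softmax (multinomial logistic regression) head and a linear SVM head on the same fixed feature vectors and compare their accuracy. This is a minimal illustration using scikit-learn and synthetic stand-in features, not the repository's actual code or the FER2013 data.

```python
# Hypothetical sketch: swap a softmax classifier head for a linear SVM
# on top of fixed features. All names and data here are illustrative.
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in for CNN feature vectors; FER2013 has 7 expression classes.
X, y = make_classification(n_samples=700, n_features=64, n_informative=32,
                           n_classes=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Softmax baseline: multinomial logistic regression on the features.
softmax = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# SVM head: a linear SVM trained on the identical features.
svm = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)

print(f"softmax acc: {softmax.score(X_te, y_te):.3f}")
print(f"svm acc:     {svm.score(X_te, y_te):.3f}")
```

In practice the features would come from the penultimate layer of a trained CNN; the SVM's margin-based loss can generalize better than cross-entropy on small, noisy datasets like FER2013.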
PatternRecognitionCourse
Materials from the pattern recognition course at Nanjing University, such as lecture notes and homework.
abseil-cpp
Abseil Common Libraries (C++)
AutoGPTQ
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
awesome-books
:books: Books recommended for developers to read
awesome-programming-books
📚 PDF files of classic technical books, continuously updated...
awesome-tensor-compilers
A list of awesome compiler projects and papers for tensor computation and deep learning.
DeepLearningExamples
Deep Learning Examples
DesignPattern
The complete set of 23 design patterns implemented with C++11 (a full design-pattern implementation in C++11)
homemade-machine-learning
🤖 Python examples of popular machine learning algorithms, with interactive Jupyter demos and the math explained
hxer7963.github.io
My Blog / Jekyll Themes / PWA
InstallationNotes
Some installation notes.
lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
llama.cpp
LLM inference in C/C++
MatrixSlow
A simple deep learning framework in pure Python, built for the purpose of learning deep learning.
Oh-My-tkkc
An automated crawler for an online course whose manual operation is mindless and tedious; this program fully automates the process.
parallel-hashmap
A family of header-only, very fast and memory-friendly hashmap and btree containers.
pytorch-distributed
A quickstart and benchmark for pytorch distributed training.
pytorch-pruning
PyTorch implementation of "Pruning Convolutional Neural Networks for Resource Efficient Inference" (arXiv:1611.06440)
tensorflow
An Open Source Machine Learning Framework for Everyone
TensorFlow-Examples
TensorFlow Tutorial and Examples for Beginners with Latest APIs
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs