Sihan Chen's repositories
spam-detection-neo4j
GCN-based Anti-Spam (GAS) model with Neo4j prototype
hesse-benchmarks
A multi-threaded Kafka consumer that consumes and stores Hesse benchmarks for further analytics
BasicSR
Open-source image and video restoration toolbox for super-resolution, denoising, deblurring, etc. Currently includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also supports StyleGAN2 and DFDNet.
Bert-VITS2
VITS2 backbone with multilingual BERT
DGFraud-TF2
A Deep Graph-based Toolbox for Fraud Detection in TensorFlow 2.X
emqx
An Open-Source, Cloud-Native, Distributed MQTT Message Broker for IoT.
facexlib
FaceXlib aims to provide ready-to-use face-related functions based on current SOTA open-source methods.
flink-statefun-docker
Docker packaging for Apache Flink Stateful Functions
flink-statefun-playground
Apache Flink Stateful Functions Playground
GPT-SoVITS
1 minute of voice data can be used to train a good TTS model! (few-shot voice cloning)
langchain
🦜🔗 Build context-aware reasoning applications
neural-compressor
Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, in pursuit of optimal inference performance.
notebooks
Notebooks using the Hugging Face libraries 🤗
oneDNN
oneAPI Deep Neural Network Library (oneDNN)
optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
python_backend
Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.
quantized-bert-vits2
Int8 quantization for Bert-VITS2
remote-statefun-demo
A minimal deployment demo of Flink Stateful Functions on AWS Lambda as remote functions
Spycsh.github.io
Spycsh's personal homepage
tf-saved-model-text-generation
Minimal code to fix inference issues after reloading a TF SavedModel for text generation
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs