jiqing-feng's repositories

accelerate

🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed-precision support

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
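The core idea behind k-bit quantization libraries like bitsandbytes can be sketched in a few lines: scale floats by the absolute maximum so they fit the integer range, then rescale to recover approximations. A minimal pure-Python sketch of absmax int8 quantization; the function names are illustrative, not the library's API:

```python
# Conceptual sketch of absmax quantization, the basic idea behind
# k-bit quantization schemes. Illustrative only; not bitsandbytes' API.

def quantize_absmax(values, bits=8):
    """Map floats to signed integers by scaling with the absolute maximum."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate floats from the integer codes."""
    return [q * scale for q in quantized]

weights = [0.62, -1.30, 0.05, 0.91]
codes, scale = quantize_absmax(weights)
restored = dequantize(codes, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
```

The real library applies this per block of weights and pairs it with specialized CUDA kernels; the sketch only shows the arithmetic.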

ClipBERT

[CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning on image-text and video-text tasks.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Diffusion-MU-Attack

The official implementation of the paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and efficient attack method that generates toxic content from safety-driven diffusion models.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

FlexFlow

A distributed deep learning framework.

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 0

GEAR

GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM

Language: Python · Stargazers: 0 · Issues: 0

intel-extension-for-transformers

Extends Hugging Face transformers APIs for Transformer-based models and improves the productivity of inference deployment. With extremely compressed models, the toolkit can greatly improve inference efficiency on Intel platforms.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

lm-evaluation-harness

A framework for few-shot evaluation of autoregressive language models.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

neural-compressor

Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks to pursue optimal inference performance.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

optimum

🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

optimum-intel

Accelerate inference of 🤗 Transformers with Intel optimization tools

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
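The low-rank adapter idea behind LoRA-style parameter-efficient fine-tuning, as implemented in peft, can be illustrated without the library: instead of updating a full weight matrix W, train a small low-rank product B @ A and add it to W. A minimal pure-Python sketch; `apply_lora` and the matrix shapes are illustrative assumptions, not peft's API:

```python
# Sketch of a LoRA-style low-rank update: W_adapted = W + alpha * (B @ A).
# Illustrative only; the peft library wraps this around real model layers.

def matmul(a, b):
    """Naive dense matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def apply_lora(W, A, B, alpha=1.0):
    """Return W + alpha * (B @ A), the adapted weight matrix."""
    delta = matmul(B, A)
    return [[W[i][j] + alpha * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight with a rank-1 update: B is 2x1, A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [1.0]]
A = [[2.0, 0.0]]
W_adapted = apply_lora(W, A, B)   # [[2.0, 0.0], [2.0, 1.0]]
```

The payoff is that only B and A (rank × dim parameters each) are trained, while the much larger W stays frozen.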

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

pytorch_geometric

Graph Neural Network Library for PyTorch

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

tau

Pipeline Parallelism for PyTorch

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 0
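The scheduling idea behind pipeline parallelism (as in tau and GPipe-style systems) is that with S stages and M microbatches, overlapping the work takes S + M - 1 time steps rather than S * M. A small pure-Python simulation of such a schedule; the function name and output shape are illustrative assumptions, not tau's API:

```python
# Sketch of a GPipe-style forward schedule: stage s processes
# microbatch m at time step t = s + m. Illustrative only.

def pipeline_schedule(num_stages, num_microbatches):
    """Return, for each time step, the active (stage, microbatch) pairs."""
    steps = []
    for t in range(num_stages + num_microbatches - 1):
        active = [(s, t - s) for s in range(num_stages)
                  if 0 <= t - s < num_microbatches]
        steps.append(active)
    return steps

sched = pipeline_schedule(3, 4)
# 3 + 4 - 1 = 6 time steps; at step 2 all three stages are busy.
```

The middle steps, where every stage is active, are where pipelining recovers hardware utilization; the ramp-up and ramp-down steps are the "pipeline bubble".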

transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

models

Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs

License: Apache-2.0 · Stargazers: 0 · Issues: 0

ProtST

Camera-ready repo for ProtST

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

q-diffusion

[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.

License: MIT · Stargazers: 0 · Issues: 0

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0