Smol Vision 🐣

Recipes for shrinking, optimizing, customizing cutting edge vision models.

| Topic | Notebook | Description |
|---|---|---|
| Quantization / ONNX | Faster and Smaller Zero-shot Object Detection with Optimum | Quantize the state-of-the-art zero-shot object detection model OWLv2 using Optimum ONNX Runtime tools. |
| VLM Fine-tuning | Fine-tune PaliGemma | Fine-tune the state-of-the-art vision-language model PaliGemma using transformers. |
| Intro to Optimum/ORT | Optimizing DETR with 🤗 Optimum | A soft introduction to exporting vision models to ONNX and quantizing them. |
| Model Shrinking | Knowledge Distillation for Computer Vision | Knowledge distillation for image classification. |
| Quantization | Fit in vision models using Quanto | Fit vision models into smaller hardware using Quanto. |
| Speed-up | Faster foundation models with torch.compile | Improve latency for foundation models using torch.compile. |
| Speed-up / Memory Optimization | Vision language model serving using TGI (SOON) | Explore speed-ups and memory improvements for vision-language model serving with text-generation-inference (TGI). |
| Quantization / Optimum / ORT | All levels of quantization and graph optimizations for Image Segmentation using Optimum (SOON) | End-to-end model optimization using Optimum. |
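
The short sketches below illustrate a few of the techniques from the table; they are minimal, hedged examples and not substitutes for the notebooks.

**ONNX export and quantization with Optimum ONNX Runtime.** A minimal sketch of exporting a transformers vision model to ONNX and applying dynamic int8 quantization. The notebooks target OWLv2 and DETR; this sketch uses an image-classification checkpoint (ViT) as a stand-in, and the model id and save paths are illustrative assumptions.

```python
# Sketch: export a vision checkpoint to ONNX, then quantize it with ONNX Runtime.
# Checkpoint and output directories are illustrative stand-ins.
from optimum.onnxruntime import ORTModelForImageClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "google/vit-base-patch16-224"  # stand-in checkpoint

# Export the PyTorch checkpoint to ONNX on the fly and save the graph.
ort_model = ORTModelForImageClassification.from_pretrained(model_id, export=True)
ort_model.save_pretrained("vit_onnx")

# Apply dynamic (weight-only) int8 quantization to the exported graph.
quantizer = ORTQuantizer.from_pretrained("vit_onnx")
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="vit_onnx_quantized", quantization_config=qconfig)
```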
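**Fine-tuning PaliGemma with transformers.** A minimal sketch of supervised fine-tuning with the Trainer API. The dataset id, its column names, the prompt format, and the hyperparameters are assumptions for illustration; the notebook covers preprocessing and freezing strategies in more detail.

```python
# Sketch: fine-tune PaliGemma on a small VQA-style dataset with Trainer.
# Dataset id and column names ("question", "multiple_choice_answer", "image")
# are assumptions; adjust them to your data.
from datasets import load_dataset
from transformers import (
    PaliGemmaForConditionalGeneration,
    PaliGemmaProcessor,
    Trainer,
    TrainingArguments,
)

model_id = "google/paligemma-3b-pt-224"
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

dataset = load_dataset("merve/vqav2-small", split="validation")

def collate_fn(examples):
    prompts = ["answer " + ex["question"] for ex in examples]
    answers = [ex["multiple_choice_answer"] for ex in examples]
    images = [ex["image"].convert("RGB") for ex in examples]
    # `suffix` makes the processor build label ids for the answer tokens.
    return processor(text=prompts, images=images, suffix=answers,
                     return_tensors="pt", padding="longest")

args = TrainingArguments(
    output_dir="paligemma-finetuned",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
    remove_unused_columns=False,  # keep raw columns for the collator
)

trainer = Trainer(model=model, args=args, train_dataset=dataset,
                  data_collator=collate_fn)
trainer.train()
```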
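**Knowledge distillation for image classification.** A minimal sketch of response-based distillation: the student is trained on a blend of hard-label cross-entropy and KL divergence to the teacher's temperature-softened logits. The temperature and mixing weight are illustrative choices.

```python
# Sketch: distillation loss combining soft (teacher) and hard (label) targets.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: usual cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Inside a training step (teacher frozen, student trainable):
# with torch.no_grad():
#     teacher_logits = teacher(pixel_values).logits
# student_logits = student(pixel_values).logits
# loss = distillation_loss(student_logits, teacher_logits, labels)
```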
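**Fitting vision models into smaller hardware with Quanto.** A minimal sketch of int8 weight quantization with optimum-quanto. The checkpoint is an illustrative stand-in, and depending on the installed version the import path may be `optimum.quanto` or `quanto`.

```python
# Sketch: quantize a vision model's weights to int8 with Quanto.
from transformers import AutoModelForImageClassification
from optimum.quanto import quantize, freeze, qint8  # or `from quanto import ...`

model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

# Quantize the weights to int8, then freeze so the quantized weights
# replace the original float parameters in the module.
quantize(model, weights=qint8)
freeze(model)
```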
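**Faster foundation models with torch.compile.** A minimal sketch of compiling a vision backbone for lower latency. The checkpoint, input size, and compilation mode are illustrative, a CUDA device is assumed, and actual speed-ups depend on hardware and batch size.

```python
# Sketch: compile a vision backbone with torch.compile and run inference.
# Assumes a CUDA GPU is available; the checkpoint is an illustrative choice.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("facebook/dinov2-base").to("cuda").eval()
compiled_model = torch.compile(model, mode="reduce-overhead")

pixel_values = torch.randn(1, 3, 224, 224, device="cuda")

with torch.no_grad():
    # The first call triggers compilation; later calls reuse the optimized graph.
    compiled_model(pixel_values=pixel_values)
    output = compiled_model(pixel_values=pixel_values)
```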
