totkoktom / sahi

Framework agnostic sliced/tiled inference + interactive ui + error analysis plots

Home Page: https://ieeexplore.ieee.org/document/9897990

SAHI: Slicing Aided Hyper Inference

A lightweight vision library for performing large scale object detection & instance segmentation

Overview

Object detection and instance segmentation are among the most widely used applications in computer vision. However, detecting small objects and running inference on large images remain major challenges in practical use. SAHI helps developers overcome these real-world problems with a rich set of vision utilities.

| Command | Description |
|---|---|
| `predict` | perform sliced/standard video/image prediction using any yolov5/mmdet/detectron2/huggingface model |
| `predict-fiftyone` | perform sliced/standard prediction using any yolov5/mmdet/detectron2/huggingface model and explore results in the fiftyone app |
| `coco slice` | automatically slice COCO annotation and image files |
| `coco fiftyone` | explore multiple prediction results on your COCO dataset with the fiftyone ui, ordered by number of misdetections |
| `coco evaluate` | evaluate classwise COCO AP and AR for given predictions and ground truth |
| `coco analyse` | calculate and export many error analysis plots |
| `coco yolov5` | automatically convert any COCO dataset to yolov5 format |
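
The same sliced inference that `predict` runs from the command line can also be driven from Python. The sketch below is a hedged example, assuming a reasonably recent sahi release where `AutoDetectionModel` and `get_sliced_prediction` are available; the checkpoint path, image path, and slice sizes are placeholder assumptions:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap any supported detector behind a single interface (yolov5 shown here).
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="yolov5s6.pt",        # placeholder checkpoint path
    confidence_threshold=0.4,
    device="cuda:0",                 # or "cpu"
)

# Slice the image into overlapping tiles, run inference on each tile,
# then merge the per-tile detections back into full-image coordinates.
result = get_sliced_prediction(
    "demo_data/small-vehicles1.jpeg",  # placeholder image path
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

result.export_visuals(export_dir="demo_data/")   # annotated image on disk
coco_annotations = result.to_coco_annotations()  # COCO-style dicts
```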

Quick Start Examples

Check this link for a list of competitions that SAHI helped win 🚀

Tutorials

sahi-yolox tutorial

Installation

Installation details:
  • Install sahi using pip:
pip install sahi
  • On Windows, Shapely needs to be installed via Conda:
conda install -c conda-forge shapely
  • Install your desired version of pytorch and torchvision:
conda install pytorch=1.11.0 torchvision=0.12.0 cudatoolkit=11.3 -c pytorch
  • Install your desired detection framework (yolov5):
pip install yolov5==6.2.1
  • Install your desired detection framework (mmdet):
pip install mmcv-full==1.6.1 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html
pip install mmdet==2.25.1
  • Install your desired detection framework (detectron2):
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
  • Install your desired detection framework (huggingface):
pip install transformers timm
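
After the steps above, a quick sanity check from Python (a minimal sketch, nothing beyond imports and version prints) confirms that sahi and the chosen torch build resolve together:

```python
import torch
import torchvision
import sahi

# Print resolved versions and CUDA availability to catch mismatches early.
print("sahi:", sahi.__version__)
print("torch:", torch.__version__, "| cuda available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)
```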

Framework Agnostic Sliced/Standard Prediction

Find detailed info on the `sahi predict` command at cli.md.

Find detailed info on video inference at video inference tutorial.

Find detailed info on image/dataset slicing utilities at slicing.md.
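
As a companion to slicing.md, here is a hedged sketch of the slicing utilities in `sahi.slicing`; file and directory names are placeholder assumptions, and argument names should be double-checked against slicing.md:

```python
from sahi.slicing import slice_image, slice_coco

# Slice a single large image into overlapping tiles written under output_dir.
slice_image_result = slice_image(
    image="demo_data/small-vehicles1.jpeg",  # placeholder image path
    output_file_name="small-vehicles1",
    output_dir="sliced_images/",
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Slice a full COCO dataset: images are tiled and annotations are remapped
# into per-tile coordinates.
coco_dict, coco_path = slice_coco(
    coco_annotation_file_path="annotations.json",  # placeholder path
    image_dir="images/",
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```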

Error Analysis Plots & Evaluation

Find detailed info at Error Analysis Plots & Evaluation.

Interactive Visualization & Inspection

Find detailed info at Interactive Result Visualization and Inspection.
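
For reference alongside that guide, below is a hedged sketch of pushing a sahi prediction result into FiftyOne for inspection. The checkpoint and image paths are placeholder assumptions, and `to_fiftyone_detections()` requires fiftyone to be installed:

```python
import fiftyone as fo

from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

image_path = "demo_data/small-vehicles1.jpeg"  # placeholder image path
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5", model_path="yolov5s6.pt", confidence_threshold=0.4, device="cpu"
)
result = get_sliced_prediction(image_path, detection_model, slice_height=512, slice_width=512)

# Build a one-sample FiftyOne dataset carrying the sahi detections.
sample = fo.Sample(filepath=image_path)
sample["sahi_predictions"] = fo.Detections(detections=result.to_fiftyone_detections())
dataset = fo.Dataset(name="sahi-demo")
dataset.add_sample(sample)

session = fo.launch_app(dataset)  # interactive visualization in the browser
session.wait()
```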

Other utilities

Find detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at coco.md.
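
To complement coco.md, a hedged sketch of the Python-side COCO utilities follows; file names are placeholder assumptions, and attribute and argument names should be verified against coco.md:

```python
from sahi.utils.coco import Coco, export_coco_as_yolov5

# Load a COCO dataset from an annotation file plus its image directory.
coco = Coco.from_coco_dict_or_path("annotations.json", image_dir="images/")
print(coco.stats)  # per-category image/annotation statistics

# Split into train/val subsets and export them in yolov5 format.
split = coco.split_coco_as_train_val(train_split_rate=0.85)
export_coco_as_yolov5(
    output_dir="yolov5_dataset/",
    train_coco=split["train_coco"],
    val_coco=split["val_coco"],
)
```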

Find detailed info on MOT utilities (ground truth dataset creation, exporting tracker metrics in mot challenge format) at mot.md.
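
Similarly, here is a rough, hedged sketch of the MOT ground-truth utilities; the class names, argument names, and export call are recalled assumptions, so verify them against mot.md before relying on this:

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

# Build a MOT-challenge-style ground truth sequence frame by frame.
mot_video = MotVideo(name="sequence_name")

mot_frame = MotFrame()
mot_frame.add_annotation(MotAnnotation(bbox=[10, 20, 50, 40]))   # [x_min, y_min, width, height]
mot_frame.add_annotation(MotAnnotation(bbox=[100, 80, 30, 30]))
mot_video.add_frame(mot_frame)

# Export the accumulated frames in MOT challenge format.
mot_video.export(export_dir="mot_gt", type="gt")
```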

Citation

If you use this package in your work, please cite it as:

@article{akyon2022sahi,
  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
  journal={2022 IEEE International Conference on Image Processing (ICIP)},
  doi={10.1109/ICIP46576.2022.9897990},
  pages={966-970},
  year={2022}
}
@software{obss2021sahi,
  author       = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},
  title        = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},
  month        = nov,
  year         = 2021,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.5718950},
  url          = {https://doi.org/10.5281/zenodo.5718950}
}

Contributing

The sahi library currently supports all YOLOv5 models, MMDetection models, Detectron2 models, and HuggingFace object detection models. Moreover, it is easy to add support for new frameworks.

All you need to do is create a new class in model.py that implements the DetectionModel class, as sketched below. You can take the MMDetection wrapper or the YOLOv5 wrapper as a reference.
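
As a rough orientation only, the skeleton below sketches what such a wrapper tends to look like; the method names and the import path are assumptions, so mirror the actual DetectionModel base class and the existing MMDetection/YOLOv5 wrappers rather than this sketch:

```python
from sahi.model import DetectionModel  # import path may differ across sahi versions


class MyFrameworkDetectionModel(DetectionModel):
    """Hypothetical wrapper for a new detection framework."""

    def load_model(self):
        # Load the framework's model from self.model_path onto self.device
        # and keep it on self.model for later calls.
        ...

    def perform_inference(self, image):
        # Run the framework's forward pass on a numpy image and store the
        # raw output on the instance for conversion below.
        ...

    def _create_object_prediction_list_from_original_predictions(self, *args, **kwargs):
        # Convert the stored raw output into sahi ObjectPrediction objects
        # (boxes, scores, category ids), applying any slice shift amounts.
        ...
```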

Before opening a PR:

  • Install required development packages:
pip install -e ."[dev]"
  • Reformat with black and isort:
python -m scripts.run_code_style format

Contributors

About

License: MIT

