Vision Compare is a benchmark suite for object detection models. It lets researchers test detectors on the same data, with the same metrics, and on the same hardware, so models don't have to be compared by their reported scores, which can differ significantly depending on the test setup.
The benchmark uses Python class inheritance to build an abstraction over object detectors, which makes it easy to add new models as pluggable modules. Any detector can be implemented with `Detector` as its superclass, so standard operations can be performed on all models without going through their implementation-specific APIs. This makes the benchmark very robust.
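For illustration, here is a minimal sketch of what such a subclass might look like. The class and method names below are assumptions made for the example; see `models_/detector.py` for the actual interface.

```python
# Hypothetical sketch of the plugin pattern described above; the real
# Detector base class in models_/detector.py may define a different interface.
from typing import List, Tuple

import numpy as np


class Detector:
    """Common interface the benchmark calls on every model."""

    def load_model(self) -> None:
        raise NotImplementedError

    def detect_image(self, image: np.ndarray) -> List[Tuple[List[float], float, int]]:
        """Return a list of (box, score, class_id) predictions for one image."""
        raise NotImplementedError


class MyDetector(Detector):
    """A new model only needs to implement the shared interface."""

    def load_model(self) -> None:
        # Load the weights (e.g. from model_data/) using the model's own API.
        pass

    def detect_image(self, image: np.ndarray) -> List[Tuple[List[float], float, int]]:
        # Run implementation-specific inference, then convert the raw output
        # to the common (box, score, class_id) format.
        return []
```

Because the benchmark only talks to the `Detector` interface, evaluation and timing code works the same way for every model, regardless of the underlying framework.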
- Python 3.7 (developed using Python 3.7.7)
- CUDA 10.0 and cuDNN 7.6 (only if you plan to use your GPU)
- Clone the project with all its submodules (`git clone https://github.com/geiszla/vision-compare.git --recurse-submodules`)
- Create a Python virtual environment (e.g. `conda create -n vision-compare python=3.7.7` or `virtualenv env`)
- Activate the environment (e.g. `conda activate vision-compare` or `source ./env/bin/activate`)
- Change into the project directory
- Install the required dependencies
  - Using Poetry (recommended)
    - Deployment: `poetry install --no-dev && pip install tensorflow==1.14.0`
    - Development: `poetry install`
  - Using Pip (only for deployment; can result in errors)
    - Deploying on a Raspberry Pi: `pip install -r requirements-pi.txt && pip install tensorflow==1.14.0`
    - Deploying elsewhere: `pip install -r requirements.txt`
- If you want to use a USB AI accelerator
  - Install the Edge TPU runtime
  - Install `tflite_runtime`
    - Raspberry Pi: `pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl`
    - Linux: `pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_x86_64.whl`
    - Windows: `pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-win_amd64.whl`
    - MacOS: `pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-macosx_10_14_x86_64.whl`
- Create a `model_data` directory and place the weight files for the desired models there. For the default models, you can download them here:
  - YOLOv3 320 and YOLOv3 tiny (convert to a Keras model using `python lib/keras_yolo3_2/convert.py model_data/yolov3[-tiny].cfg model_data/yolov3[-tiny].weights model_data/yolov3[-tiny].h5`)
  - ResNet 50 (rename to `retinanet.h5`)
  - MobileNet v2 Lite for SSD 300 (rename to `ssd.hdf5`)
  - MobileNet v1 SSD (rename to `ssdv1.tflite`)
  - MobileNet v1 and v2 SSD for Edge TPU (rename to `ssdv1_edgetpu.tflite` and `ssdv2_edgetpu.tflite` respectively)
  - SqueezeDet: you don't need to download this, as it comes with the repo
- Download the VOC training/validation data from their website
- Extract the `Annotations` and `JPEGImages` directories into the project's `data` directory (see the layout sketch below)
- Run the benchmark script (e.g. `python src/benchmark.py`)
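After extraction, the `data` directory should look roughly like this (assuming the standard VOC archive layout):

```
data/
├── Annotations/   # VOC XML annotation files
└── JPEGImages/    # the corresponding images
```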
Note that most of the default models are trained on COCO, so validation on it is redundant. If you still want to use this dataset, you need to modify the `data_generator` in `models_/detector.py` to load it instead of the VOC samples (you can also use the `read_coco_annotations` function in `utilities.py` to read the downloaded data into the correct format).
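As a rough illustration, the change could look something like the sketch below. The signatures of `data_generator` and `read_coco_annotations` used here are assumptions, so adapt them to the actual definitions in the repo.

```python
# Hypothetical sketch of swapping the VOC loader for COCO; the real
# data_generator in models_/detector.py may have a different signature.
from utilities import read_coco_annotations


def data_generator(batch_size=32):
    # Read (image, ground-truth boxes) samples from the downloaded COCO
    # annotations instead of the VOC Annotations/JPEGImages directories.
    samples = read_coco_annotations('data/COCO/annotations')

    # Yield fixed-size batches in the same format the VOC loader produced.
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]
```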
- Install `pycocotools` using `pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI`
- Download the COCO 2017 Train/Val annotations from their website and place them into `data/COCO/annotations` (create the directory if it doesn't exist)
- Run `python src/download_coco.py` to download evaluation images and their annotations from the COCO dataset (by default, only 500 images and their annotations are downloaded; you can change this by modifying `IMAGE_COUNT` in the script, as shown below)
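For example, to download 1,000 images instead of the default 500, change the constant in `src/download_coco.py` (hypothetical excerpt; the exact surrounding code may differ):

```python
# In src/download_coco.py: the number of evaluation images (and their
# annotations) to download from COCO; the default is 500.
IMAGE_COUNT = 1000
```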
If you are deploying this project on Linux (especially on a Raspberry Pi), you may also need to install a few additional system packages:

`sudo apt install libatlas-base-dev libjasper-dev libqtgui4 python3-pyqt5 libqt4-test libhdf5-dev`
- Activate the environment (e.g. `conda activate vision-compare` or `source ./env/bin/activate`; see the instructions for creating an environment and installing dependencies above)
- Run the scripts from the root of the project directory (e.g. `python src/benchmark.py`)
Will be added