tensorflow-yolov4

NOTICE: This is a fork of https://github.com/hunglc007/tensorflow-yolov4-tflite that adds support for converting YOLO .weights to TensorFlow's .pb or tf.keras's .h5 format.

YOLOv4 implemented in TensorFlow 2.0. Converts YOLO v4 .weights to .pb and .tflite format for TensorFlow and TensorFlow Lite.

Download yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT

Prerequisites

  • TensorFlow 2.1.0
  • tensorflow_addons 0.9.1 (required for the mish activation)
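
tensorflow_addons provides the mish activation that the YOLOv4 backbone (CSPDarknet53) relies on. A minimal sanity check that the dependency is installed and working (values are illustrative):

# quick check that tensorflow_addons' mish activation is available;
# mish(x) = x * tanh(softplus(x)) is used in the CSPDarknet53 backbone
import tensorflow as tf
import tensorflow_addons as tfa

x = tf.constant([-1.0, 0.0, 1.0])
print(tfa.activations.mish(x))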

Performance

Demo

# yolov4
python detect.py --weights ./data/yolov4.weights --framework tf --size 608 --image ./data/kite.jpg

# yolov4 tflite
python detect.py --weights ./data/yolov4-int8.tflite --framework tflite --size 416 --image ./data/kite.jpg
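
To run a converted .tflite model outside of detect.py, the standard tf.lite.Interpreter API can be used directly. A minimal sketch (assumes the ./data/yolov4-int8.tflite produced by the conversion step below; YOLO-specific preprocessing and box decoding are omitted):

# minimal TFLite inference sketch; preprocessing and box decoding omitted
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./data/yolov4-int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# dummy input with the model's expected shape; replace with a real
# preprocessed image (RGB, resized to the network size, scaled to [0, 1])
dummy = np.random.random_sample(input_details[0]["shape"]).astype(
    input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
outputs = [interpreter.get_tensor(o["index"]) for o in output_details]
print([o.shape for o in outputs])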

Output

YOLOv4 original weights

YOLOv4 tflite int8

Convert to .h5

# yolov4
python convert.py --weights ./data/yolov4.weights --output ./data/yolov4.h5
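
The resulting .h5 file is a plain Keras model, so it can be loaded back with tf.keras for inspection or inference. A small sketch (passing mish via custom_objects in case the saved graph references it by name):

# sketch: load the converted Keras model and print its architecture
import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.models.load_model(
    "./data/yolov4.h5",
    custom_objects={"mish": tfa.activations.mish},  # precaution for the mish activation
    compile=False,
)
model.summary()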

Convert to .pb

# yolov4
python convert.py --weights ./data/yolov4.weights --output ./data/yolov4-pb
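
Assuming convert.py writes the --output path as a TensorFlow SavedModel directory (the usual container for .pb graphs in TF 2.x), the exported model can be loaded back like this:

# sketch: load the exported model, assuming ./data/yolov4-pb is a SavedModel directory
import tensorflow as tf

model = tf.saved_model.load("./data/yolov4-pb")
print(list(model.signatures.keys()))  # inspect the available serving signatures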

Convert to .tflite

# yolov4
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4.tflite

# yolov4 quantize int8
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4-int8.tflite --quantize_mode int8

# yolov4 quantize float16
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4-fp16.tflite --quantize_mode float16

# yolov4 quantize int8 full (all functions converted to int8)
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4-full-int8.tflite --quantize_mode full_int8 --dataset ./coco_dataset/coco/val207.txt
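
For reference, post-training quantization of this kind is built on the standard tf.lite.TFLiteConverter API. A rough sketch of the float16 case (the actual convert_tflite.py may differ in detail, e.g. it may build the converter from a Keras model instead of a SavedModel):

# rough sketch of float16 post-training quantization with TFLiteConverter;
# not necessarily identical to what convert_tflite.py does internally
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./data/yolov4-pb")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("./data/yolov4-fp16.tflite", "wb") as f:
    f.write(tflite_model)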

Evaluate on COCO 2017 Dataset

# run ./scripts/get_coco_dataset_2017.sh to download the COCO 2017 Dataset
# preprocess coco dataset
cd data
mkdir dataset
cd ..
cd scripts
python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl
python coco_annotation.py --coco_path ./coco 
cd ..

# evaluate yolov4 model
python evaluate.py --weights ./data/yolov4.weights
cd mAP/extra
python remove_space.py
cd ..
python main.py --output results_yolov4_tf
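
If you prefer the official COCO metrics over the repo's mAP scripts, pycocotools can compute mAP directly from a detections file in COCO result format (shown for reference only; results.json is a hypothetical file you would have to write yourself):

# independent of the repo's mAP scripts: official COCO mAP via pycocotools;
# "results.json" is a hypothetical detections file in COCO result format
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("./coco/annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("results.json")
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[0.50:0.95], AP@0.50 (mAP50), etc.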

mAP50 on COCO 2017 Dataset

| Detection | 512x512 | 416x416 | 320x320 |
|-----------|---------|---------|---------|
| YoloV3    | 55.43   |         |         |
| YoloV4    | 61.96   | 57.33   |         |

Benchmark

python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights
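
The FPS numbers below boil down to timing repeated forward passes at a fixed input size. A minimal sketch of that measurement (not the exact benchmarks.py logic; it reuses the converted .h5 model from above):

# minimal FPS measurement sketch; not the exact benchmarks.py logic
import time
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.models.load_model(
    "./data/yolov4.h5",
    custom_objects={"mish": tfa.activations.mish},
    compile=False,
)
image = tf.constant(np.random.random_sample((1, 416, 416, 3)).astype(np.float32))

model(image, training=False)  # warm-up run
runs = 50
start = time.time()
for _ in range(runs):
    model(image, training=False)
print("FPS:", runs / (time.time() - start))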

Tesla P100

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 40.6    | 49.4    | 61.3    |
| YoloV4 FPS | 33.4    | 41.7    | 50.0    |

Tesla K80

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 10.8    | 12.9    | 17.6    |
| YoloV4 FPS | 9.6     | 11.7    | 16.0    |

Tesla T4

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 27.6    | 32.3    | 45.1    |
| YoloV4 FPS | 24.0    | 30.3    | 40.1    |

Tesla P4

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 20.2    | 24.2    | 31.2    |
| YoloV4 FPS | 16.2    | 20.2    | 26.5    |

Macbook Pro 15 (2.3GHz i7)

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS |         |         |         |
| YoloV4 FPS |         |         |         |

Training your own model

# Prepare your dataset
# If you want to train from scratch, set FISRT_STAGE_EPOCHS=0 in config.py
# Run script:
python train.py

# Transfer learning: 
python train.py --weights ./data/yolov4.weights

TODO

  • YOLOv4 tflite on Android
  • YOLOv4 tflite on iOS
  • Training code
  • Update scale xy
  • CIoU
  • Mosaic data augmentation
  • Mish activation
  • yolov4 tflite version
  • yolov4 int8 tflite version for mobile

References

  • YOLOv4: Optimal Speed and Accuracy of Object Detection (https://arxiv.org/abs/2004.10934)
  • darknet

This project is inspired by previous fantastic YOLOv3 implementations.


License: MIT License

