
YOLAF

You Only Look At Face

This is a face detection demo.

No paper. No SOTA model.

I provide two models: TinyYOLAF (anchor-based) and CenterYOLAF (anchor-free).

Both are fast (50-60 FPS on a GTX 1060 Mobile 3G GPU) and effective.

AP on the WiderFace val split:

Model        Size  Easy   Medium  Hard
TinyYOLAF    640   0.784  0.827   0.771
CenterYOLAF  640   0.889  0.850   0.720

TinyYOLAF

TinyYOLAF is very simple. Its backbone is darknet_tiny, which I designed myself.


Since it is an anchor-based method, I design the anchor boxes with the k-means clustering used in YOLOv3; you can open data/config.py to check them. A sketch of this clustering is shown below.
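
Below is a minimal sketch of that k-means step, assuming the ground-truth boxes are given as an (N, 2) array of (width, height) pairs at the training resolution. The function names and the IoU-based distance follow YOLOv3's recipe; this is not code copied from this repo.

import numpy as np

def wh_iou(boxes, anchors):
    # IoU between (w, h) pairs, treating every box as if it shares one corner.
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    area_b = boxes[:, 0] * boxes[:, 1]
    area_a = anchors[:, 0] * anchors[:, 1]
    return inter / (area_b[:, None] + area_a[None, :] - inter)

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor it overlaps most (distance = 1 - IoU).
        assign = wh_iou(boxes, anchors).argmax(axis=1)
        new_anchors = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                                else anchors[i] for i in range(k)])
        if np.allclose(new_anchors, anchors):
            break
        anchors = new_anchors
    return anchors[np.argsort(anchors.prod(axis=1))]  # sorted by area

The resulting (w, h) pairs are the kind of anchor sizes that get written into data/config.py.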

CenterYOLAF

CenterYOLAF is also very simple. Following CenterFace and CenterNet, I use ResNet-18 as the backbone and several deconvolution layers to produce a heatmap. A sketch of this structure is shown below.
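
The following is a minimal sketch of that kind of network, using torchvision's resnet18. The channel widths, number of deconvolution layers, and head names are my assumptions for illustration, not the exact layers in this repo.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class CenterFaceLikeNet(nn.Module):
    def __init__(self):
        super().__init__()
        net = resnet18(pretrained=True)
        # Keep everything up to the last residual stage (stride 32, 512 channels).
        self.backbone = nn.Sequential(*list(net.children())[:-2])
        # Three deconv layers bring the feature map from stride 32 back to stride 4.
        self.deconvs = nn.Sequential(
            self._deconv(512, 256),
            self._deconv(256, 128),
            self._deconv(128, 64),
        )
        self.cls_head = nn.Conv2d(64, 1, kernel_size=1)  # face center heatmap
        self.off_head = nn.Conv2d(64, 2, kernel_size=1)  # center offsets in [0, 1]
        self.wh_head = nn.Conv2d(64, 2, kernel_size=1)   # box width / height

    @staticmethod
    def _deconv(c_in, c_out):
        return nn.Sequential(
            nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feat = self.deconvs(self.backbone(x))
        return self.cls_head(feat), self.off_head(feat), self.wh_head(feat)

On a 640x640 input the heatmap comes out at 160x160 (stride 4).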

However, there are some differences:

  1. CenterNet copies code from CornerNet to generate a radius for the Gaussian kernel that builds the ground-truth heatmap. I can't see why that radius is suitable for center points, so I use a different method: the width and height of each bounding box are used to compute sigma_w and sigma_h. For more details, open tools.py; a sketch is given after this list.

  2. CenterNet uses an L1 loss to learn the offset, while I use a Sigmoid output with BCE loss, since the offset lies between 0 and 1, just like in YOLOv3. This is also sketched below.
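
For point 1, here is a minimal sketch of a per-box Gaussian target whose sigmas come from the box width and height. The exact scaling lives in tools.py; the 0.05 factor below is a placeholder assumption, and the helper name is mine.

import numpy as np

def gaussian_heatmap(hm_h, hm_w, boxes, stride=4, scale=0.05):
    # boxes: list of (x1, y1, x2, y2) in input-image pixels.
    ys, xs = np.meshgrid(np.arange(hm_h), np.arange(hm_w), indexing='ij')
    heatmap = np.zeros((hm_h, hm_w), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        # Center and size on the stride-4 output grid.
        cx, cy = (x1 + x2) / 2 / stride, (y1 + y2) / 2 / stride
        bw, bh = (x2 - x1) / stride, (y2 - y1) / stride
        # Sigmas derived directly from width and height instead of a CornerNet radius.
        sigma_w, sigma_h = scale * bw, scale * bh
        g = np.exp(-((xs - cx) ** 2 / (2 * sigma_w ** 2 + 1e-8)
                     + (ys - cy) ** 2 / (2 * sigma_h ** 2 + 1e-8)))
        heatmap = np.maximum(heatmap, g)  # keep the max where faces overlap
    return heatmap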
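
For point 2, the offset target is the fractional part of the center position on the output grid, so it always lies in [0, 1) and can be trained with a Sigmoid plus BCE, as in YOLOv3's tx/ty. A sketch, with tensor names chosen for illustration:

import torch
import torch.nn.functional as F

def offset_loss(off_pred, centers, pos_mask, stride=4):
    # off_pred: (B, 2, H, W) raw logits from the offset head.
    # centers:  (B, 2, H, W) ground-truth center coordinates in input pixels,
    #           only meaningful where pos_mask == 1.
    # pos_mask: (B, 1, H, W), 1 at cells that contain a face center.
    target = centers / stride - torch.floor(centers / stride)  # fractional part, in [0, 1)
    loss = F.binary_cross_entropy_with_logits(off_pred, target, reduction='none')
    return (loss * pos_mask).sum() / pos_mask.sum().clamp(min=1)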

WiderFace

I only use the WiderFace dataset, and I evaluate my models on its val split.

Train on WiderFace

python train.py -v TinyYOLAF --cuda -hr --num_workers 8

python train.py -v CenterYOLAF --cuda -hr --num_workers 8

Eval on WiderFace

python widerface_val.py -v TinyYOLAF --trained_model [path_to_model]

python widerface_val.py -v CenterYOLAF --trained_model [path_to_model]

Demo

python demo.py -v [select a model] --cuda --trained_model [path_to_model] --mode [camera/image/video]

CenterYOLAF:

[detection examples on sample images]
