h3ct0r / tf-objdetector

Utilities to use the TensorFlow Object Detection API with a YOLO-like dataset

Utilities for using the TensorFlow Object Detection API (coming from YOLO)

This repository provides a set of scripts to train and deploy an object detector using the TensorFlow Object Detection API.

Follow the step-by-step guide to train, validate, and deploy your own object detector.

Steps:

  1. Create a dataset in YOLO-like format:

    • 'images' --> folder containing the training images
    • 'labels' --> folder containing one .txt annotation file per image, with one bounding box per row as: class x-center y-center width height (see the example below)
    • 'traininglist.txt' --> a txt file where each row refers to an image to be used as a training sample; the images and labels folders must sit in the same directory
    • 'validationlist.txt' --> a txt file where each row refers to an image to be used as a validation sample; the images and labels folders must sit in the same directory
    • 'className.txt' --> a txt file with the class names to be displayed, one per row
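    For illustration, a hypothetical 'labels/img_0001.txt' describing two annotated objects could look like this (values are made up; in YOLO-style formats the coordinates are normalized to the image size):

    0 0.512 0.430 0.210 0.380
    2 0.250 0.660 0.120 0.095

    'traininglist.txt' and 'validationlist.txt' then simply list one image per row, e.g. 'images/img_0001.jpg'.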
  2. Convert both the training and the validation set to TFRecord:

    python yolo_tf_converter.py \
        -t ${IMAGE_LIST} \
        -o ${OUTPUT} \
        -c ${CLASSES}
    • IMAGE_LIST: path to traininglist.txt or validationlist.txt
    • OUTPUT: where the output files will be saved (TFRecord + label map)
    • CLASSES: path to the className.txt file
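    For example, with the layout from step 1 (paths are hypothetical), the converter is run once per split:

    # once for the training split and once for the validation split
    python yolo_tf_converter.py -t dataset/traininglist.txt -o out/train -c dataset/className.txt
    python yolo_tf_converter.py -t dataset/validationlist.txt -o out/validation -c dataset/className.txt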
  3. Create the configuration file for training using the create_config.py script

    python create_config.py \
        -t ${TRAINING} \
        -v ${VALIDATION} \
        -l ${LABELS} \
        -w ${WEIGHTS} \
        -m ${MODEL} \
        -s ${STEP}

    If needed, adjust the parameters in the generated 'model.config'.
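    A hypothetical invocation (file names are made up, and the flag meanings are inferred from the variable names above, so verify them against the script itself):

    python create_config.py \
        -t out/train.record \
        -v out/validation.record \
        -l out/labelmap.pbtxt \
        -w pretrained/model.ckpt \
        -m ssd_mobilenet_v1 \
        -s 200000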

  4. Train the model as long as possible:

    # From the tensorflow/models/research directory
    python object_detection/train.py \
        --logtostderr \
        --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
        --train_dir=${PATH_TO_TRAIN_DIR}
    • PATH_TO_YOUR_PIPELINE_CONFIG: path to the model.config generated at step 3
    • PATH_TO_TRAIN_DIR: where the model will be saved
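    Checkpoints are written to the train directory as training progresses; the checkpoint number needed in step 6 can be read from the file names, e.g.:

    ls ${PATH_TO_TRAIN_DIR}
    # model.ckpt-12345.data-00000-of-00001  model.ckpt-12345.index  model.ckpt-12345.meta  ...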
  5. OPTIONAL - Run evaluation

    # From the tensorflow/models/research directory
    python object_detection/eval.py \
        --logtostderr \
        --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
        --checkpoint_dir=${PATH_TO_TRAIN_DIR} \
        --eval_dir=${PATH_TO_EVAL_DIR}
    • PATH_TO_EVAL_DIR: where the evaluation events will be saved (use TensorBoard to visualize them)
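    For example:

    # point TensorBoard at the eval (or train) directory
    tensorboard --logdir=${PATH_TO_EVAL_DIR}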
  6. Export the trained model as an inference graph (WARNING: this step freezes the weights, so the exported model can only be used for inference, not for further training)

    # From tensorflow/models/research
    python object_detection/export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path ${PIPELINE_CONFIG_PATH} \
        --trained_checkpoint_prefix model.ckpt-${CHECKPOINT_NUMBER} \
        --output_directory output_inference_graph
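    Among other files, the export writes 'frozen_inference_graph.pb' inside 'output_inference_graph'. A minimal sketch of loading it with the TF1-style API:

    import tensorflow as tf

    # load the frozen graph exported above
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile('output_inference_graph/frozen_inference_graph.pb', 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')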
  7. Visual Detection

    python inference_engine.py \
        -g ${GRAPH} \
        -l ${LABEL_MAP} \
        -t ${TARGET} \
        -o ${OUT_FLD} \
        -v
    • GRAPH: path to the frozen inference graph produced at step 6
    • LABEL_MAP: path to labelmap.pbtxt
    • TARGET: path to an image to test, or to a .txt file listing the images to test (one per row)
    • OUT_FLD: folder where the predictions will be saved, one '.txt' file for each image with one detection per row encoded as: %class %X_center %Y_center %width %height %confidence
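    Given that encoding, a prediction file can be parsed with a few lines of Python (a sketch; the path is hypothetical):

    # one detection per row: class x_center y_center width height confidence
    detections = []
    with open('out/img_0001.txt') as f:
        for line in f:
            if not line.strip():
                continue
            cls, xc, yc, w, h, conf = line.split()
            detections.append((cls, float(xc), float(yc),
                               float(w), float(h), float(conf)))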
  8. OPTIONAL - Live detection from a webcam (requires OpenCV)

    python webcam_detection.py \
        -g ${GRAPH} \
        -l ${LABEL_MAP} \
        -c ${CAM_ID}
    • GRAPH: path to the frozen inference graph produced at step 6
    • LABEL_MAP: path to labelmap.pbtxt
    • CAM_ID: ID of the camera as seen by OpenCV
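    CAM_ID is the index passed to OpenCV's VideoCapture; a quick sketch to check that a camera id is readable:

    import cv2

    cam_id = 0  # 0 is usually the built-in webcam
    cap = cv2.VideoCapture(cam_id)
    ok, _frame = cap.read()  # ok is False if the camera cannot be read
    print('camera %d readable: %s' % (cam_id, ok))
    cap.release()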

License: GNU General Public License v3.0

