KleinYuan / tf-segmentation

Real-time, production-ready semantic segmentation inference code based on DeepLab-ResNet / PSPNet and TensorFlow

Introduction

Production-ready, real-time segmentation inference code based on DeepLab-ResNet and PSPNet.

Demo Segmentation

Note

This repo is basically a cleaned, unified, re-organized version of code from multiple open-source projects. I tried to follow good practices where possible, but some awkward code borrowed from the original repos still needs significant refactoring, and I haven't had the bandwidth for that yet. Ultimately, I plan to rewrite all the models/network layers so that they are easier to understand, modify, and deploy at every level, with a level of cleanliness similar to this one.

Honestly, if the pre-trained models could be frozen properly, we could use a much simpler architecture for inference. Unfortunately, my previous attempts to freeze the graph and then run the compute by fetching tensors by name did not produce reasonable predictions. I may spend more time on that issue later so that we can remove the awkward network-construction code.
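For reference, the frozen-graph inference path described above looks roughly like the sketch below; the .pb path and the tensor names ('input:0', 'predictions:0') are placeholders, not the repo's actual node names.

import numpy as np
import tensorflow as tf

# Load a frozen graph definition (path is a placeholder).
with tf.gfile.GFile('model/frozen_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Fetch tensors by name and run the compute (names are hypothetical).
    image_in = graph.get_tensor_by_name('input:0')
    pred_out = graph.get_tensor_by_name('predictions:0')
    dummy_batch = np.zeros((1, 473, 473, 3), dtype=np.float32)  # placeholder input
    pred = sess.run(pred_out, feed_dict={image_in: dummy_batch})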

This repo is all about inference, serving, and the API, rather than training or research.

Therefore, you can expect to do the following with this repo:

  • Real-time segmentation with multiple models against a camera feed

  • A segmentation API wrapped in a Docker container, ready to deploy

  • Performance evaluation of multiple open-source pre-trained segmentation models on a single image

Dependencies

Simply using Anaconda may save you a year!

  • Python 2.X

  • TensorFlow > 1.0.0

  • OpenCV

  • No GPU required

Run Demo

  1. Download the pre-trained models first
  2. Put the model.* files under the /model folder, making sure the folder name matches the model name:
model
  |-- deeplab
      |-- checkpoint
      |-- model.ckpt-100000.data-00000-of-00001
      |-- model.ckpt-100000.index
      |-- model.ckpt-100000.meta
  |-- pspnet50
      |-- checkpoint
      |-- model.ckpt-0.data-00000-of-00001
      |-- model.ckpt-0.index
      |-- model.ckpt-0.meta
  |-- pspnet101
      |-- checkpoint
      |-- model.ckpt-0.data-00000-of-00001
      |-- model.ckpt-0.index
      |-- model.ckpt-0.meta

  3. Run:
make demo

Note: you can change the model selection line in the demo to use DeepLab, PSPNet101, or PSPNet50 (see the sketch below).
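To give a feel for what the real-time camera demo roughly does, here is a hedged sketch; the actual script, tensor names, and pre/post-processing in this repo may differ.

import cv2
import numpy as np
import tensorflow as tf

MODEL = 'pspnet101'  # switch to 'deeplab' or 'pspnet50' here
CKPT = tf.train.latest_checkpoint('model/' + MODEL)

graph = tf.Graph()
with graph.as_default():
    saver = tf.train.import_meta_graph(CKPT + '.meta')
    # Hypothetical tensor names; the real graph defines its own.
    image_in = graph.get_tensor_by_name('input:0')
    pred_out = graph.get_tensor_by_name('predictions:0')

with tf.Session(graph=graph) as sess:
    saver.restore(sess, CKPT)
    cap = cv2.VideoCapture(0)  # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pred = sess.run(pred_out, feed_dict={image_in: np.expand_dims(frame, 0)})
        cv2.imshow('segmentation', pred[0].astype(np.uint8))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()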

Freeze Model [Optional]

# Navigate to tf-segmentation
bash freeze.sh
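As a rough illustration, freezing a TF 1.x checkpoint into a protobuf typically looks like the sketch below (not necessarily what freeze.sh does); the checkpoint path comes from the directory layout above, and the output node name is a made-up placeholder.

import tensorflow as tf
from tensorflow.python.framework import graph_util

CKPT = 'model/deeplab/model.ckpt-100000'  # from the /model layout above
OUTPUT_NODES = ['predictions']            # hypothetical output node name

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the weights.
    saver = tf.train.import_meta_graph(CKPT + '.meta')
    saver.restore(sess, CKPT)
    # Bake the variables into constants and serialize to a .pb file.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, OUTPUT_NODES)
    with tf.gfile.GFile('model/frozen_deeplab.pb', 'wb') as f:
        f.write(frozen.SerializeToString())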

References

Paper:

  1. Chen, Liang-Chieh, et al. "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs." arXiv preprint arXiv:1606.00915 (2016).

  2. Zhao, Hengshuang, et al. "Pyramid scene parsing network." IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2017.

Borrowed Code

  1. model.py/network.py are borrowed from DrSleep's implementation. The layout does not seem ideal to me and I may re-implement them later, but for now I will stick with it.

  2. The pre-trained weights can be found in Indoor-segmentation

  3. The PSP network code is borrowed from PSP-tensorflow, with some refinements

Docker

make build run

API

URL: http://0.0.0.0:8080/segmentation

HEADERS: {'Content-Type': 'application/json'}

BODY: {'url': ''}
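A request might look like the following sketch; the image URL is a placeholder and the exact response schema depends on the server code.

import requests

resp = requests.post(
    'http://0.0.0.0:8080/segmentation',
    headers={'Content-Type': 'application/json'},
    json={'url': 'https://example.com/street.jpg'},  # placeholder image URL
)
print(resp.status_code)
print(resp.json())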

Future Work

  • Freeze the model as a Google protobuf file

  • Wrap this up with Flask as a RESTful API

  • Wrap this up with Docker as a micro-service


License: MIT License

