OniroAI / Universal-segmentation-baseline-Kaggle-Airbus-Ship-Detection

Universal segmentation baseline. Kaggle Airbus Ship Detection Challenge - bronze medal solution.

This repository contains a complete solution for the challenge: from dataset creation to training and generating a submission file. Moreover, it can be used as a universal, high-quality baseline solution for any segmentation task.

Content

  • Quick setup and start
  • Targets for segmentation
  • Augmentations for training
  • Training
  • Prediction and postprocessing

Quick setup and start

Preparations

  • Clone the repo, then build a Docker image using the provided script and Dockerfile:

    git clone 
    cd airbus-ship-detection/docker
    ./build.sh
    
  • The final folder structure should be:

    airbus-ship-detection
    ├── data
    ├── docker
    ├── git_images
    ├── notebooks
    ├── src
    └── README.md
    

Run

  • The container can be started with a shell script:

    cd airbus-ship-detection/docker
    ./run.sh
    

Targets for segmentation

Using src/target.py one can create several different training targets besides the usual masks to improve the resulting quality. All target types are shown in the following image.

[image: target types]

Therefore one can use:

  1. mask as the main target
  2. border to learn to separate adjacent objects
  3. background as an auxiliary class for training
  4. distance transform to learn to find object centers
  5. inverse distance transform as an auxiliary class to find object centers

Using this repo there is no need to create and store these targets separately (except the usual masks); they are generated on the fly by passing the instance segmentation masks to the transforms. An instance segmentation mask here is a mask where the background is 0 and every object is encoded by its own number from 1 to 255 (thus limiting the maximum number of objects in one image to 255).
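For illustration, here is a minimal sketch of how such targets can be derived from an instance mask with numpy and scipy; the function name is hypothetical and this is not the actual src/target.py implementation:

    import numpy as np
    from scipy import ndimage

    def targets_from_instance_mask(inst):
        """Derive targets from an instance mask (0 = background, objects
        encoded as 1..255). A sketch only, not src/target.py."""
        mask = (inst > 0).astype(np.float32)
        background = 1.0 - mask
        border = np.zeros_like(mask)
        distance = np.zeros_like(mask)
        for obj_id in np.unique(inst)[1:]:                    # skip background (0)
            obj = inst == obj_id
            # object contour: dilation minus erosion
            contour = ndimage.binary_dilation(obj) & ~ndimage.binary_erosion(obj)
            border = np.maximum(border, contour.astype(np.float32))
            # per-object distance transform, normalized so the center is 1
            dist = ndimage.distance_transform_edt(obj)
            if dist.max() > 0:
                distance = np.maximum(distance, dist / dist.max())
        inv_distance = mask - distance                        # complement inside objects
        return mask, border, background, distance, inv_distance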

For the challenge, all ship masks were run-length encoded (RLE) in a single CSV file. The process of creating the instance segmentation masks is presented in make_targets.ipynb.
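For reference, a minimal sketch of decoding the challenge RLE into such an instance mask (in the Airbus data the CSV has ImageId and EncodedPixels columns, pixels are numbered top to bottom in column-major order, and images are 768×768):

    import numpy as np
    import pandas as pd

    def rle_to_instance_mask(rle_list, shape=(768, 768)):
        """Decode a list of RLE strings (one ship each) into an instance
        mask: background 0, ships numbered from 1."""
        flat = np.zeros(shape[0] * shape[1], dtype=np.uint8)
        for obj_id, rle in enumerate(rle_list, start=1):
            runs = np.asarray(rle.split(), dtype=int)
            starts, lengths = runs[0::2] - 1, runs[1::2]      # RLE starts are 1-based
            for start, length in zip(starts, lengths):
                flat[start:start + length] = obj_id
        return flat.reshape(shape).T                          # column-major pixel order

    # Usage (the CSV file name may differ between dataset versions):
    df = pd.read_csv('data/train_ship_segmentations.csv')
    image_id = df.ImageId.iloc[0]
    rles = df.loc[df.ImageId == image_id, 'EncodedPixels'].dropna().tolist()
    inst = rle_to_instance_mask(rles)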

Augmentations for training

There are several essential augmentations in transforms.py, which can be seen in the following image.

[image: augmentations]

All transforms are applied on the fly during batch preparation, so there is no need to store anything besides the original images.
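The pattern looks roughly like this, assuming each transform receives the image together with its mask so both stay aligned (a sketch, not the actual transforms.py API):

    import random
    import numpy as np

    class RandomFlip:
        """Apply the same random horizontal/vertical flip to image and mask."""
        def __call__(self, image, mask):
            if random.random() < 0.5:
                image, mask = image[:, ::-1], mask[:, ::-1]   # horizontal flip
            if random.random() < 0.5:
                image, mask = image[::-1, :], mask[::-1, :]   # vertical flip
            return np.ascontiguousarray(image), np.ascontiguousarray(mask)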

Training

The training process is implemented in the train_linknet.ipynb notebook. Here we use the Argus framework for PyTorch, which makes the process easier and more compact. For training you only need to pass a model and a loss function and follow the procedure described in the example.

As a model we used a U-Net with a ResNet34 encoder, but you can set any available ResNet (18, 34, 50, 101, 152) and choose whether or not to use pretrained layers from torchvision. The full architecture can be found in src/models/unet_flex.py.
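For readers unfamiliar with Argus, the training step it wraps boils down to a plain PyTorch loop like the following; the UNetFlex constructor and the data loader here are hypothetical, so check src/models/unet_flex.py and the notebook for the real interfaces:

    import torch
    import torch.nn as nn

    # Hypothetical constructor; see src/models/unet_flex.py for the real API.
    model = UNetFlex(encoder='resnet34', pretrained=True).cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.BCEWithLogitsLoss()

    model.train()
    for epoch in range(150):
        for images, targets in train_loader:       # batches with on-the-fly transforms
            images, targets = images.cuda(), targets.cuda()
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()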

If you would like to train an ensemble of models with the dataset split into folds, you can use the src/train.py script. Besides the paths to images and targets and the directory to save the model (as for usual training), in this case you should also provide a path to a dictionary with image names (without extensions) as keys and fold indices as values; a sketch of building such a dictionary is shown below.
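A sketch of building such a dictionary with scikit-learn's KFold (the paths and the JSON format are assumptions; check src/train.py for the format it actually expects):

    import json
    from pathlib import Path
    from sklearn.model_selection import KFold

    names = sorted(p.stem for p in Path('data/train_images').glob('*.jpg'))
    folds = {}
    kfold = KFold(n_splits=5, shuffle=True, random_state=42)
    for fold_idx, (_, val_idx) in enumerate(kfold.split(names)):
        for i in val_idx:
            folds[names[i]] = fold_idx             # image name (no extension) -> fold

    with open('data/folds.json', 'w') as f:
        json.dump(folds, f)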

The model used for the competition can be found here. It was trained on all non-empty train images, split into 5 batches, for 150 epochs with a starting lr=1e-4.

Prediction and postprocessing

This part of the repo is relevant only for the competition, but some of the postprocessing functions can be used to treat any segmentation results. The whole process is described in notebooks/prediction.ipynb. There is a function "waters" which combines all the postprocessing techniques used: removing false positive predictions (with an area less than a threshold), separating boundaries, and fitting every ship with a rectangle. Examples of trained model predictions and the results of postprocessing are presented in the following image.

[image: predictions]
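As a simplified illustration of that kind of postprocessing (not the actual "waters" implementation, and with the watershed-based boundary separation omitted): drop components below an area threshold, then replace each remaining ship with its minimum-area rectangle:

    import cv2
    import numpy as np
    from scipy import ndimage

    def postprocess(prob, threshold=0.5, min_area=30):
        """Binarize a probability map, remove small false positives and
        fit every remaining component with a min-area rectangle."""
        labeled, n_objects = ndimage.label(prob > threshold)
        out = np.zeros(prob.shape, dtype=np.uint8)
        for obj_id in range(1, n_objects + 1):
            component = (labeled == obj_id).astype(np.uint8)
            if component.sum() < min_area:
                continue                            # drop likely false positives
            contours, _ = cv2.findContours(component, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            box = cv2.boxPoints(cv2.minAreaRect(contours[0])).astype(np.int32)
            cv2.fillPoly(out, [box], 1)             # fill the fitted rectangle
        return out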
