norlab-ulaval / logpiles_segmentation

Code repository for the paper Instance Segmentation for Autonomous Log Grasping in Forestry Operations

Instance Segmentation for Autonomous Log Grasping in Forestry Operations

Jean-Michel Fortin, Olivier Gamache, Vincent Grondin, François Pomerleau, Philippe Giguère

[arXiv] [BibTeX] [Paper]


Dataset

The TimberSeg 1.0 dataset is publicly available here. It comes with an original and a prescaled version of the images. We recommend using the prescaled version for faster data loading and to avoid CUDA out-of-memory errors.
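
If you want to explore the dataset with Detectron2 outside of the provided scripts, the snippet below is a minimal registration sketch, assuming the annotations follow the COCO format; the dataset name and paths are placeholders.

# Minimal sketch: register the dataset with Detectron2 (COCO-format
# annotations assumed). The name and paths below are placeholders.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "timberseg_train",              # placeholder dataset name
    {},                             # no extra metadata
    "path/to/annotations.json",     # placeholder annotation file
    "path/to/images",               # placeholder image folder
)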

Installation

Requirements

  • Linux or macOS with Python ≥ 3.6
  • If using a GPU, make sure you have at least 20 GB of memory and the CUDA Toolkit installed.
  • We recommend that you first create a virtual environment:
    python3 -m venv venv
    source venv/bin/activate
  • PyTorch ≥ 1.9 and a matching version of torchvision. Install them together from pytorch.org to ensure they are compatible, and make sure to select the correct CUDA version if using a GPU (a quick sanity check follows this list).
  • Then install the project requirements:
    pip install -r requirements.txt
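
Before going further, a quick sanity check such as the one below (not part of the repository) can confirm that PyTorch, torchvision, and the GPU are visible from the environment:

# Environment sanity check: prints the installed PyTorch/torchvision
# versions and whether a CUDA-capable GPU is visible.
import torch
import torchvision

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("CUDA version used by PyTorch:", torch.version.cuda)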

Detectron2 and Mask2Former are copied into this repository because we modified some of their files for the rotated Mask R-CNN.

Compile Detectron2

python -m pip install -e detectron2
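
To confirm the editable install worked, a short check like the following (not part of the repository) can be run from the project root:

# Verify that the locally compiled Detectron2 is importable and report
# its version together with the detected environment.
import detectron2
from detectron2.utils.collect_env import collect_env_info

print("Detectron2:", detectron2.__version__)
print(collect_env_info())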

CUDA kernel for MSDeformAttn (for Mask2Former)

After preparing the required environment, run the following commands to compile the CUDA kernel for MSDeformAttn:

CUDA_HOME must be defined and point to the directory of the installed CUDA toolkit.

cd mask2former/modeling/pixel_decoder/ops
sh make.sh
cd ../../../..
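
A quick way to check that the kernel built correctly is to try importing the compiled extension; the sketch below assumes the upstream Mask2Former layout, in which make.sh builds an extension named MultiScaleDeformableAttention.

# Sanity check for the compiled deformable-attention kernel (assumes the
# upstream Mask2Former naming; adjust the import if this repo differs).
import torch
import MultiScaleDeformableAttention as MSDA

print("MSDeformAttn extension loaded from:", MSDA.__file__)
print("CUDA available for the kernel:", torch.cuda.is_available())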

Usage

This repo contains multiple scripts to reproduce our experiments. Parameters can be changed at the beginning of each file. Start by fetching the TimberSeg 1.0 dataset using the following commands:

sudo chmod u+x fetch_dataset.sh
./fetch_dataset.sh

Also, fetch the weights file for Mask2Former with the following commands:

sudo chmod u+x fetch_weights.sh
./fetch_weights.sh

Model training

Three instance segmentation networks are evaluated in the paper: Mask R-CNN, Rotated Mask R-CNN and Mask2Former. The following scripts let you train and test each of them using our best configuration.

python3 standard_maskrcnn.py
python3 rotated_maskrcnn.py
python3 maskformer2.py

Training outputs will be generated in the ./outputs folder.
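
For reference, these scripts build on the Detectron2 training API; the sketch below shows the general pattern with a stock COCO config, placeholder dataset names, and default hyperparameters rather than the paper's actual settings.

# Rough sketch of the Detectron2 training pattern (placeholder config,
# dataset names, and hyperparameters; not the paper's exact settings).
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("timberseg_train",)   # placeholder dataset name
cfg.DATASETS.TEST = ()                      # evaluation handled separately
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1         # a single "log" class
cfg.OUTPUT_DIR = "./outputs"

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()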

Cross-validation training

To run a cross-validation training, set the desired network architecture and number of folds in the script's parameters, then run:

python3 kfold_train.py
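
Purely as an illustration of the idea (kfold_train.py has its own parameters and logic), folds can be formed by splitting the COCO image ids, for example with scikit-learn:

# Illustrative only: split a COCO-style annotation file into k folds by
# image id. Paths are placeholders; kfold_train.py implements its own logic.
import json
from sklearn.model_selection import KFold

with open("path/to/annotations.json") as f:     # placeholder path
    coco = json.load(f)

image_ids = [img["id"] for img in coco["images"]]
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(kfold.split(image_ids)):
    train_ids = {image_ids[i] for i in train_idx}
    val_ids = {image_ids[i] for i in val_idx}
    print(f"fold {fold}: {len(train_ids)} train images, {len(val_ids)} val images")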

Inference

We provide a demo script for inference on a folder of test images. In the script's parameters, point to the output folder of a previous training run.

python3 inference.py
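
The general inference pattern, using Detectron2's DefaultPredictor, looks roughly like the sketch below; the config, weights path, and image folder are placeholders, and inference.py remains the script to use to reproduce our setup.

# Rough sketch of folder inference with a trained Detectron2 model
# (placeholder config, weights path, and image folder).
import glob
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1                  # a single "log" class
cfg.MODEL.WEIGHTS = "./outputs/model_final.pth"      # placeholder: weights from a previous training
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
predictor = DefaultPredictor(cfg)

for path in glob.glob("test_images/*.png"):          # placeholder folder
    outputs = predictor(cv2.imread(path))            # BGR image, as Detectron2 expects
    print(path, "->", len(outputs["instances"]), "instances detected")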

Citing This Paper

@inproceedings{Fortin2022,
  title = {Instance Segmentation for Autonomous Log Grasping in Forestry Operations},
  author = {Fortin, Jean-Michel and Gamache, Olivier and Grondin, Vincent and Pomerleau, Fran\c{c}ois and Giguère, Philippe},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  publisher = {IEEE},
  year = {2022},
  month = oct,
  doi = {10.1109/IROS47612.2022.9982286},
  url = {http://dx.doi.org/10.1109/IROS47612.2022.9982286}
}


License

Apache License 2.0

