This code is GPLv3 licensed. Please read and understand the terms of the license before using (or reading) the code available in this repository.
A couple of useful links on this topic:
- https://www.gnu.org/licenses/gpl-3.0.en.html
- https://tldrlegal.com/license/gnu-general-public-license-v3-(gpl-3)
If you use this code, please cite the following publication:
```bibtex
@Article{herediaperez2020effects,
  author  = {Heredia Perez, S. A. and Marinho, M. M. and Harada, K. and Mitsuishi, M.},
  title   = {The Effects of Different Levels of Realism on the Training of CNNs with only Synthetic Images for the Semantic Segmentation of Robotic Instruments in a Head Phantom},
  journal = {International Journal of Computer Assisted Radiology and Surgery (IJCARS)},
  year    = {2020},
  doi     = {10.1007/s11548-020-02185-0},
}
```
This installation has been tested on Windows 10 64-bit.
The code was tested on a rig with three NVIDIA RTX 2070 GPUs. The code will automatically assign one model to each available GPU.
Download and install Python 3.7 x64 (tested on Python 3.7.8): https://www.python.org/downloads/windows/
TensorFlow 2 has the following requirements:
- NVIDIA GPU drivers: 418.x or higher (tested on 451.48)
- CUDA Toolkit: CUDA 10.1 (tested on 10.1 update 2)
- cuDNN SDK: 7.6 (tested on 7.6.5.32)
For more information, refer to https://www.tensorflow.org/install/gpu
TensorFlow's API and backward compatibility are unreliable. I no longer plan to keep this code up to date and will migrate to another deep-learning framework.
A virtual environment is recommended. After setting up your virtual environment, run

```
python3 -m pip install -r requirements.txt
```
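Creating the virtual environment itself might look like the sketch below; the exact activation command depends on your OS and shell:

```shell
# Create an isolated environment next to the repository sources
# (run once; re-activate it in every new shell session).
python3 -m venv venv

# Activate it before installing the requirements:
#   Windows (cmd):  venv\Scripts\activate
#   Linux/macOS:    source venv/bin/activate
```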
The data is available at IEEE DataPort: http://dx.doi.org/10.21227/xmc2-1v59
Download and extract data.zip and validation_data.zip.
data.zip contains 10376 synthetically generated images for each renderer, together with the corresponding automatically annotated ground truths.
Folder | Meaning |
---|---|
data/1_flat_renderer/image | Images rendered with the flat renderer |
data/1_flat_renderer/label | Flat renderer ground-truth |
data/2_basic_renderer/image | Images rendered with the basic renderer |
data/2_basic_renderer/label | Basic renderer ground-truth |
data/3_realistic_renderer/image | Images rendered with the realistic renderer |
data/3_realistic_renderer/label | Realistic renderer ground-truth |
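Given this layout, pairing each rendered image with its ground truth amounts to matching filenames across the `image` and `label` folders. The sketch below assumes, as is common, that an image and its label share the same filename; the repository's actual data loader may differ:

```python
from pathlib import Path


def image_label_pairs(renderer_dir):
    """Return (image, label) path pairs for one renderer folder,
    e.g. data/1_flat_renderer. Assumes an image and its ground
    truth share the same filename (an assumption, not confirmed
    against the actual dataset)."""
    image_dir = Path(renderer_dir) / "image"
    label_dir = Path(renderer_dir) / "label"
    pairs = []
    for image_path in sorted(image_dir.iterdir()):
        label_path = label_dir / image_path.name
        if label_path.is_file():  # skip images with no matching label
            pairs.append((image_path, label_path))
    return pairs
```

The same call applied to `validation_data` would pair the physical-setup images with their manually annotated ground truths.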
Folder | Meaning |
---|---|
validation_data/image | Images obtained from the physical SmartArm setup |
validation_data/label | Manually-annotated ground-truth |
To start training, run

```
python3 main.py
```
Output images generated during training, as well as the trained models, will be saved to each renderer's corresponding output folder.
Most parameters relevant for training can be modified in the configuration.yml file.
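As a purely illustrative example, a training configuration in such a file might look like the fragment below. Every key name and value here is hypothetical; consult the configuration.yml shipped with the repository for the real parameter names:

```yaml
# Hypothetical fragment -- the actual configuration.yml defines its own keys.
batch_size: 8            # images per training step
epochs: 100              # passes over the training set
learning_rate: 0.0001    # optimizer step size
input_size: [512, 512]   # network input resolution (height, width)
```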