CoralDX: An image processing framework for extracting coral nubbins from a photo.
This repo walks you through how we prepare, train, and run the CoralDX detector in the cloud using Roboflow and Google Colab.
YOLOv4 is a computer vision model for optimal speed and accuracy of object detection.
Before training the custom detector, we need to prepare a dataset with annotations that tell the model where your target areas are. Here, we used Roboflow's online annotation tool, which requires no download and makes it easy to annotate and save datasets: https://roboflow.com/annotate
The dataset should be as varied as possible. For CoralDX, we used the 40 pictures in the images folder for training. After annotating, Roboflow generates a corresponding .txt file with the coordinates of your selected target areas.
To annotate, use the second square tool in the right-hand toolbar to draw a box around each target area, then group and name every target area.
NOTE: Annotations are CASE SENSITIVE, so label all images used for training a model with the exact same labels.
Annotation sample:
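For reference, each line in a YOLO Darknet .txt annotation file describes one box as a class index followed by the box center coordinates and size, all normalized to the 0-1 range relative to the image dimensions (the numbers below are made-up illustration values):

```
<class_id> <x_center> <y_center> <width> <height>
0 0.512 0.431 0.208 0.175
```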
Assign the images to train and valid datasets, used for training and validating the custom detector, in an 80%:20% ratio.
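Roboflow can do this split for you; if you prefer to split locally, a minimal sketch of an 80/20 split looks like this (the file names are hypothetical):

```python
import random

def split_dataset(image_names, train_frac=0.8, seed=42):
    """Deterministically shuffle, then split into (train, valid) lists."""
    names = sorted(image_names)
    random.Random(seed).shuffle(names)
    cut = int(len(names) * train_frac)
    return names[:cut], names[cut:]

# 40 hypothetical image names -> 32 train / 8 valid
train, valid = split_dataset([f"coral_{i:02d}.jpg" for i in range(40)])
```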
To generate the dataset, in the Preprocessing section we resized the images to 416 x 416, which accelerates training, before downloading the annotated dataset:
For Augmentation, press Continue.
While generating the dataset, your screen will look like this:
After generating the dataset, click on the 'Export' option to export and download your dataset.
In the pop-up dialog box, select 'YOLO Darknet' format and 'Download zip to computer' option.
The downloaded zipped dataset includes all images and their related .txt files, as shown in the images folder:
Before starting, make a copy of this Colab file.
- Enabling GPU within your notebook
- Cloning and Building Darknet
- Download pre-trained YOLOv4 weights
- Define Helper Functions
- Run Your Detections with Darknet and YOLOv4!
- Uploading Local or Google Drive Files to Use
We recommend creating a Google Drive folder called yolov4 and putting everything into Google Drive for use.
The following list shows the files that need to be uploaded to Google Drive:
Copy of YOLOv4.ipynb: copy of this Colab tutorial file
images: images for testing the custom detector
backup: an empty folder to store the weights files
obj.zip: the train folder, renamed to obj and compressed
test.zip: the valid folder, renamed to test and compressed
yolov4-obj.cfg: configuration file
obj.names: group names
obj.data: paths to the dataset files
Put the group names in the obj.names file and change the number of classes for the custom detector. Both files can be edited from the example files using the Text Editor in the cfg section.
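For illustration only (the class names and the Drive path below are placeholders, so substitute your own), obj.names lists one group name per line, and obj.data points the trainer at the dataset files:

```
obj.names:
    coral_nubbin
    color_block

obj.data:
    classes = 2
    train = data/train.txt
    valid = data/test.txt
    names = data/obj.names
    backup = /mydrive/yolov4/backup
```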
generate_train.py: script that generates train.txt, the list of training images needed to train the custom detector
generate_test.py: script that generates test.txt, the list of testing images
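The idea behind these scripts can be sketched as follows: write the path of every image in the unzipped folder into a list file, one per line. The directory and path prefix below are assumptions based on this tutorial's Colab layout; adjust them to yours.

```python
import os

def write_image_list(image_dir, out_file, prefix="data/obj"):
    """List every .jpg in image_dir into out_file, one path per line."""
    images = sorted(f for f in os.listdir(image_dir) if f.lower().endswith(".jpg"))
    with open(out_file, "w") as fh:
        for name in images:
            fh.write(f"{prefix}/{name}\n")
    return len(images)

# generate_test.py does the same with the test folder, e.g.
# write_image_list("data/test", "data/test.txt", prefix="data/test")
```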
classes.txt: group names
Edit the example file using the Text Editor and put the group names in.
- Start training
Download the Tensorflow folder to your local drive. We recommend using the Git Bash shell to run commands and Visual Studio Code as the editor.
- Set up Conda environment
We recommend downloading Anaconda to set up the TensorFlow environment. Then run the following commands in Git Bash to create and activate the GPU or CPU environment.
Tensorflow CPU
conda env create -f conda-cpu.yml
conda activate yolov4-cpu
Tensorflow GPU
conda env create -f conda-gpu.yml
conda activate yolov4-gpu
- Download the 'yolov4-obj_best.weights' file from the backup folder.
- Use the custom trained detector
Copy and paste your custom .weights file into the 'data' folder, and copy and paste your custom .names file into the 'data/classes/' folder.
The only change you need to make within the code for your custom model to work is on line 14 of the 'core/config.py' file. Update the code to point at your custom .names file, as seen below. (My custom .names file is called custom.names, but yours might be named differently.)
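For example, assuming your names file is called custom.names, the edited line should end up reading:

```python
__C.YOLO.CLASSES = "./data/classes/custom.names"
```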
- Convert the yolov4 detector to a Tensorflow detector
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4
Paste this command into Git Bash.
- Crop and save target areas as new images
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/'your image name'.jpg --crop
Input this command into Git Bash, making sure to replace 'your image name' with your actual image name.
- Do image processing and measure RGB values in MATLAB
Use the .m MATLAB file to do the image processing and measure the cropped images; make sure to use the correct directory and the correct number of coral nubbins.
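The .m file's pipeline (edge detection, then dilation, hole filling, border clearing, and erosion) can be sketched in Python/SciPy as below; this is an illustrative re-implementation of the same idea, not the actual MATLAB code, and the threshold choice is an assumption.

```python
import numpy as np
from scipy import ndimage

def isolate_nubbin(gray):
    """Edge detection -> dilation -> hole filling -> border clearing -> erosion."""
    # 1. Edge detection: Sobel gradient magnitude, thresholded at its mean
    mag = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
    edges = mag > mag.mean()
    # 2. Dilation closes small gaps in the edge contours
    mask = ndimage.binary_dilation(edges, iterations=2)
    # 3. Fill the holes enclosed by the contours
    mask = ndimage.binary_fill_holes(mask)
    # 4. Clear any object touching the image border (background clutter)
    labeled, _ = ndimage.label(mask)
    border = np.unique(np.concatenate(
        [labeled[0], labeled[-1], labeled[:, 0], labeled[:, -1]]))
    mask[np.isin(labeled, border)] = False
    # 5. Erode to undo the earlier dilation
    return ndimage.binary_erosion(mask, iterations=2)
```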
The image processing uses edge detection and a series of dilation, hole filling, border clearing, and erosion operations to isolate the coral nubbin from the background.
- Test CoralDX
Predict, then crop and save the images.
Do image processing and measure RGB values.
Image processing.
MATLAB will give R, G, and B values for coral nubbins and color blocks.
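The RGB measurement itself amounts to averaging each channel over the cropped, isolated region; a minimal NumPy sketch of the idea (not the actual MATLAB code):

```python
import numpy as np

def mean_rgb(image):
    """Return the mean (R, G, B) of an H x W x 3 image array."""
    return tuple(image.reshape(-1, 3).mean(axis=0))

# e.g. a 2 x 2 dummy patch of pure red pixels
patch = np.zeros((2, 2, 3))
patch[..., 0] = 255
```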