Training Faster-RCNN using scientific data
jiangzihanict opened this issue · comments
We have a special dataset about extreme weather. The dataset has 16 channels and the images are 1000×700 px, which is totally different from ImageNet. We want to implement end-to-end training based on this dataset.
Faster-RCNN is composed of three parts: base network + RPN + RCNN. The base network is usually a pre-trained CNN (e.g. ResNet, VGG) used for extracting features, but only ImageNet pre-trained models are available because our dataset is not common. So, the question is: can we implement end-to-end training without a pre-trained base network? Or does the end-to-end training of Faster-RCNN include the parameters of the base network? I have seen many works about end-to-end training, but they all use a pre-trained model as their base network and seem to train only the RPN and RCNN.
Hello @jiangzihanict!
Even though data might be quite different from ImageNet, I would still advise you to start from the pre-trained model and not from scratch. Transfer learning is widely used because the learned features on one dataset can be good for many tasks, even using different datasets.
In Luminoth, the way to control whether you want to train the entire network or not is with the following configuration parameter in `config.yml`:

```yaml
base_network:
  # Starting point after which all the variables in the base network will be
  # trainable. If not specified, then all the variables in the network will be
  # trainable.
  fine_tune_from: block2
```
So, changing `block2` to an empty value will make it so that you train the entire network.
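For example, a minimal sketch of a `config.yml` that leaves `fine_tune_from` empty so every layer of the base network is trainable might look like this (the `architecture` value is illustrative, assuming a ResNet-based setup):

```yaml
model:
  base_network:
    architecture: resnet_v1_101  # illustrative choice of base network
    # Left empty: all variables in the base network become trainable,
    # so the whole network is trained end to end.
    fine_tune_from:
```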
PS: I am curious, what does your data look like? Why did you pick Luminoth? 💪
@dekked Thanks for your reply.
The dataset is an open source data of extreme weather. Here is the link: https://extremeweatherdataset.github.io/
As for why I chose Luminoth, I think the lumi CLI tool is convenient.
So, does `block2` mean only the RPN and RCNN parts are trained? Can I use the pre-trained model and still train the entire network?
Thanks for your response!
`block2` means the training will proceed from that block of the ResNet onwards. As you can see in the TensorFlow implementation, the ResNet has 4 blocks. So it's not training only the RPN and RCNN parts, but the last layers of the ResNet, too.
If you use the pre-trained model, you can still train the entire network if you wish, by setting `fine_tune_from` to an empty value as I said before.