ZongxuPan / DrBox-v2-tensorflow

The TensorFlow implementation of DrBox-v2, an improved detector with rotatable boxes for target detection in remote sensing images

evaluation.py error

mindmad opened this issue · comments

Hello @ZongxuPan.

I have a problem when running the evaluation.py file:

Traceback (most recent call last):
  File "evaluation.py", line 224, in <module>
    pr_curve(route_result)
  File "evaluation.py", line 155, in pr_curve
    bep = BEP(pr_rec, pr_pre)
  File "evaluation.py", line 197, in BEP
    interval_rec_pre = pr_rec[0]-pr_pre[0]
IndexError: too many indices for array

I don't know how to handle this error, so I would appreciate some help.

The calculation of the indicators requires multiple data points. The problem you have discovered has been encountered before. One possible reason is that when there is no correct target, or only one target is detected, the BEP cannot be calculated at all. So I suggest you check the test set: if only one target is detected, or none, the indicator indeed cannot be calculated.
If this is not the reason, feel free to continue the discussion.
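For reference, the break-even point is the spot on the precision-recall curve where precision equals recall, so it needs at least two curve points to locate. Below is a minimal sketch of such a computation, assuming recall and precision are 1-D NumPy arrays ordered along the curve; break_even_point is a hypothetical stand-in, not the exact BEP function in evaluation.py:

import numpy as np

def break_even_point(recall, precision):
    # Hypothetical sketch; not the repository's BEP implementation.
    recall = np.atleast_1d(np.asarray(recall, dtype=float))
    precision = np.atleast_1d(np.asarray(precision, dtype=float))
    if recall.size < 2:
        return -1  # a single detection gives one point: no interval to search
    diff = recall - precision  # the BEP is where this changes sign
    for i in range(diff.size - 1):
        if diff[i] == 0.0:
            return recall[i]
        if diff[i] * diff[i + 1] < 0.0:
            # Linear interpolation inside the sign-change interval.
            t = diff[i] / (diff[i] - diff[i + 1])
            return recall[i] + t * (recall[i + 1] - recall[i])
    return -1  # the curves never cross

The guard for fewer than two points corresponds to the failure mode described above: with at most one detection the arrays collapse, which is consistent with the IndexError in the traceback.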

@Anquanzhi Actually, I didn't understand: I am using your demo data and I didn't change anything.
Could you explain more, if possible?

The current data is only provided as a demo; you don't need to test the model before running the evaluation code. Is it because you changed the results in ./data/result after running the test code, so that the situation from my last answer appeared?
Another possibility is that you have not pulled the latest files; the data folder has been updated.

@Anquanzhi Thanks a lot for your help and support.

I re-cloned the repo and will train and test again.

Hello @Anquanzhi,

I have cloned, trained, and tested the model again, and this time the evaluation worked. However, the AP and BEP are weak:

Painting...
BEP: 0.333333333333
AP: 0.222222222222

even though the all_figures.txt.txt file shows good results:

0.999915480614 0
0.999887228012 0
0.999860286713 1
0.999860167503 0
0.999793469906 0
0.999550163746 1
0.997744560242 0
0.743125975132 0
0.434285312891 0
0.373028635979 0

That was for your demo data.

For my own data, the situation is even worse:
Painting...
The recall rate is too low!

BEP: -1
AP: 0.483371083193

and the all_figures.txt.txt file shows:

0.278173714876 1
0.896148264408 1
0.916225612164 1
0.487738460302 1
0.980519235134 1
0.309651911259 0
0.985907554626 1
0.44076231122 0
0.986064553261 1
0.49633744359 0
0.226171344519 0
0.981853187084 1
0.294361531734 1
0.387323856354 0
0.989579737186 1
0.494036614895 1
0.999974131584 0
0.964843630791 1
0.558833897114 0
0.431114941835 1
0.787603437901 1
0.366802752018 1
0.62779122591 1
0.513092637062 1
0.614045083523 1
0.436260849237 1
0.995342254639 1
0.987847089767 1
0.421141445637 1
0.985391378403 1
0.659533202648 1
0.979775309563 1
0.21331782639 0
0.91444838047 0
0.581584870815 0
0.735849797726 1
0.997273504734 1
0.41504907608 1
0.837167203426 1
0.20643222332 0
0.99900084734 1
0.326586782932 0
0.263598144054 0
0.302187234163 0
0.994367063046 1
0.489663392305 1
0.890060722828 1
0.649484813213 1

So why is the evaluation so bad, especially on my own data?

The poor results on the demo are due to too few training samples (only about 10 samples are actually available before data augmentation).
Is there a problem with the amount of your own data? Since your test set does not contain many samples, the training set may not be large either. If your training set is sufficient (the number of single-class targets before data augmentation reaches the thousands) and the training and test data are consistent, then consider the difficulty of the task and stop training once the loss has dropped far enough, to suppress over-fitting.
In addition, BEP is -1 because the test result is not ideal: the precision never equals the recall.
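For context, the AP can be reproduced offline from score/hit pairs like those in all_figures.txt. Below is a minimal sketch, under the assumption that each line holds a confidence score followed by a 0/1 true-positive flag and that the total number of annotated targets is known; the function is illustrative, not the exact evaluation.py code:

import numpy as np

def average_precision(scores, hits, num_ground_truth):
    # Rank detections by descending confidence.
    order = np.argsort(scores)[::-1]
    hits = np.asarray(hits, dtype=float)[order]
    tp = np.cumsum(hits)        # cumulative true positives
    fp = np.cumsum(1.0 - hits)  # cumulative false positives
    recall = tp / num_ground_truth
    precision = tp / (tp + fp)
    # AP = area under the precision-recall curve,
    # rectangle rule over the recall increments.
    prev_recall = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - prev_recall) * precision))

# The ten demo detections above, with hits at ranks 3 and 6:
scores = [0.999915480614, 0.999887228012, 0.999860286713,
          0.999860167503, 0.999793469906, 0.999550163746,
          0.997744560242, 0.743125975132, 0.434285312891,
          0.373028635979]
hits = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(average_precision(scores, hits, num_ground_truth=3))  # 0.2222...

If the demo set contains three annotated targets (an assumption), this reproduces the reported AP of about 0.222, and the precision-recall crossing at rank 3 (both 1/3) matches the reported BEP of about 0.333: a handful of high-scoring false positives is enough to pull both numbers down.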

@Anquanzhi Thank you for your response and help.

My dataset consists of 66,203 samples for training and 50 for testing.
My loss.txt is attached; please check it if possible:
loss.txt

@Anquanzhi Hello, I am grateful for your help.
If you don't mind, I still have trouble raising the AP and BEP, so any help or advice would be appreciated.

When you have problems with your own data, you should check the following aspects:
1. The two parameters PRIOR_HEIGHTS and PRIOR_WEIGHTS appear in pairs; in other words, the two lists must have the same length. Setting these two parameters reasonably requires considering your own data: generally, several cluster centers of the target lengths and widths in the dataset are chosen as the basis for these two parameters (see the sketch after this list).
Reference: our paper "DRBox-v2: An Improved Detector with Rotatable Boxes for Target Detection in SAR Images", Section IV-B-2.
2. The "IS180" parameter depends on the characteristics of the targets in your data. When the head and tail of a target can be distinguished (the target angles in the annotation file range from 0 to 360), the parameter should be set to False, and PRIOR_ANGLES can be set to [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330]; otherwise, set the parameter to True.
3. When you use your own data, we recommend keeping the image block size at 300×300 so that you do not need to change the feature size (FEA_HEIGHT and FEA_WIDTH) and some other parameters. Otherwise, you need to adjust the feature size and step size according to your own data. In addition, make sure that your annotations are consistent with the samples and that there are no problems such as boxes crossing the image boundary.
4. After you switch to your own annotation data, you can verify that data preparation is correct by drawing the prior boxes stored in the "pkl" file (which serve as positive samples) on the images. As shown in the figure below, when the prior boxes encoded as positive samples are similar to the targets, data preparation is fine.

5. When you use your own data, you can adjust the parameter settings as needed:
OVERLAP_THRESHOLD: affects the number of positive samples and how closely the training samples match the annotations.
TEST_SCORE_THRESHOLD: the threshold below which results are discarded in the detection phase.
TEST_NMS_THRESHOLD: predictions whose IoU with a higher-scoring result exceeds this threshold are discarded.
6. In a reasonable training process, the loss curve should decline steadily. When your loss behaves abnormally, you can debug it as follows:
① Comment out one loss term and observe the other to locate the problem (self.loss = self.loc_loss + self.conf_loss).
② When the localization loss is abnormal, there may be a problem with the positive-sample encoding; check whether your annotations and network parameter settings are reasonable (see 3, 4, 5).
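As a concrete illustration of point 1, here is a minimal sketch of how the prior sizes could be chosen by clustering the annotated box dimensions; suggest_priors is a hypothetical helper for illustration, not part of this repository:

import numpy as np

def suggest_priors(heights, widths, k=3, iters=100, seed=0):
    # Pick k cluster centers of the annotated (height, width) pairs
    # as candidate PRIOR_HEIGHTS / PRIOR_WEIGHTS values.
    pts = np.stack([np.asarray(heights, dtype=float),
                    np.asarray(widths, dtype=float)], axis=1)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest center.
        dist = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each center to the mean of its assigned boxes.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers[:, 0].tolist(), centers[:, 1].tolist()

# PRIOR_HEIGHTS, PRIOR_WEIGHTS = suggest_priors(all_heights, all_widths)

Because both lists come from the same cluster centers, they automatically have the same length, as point 1 requires.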

@Anquanzhi
I am very grateful for your help; I can't thank you enough.
My image size is 300×300, but I will work through everything you gave me step by step.
I hope I haven't bothered you; if not, I would be happy to get more advice and guidance from you.