Segmentation fault (core dumped) when detecting 2 classes
Timbimjim opened this issue · comments
Hey Vincent,
Thanks for the great fork! I have a dataset trained for 22 different classes. I used your fork and it ran smoothly through 600 test images, saved the predictions, and so on.
Then I changed my dataset to only two classes (same images, though), trained again, and tried your fork for batch predictions.
Now, after it runs through 4-5 images, I keep getting a segmentation fault (core dumped) error.
I really don't know what the problem is. I did everything the same; the only difference is the new weight file and the fact that it's only 2 classes now.
The training itself went without problems, and I didn't change anything important in the training setup (of course I adjusted classes, filters, etc.).
I Googled a lot and tried many different things, but nothing works.
Any idea what might cause this? Could my weight file be corrupted? It loads successfully, though.
Thanks for any help.
Hey Vincent,
The problem was resolved. I set random = 0 and increased max_batches to 6000 (it was 4000 before), then trained again. I was using the Roboflow Google Colab notebook, which automatically generates the cfg for you by multiplying the number of classes by 2000 for max_batches. But according to AlexeyAB, this value has to be at least 6000.
So it was either that, or random = 1 was overloading the GPU, I guess.
It works fine now. Thanks for your reply anyway.
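For anyone hitting the same issue: the cfg rule described above (max_batches = classes × 2000, but never below 6000, per AlexeyAB's Darknet README) can be sketched as a small helper. This is a hypothetical snippet for illustration, not part of the Roboflow notebook; the function name `yolo_cfg_values` is my own, and the `steps`/`filters` formulas are the usual YOLOv3/v4 recommendations (steps at 80%/90% of max_batches, filters = (classes + 5) × 3 in the conv layer before each [yolo] layer).

```python
def yolo_cfg_values(classes: int) -> dict:
    """Recommended Darknet cfg values for a custom YOLOv3/v4 model.

    Hypothetical helper illustrating AlexeyAB's rule of thumb:
    max_batches = classes * 2000, but never less than 6000.
    """
    max_batches = max(6000, classes * 2000)
    return {
        "max_batches": max_batches,
        # learning-rate decay steps: 80% and 90% of max_batches
        "steps": (int(max_batches * 0.8), int(max_batches * 0.9)),
        # filters in the conv layer preceding each [yolo] layer
        "filters": (classes + 5) * 3,
    }

print(yolo_cfg_values(2))   # the 2-class case from this issue -> max_batches 6000
print(yolo_cfg_values(22))  # the original 22-class case -> max_batches 44000
```

Note that the naive classes × 2000 formula gives 4000 for 2 classes, which is exactly the too-low value the notebook generated here.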
Have a good day mate