vincentgong7 / VG_AlexeyAB_darknet

A forked AlexeyAB Darknet repo with extra convenient functions.

Home Page: https://darknet.gong.im

Not able to get output in any form

naman-mehta opened this issue · comments

commented

Dear Vincent,

Congratulations on your amazing work; I really appreciate it. I would be grateful if you could spare some time to help me with the following issues:
1- Not able to compile (make) successfully with OpenCV.
2- Without OpenCV, not able to save predictions as PNG or JPG.
3- Tried to save JSON using "-out" and text using "-save_labels", but both failed.
4- The only result I am able to save, inside result.txt, is the time taken per inference:
Total BFLOPS 65.864
seen 64
folder input=/content/VG_AlexeyAB_darknet/exp/input_images/content/images_test/ and output=exp/out_images/
Start processing /content/VG_AlexeyAB_darknet/exp/input_images/content/images_test/0521fd41-40219907.jpg
/content/VG_AlexeyAB_darknet/exp/input_images/content/images_test/0521fd41-40219907.jpg: Predicted in 264.639000 milli-seconds.
End
Start processing /content/VG_AlexeyAB_darknet/exp/input_images/content/images_test/03d34d05-4a61174c.jpg
/content/VG_AlexeyAB_darknet/exp/input_images/content/images_test/03d34d05-4a61174c.jpg: Predicted in 20.649000 milli-seconds.
End
5- Couldn't find the right variable to change the batch size; or maybe the whole image directory is treated as one batch.
Can you please provide some suggestions?

For more information and procedures I followed please check-
https://colab.research.google.com/drive/1X280Qm3x9Vwm2RvShBpaSWq2jdj91mTo
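For reference, the invocation I attempted can be sketched roughly as follows (the cfg/weights names are placeholders from my setup, and the flags follow the upstream AlexeyAB CLI, so the exact syntax may differ in this fork):

```shell
# Rough sketch of the attempted command (cfg/weights names are placeholders;
# flags follow the upstream AlexeyAB CLI and may differ in this fork).
./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights \
    -dont_show -ext_output -save_labels \
    -out result.json < image_list.txt > result.txt
```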

commented

Dear Vincent,

Yes, I tried to make with OPENCV=0 and GPU=1, but I am not able to save the bounding-box results in a text or JSON file, and I am also not able to save the images as .png or .jpg files.
I get output like:
Not compiled with OpenCV, saving to exp/out_images/06218dd8-085f4f5c.png instead
Failed to write image exp/out_images/06218dd8-085f4f5c.png
Failed to write image exp/out_images/07116ae0-0f12a817.png

And what is the batch size in your solution?
Thanks and regards
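For completeness, the build I tried can be sketched as below (the Makefile line patterns are assumed to match the stock AlexeyAB Makefile). One guess about the "Failed to write image" messages: without OpenCV, darknet falls back to stb_image_write for saving, which fails if the target directory does not already exist, since the fallback writer does not create it.

```shell
# Sketch of the non-OpenCV build (Makefile line patterns assumed to
# match the stock AlexeyAB Makefile).
sed -i 's/^OPENCV=1/OPENCV=0/' Makefile
sed -i 's/^GPU=0/GPU=1/' Makefile
make clean && make

# Without OpenCV, images are written via stb_image_write, which fails
# when the output directory is missing -- creating it first may fix
# "Failed to write image":
mkdir -p exp/out_images
```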

commented

Hey Vincent,

I have CUDA 10.0 in Google Colab, and I think it didn't compile successfully.
The build fails at the following g++ link step:
g++ -std=c++11 -DGPU -I/usr/local/cuda/include/ -Wall -Wfatal-errors -Wno-unused-result -Wno-unknown-pragmas -Ofast -DGPU obj/http_stream.o obj/gemm.o obj/utils.o obj/cuda.o obj/convolutional_layer.o obj/list.o obj/image.o obj/activations.o obj/im2col.o obj/col2im.o obj/blas.o obj/crop_layer.o obj/dropout_layer.o obj/maxpool_layer.o obj/softmax_layer.o obj/data.o obj/matrix.o obj/network.o obj/connected_layer.o obj/cost_layer.o obj/parser.o obj/option_list.o obj/darknet.o obj/detection_layer.o obj/captcha.o obj/route_layer.o obj/writing.o obj/box.o obj/nightmare.o obj/normalization_layer.o obj/avgpool_layer.o obj/coco.o obj/dice.o obj/yolo.o obj/detector.o obj/layer.o obj/compare.o obj/classifier.o obj/local_layer.o obj/swag.o obj/shortcut_layer.o obj/activation_layer.o obj/rnn_layer.o obj/gru_layer.o obj/rnn.o obj/rnn_vid.o obj/crnn_layer.o obj/demo.o obj/tag.o obj/cifar.o obj/go.o obj/batchnorm_layer.o obj/art.o obj/region_layer.o obj/reorg_layer.o obj/reorg_old_layer.o obj/super.o obj/voxel.o obj/tree.o obj/yolo_layer.o obj/upsample_layer.o obj/convolutional_kernels.o obj/activation_kernels.o obj/im2col_kernels.o obj/col2im_kernels.o obj/blas_kernels.o obj/crop_layer_kernels.o obj/dropout_layer_kernels.o obj/maxpool_layer_kernels.o obj/network_kernels.o obj/avgpool_layer_kernels.o -o darknet -lm -pthread -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand -lstdc++

Also, by batch size I mean the number of examples utilized in one iteration, i.e. adding one more dimension to the data. For example, if the batch size is 2000 images and the image size is 1280x720x3, then a batch of 2000x1280x720x3 is loaded into the GPU at once for inference, utilizing the GPU efficiently.
So batch size = the number of images in one forward pass.
Any suggestions to resolve the compilation error?
I also want to know whether your model loads images as batches in one forward pass, or whether you give it the directory and it runs inference on one image after another sequentially and saves the output.
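To make the arithmetic above concrete, the memory such a batch would need can be estimated as follows (float32 storage assumed; activations during the forward pass would add more on top):

```shell
# Elements in a 2000 x 1280 x 720 x 3 batch, at 4 bytes per float32
# (illustrative arithmetic only).
elements=$((2000 * 1280 * 720 * 3))
bytes=$((elements * 4))
echo "$bytes"   # 22118400000 bytes, roughly 20.6 GiB -- more than a Colab GPU holds
```

This suggests a batch of 2000 full-resolution images cannot fit in GPU memory at once, so a much smaller batch size would be needed in practice.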

commented

Hi Vincent,

Thanks for your time and reply. I got the answer to my question. I am trying to increase the batch size during inference to process multiple images concurrently. If you can suggest a way to do this, it would be very helpful.

Thanks and Regards,
Naman

Hi Naman,

I just updated the version with YOLOv4. It supports batch image processing and exports bounding boxes in JSON or TXT. I have tested it on both Ubuntu and Windows; it compiles and runs well.

Recently I am not as busy as before. In case you have any questions, let me know and let's fix them.

BR,
Vincent
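A typical run of the batch mode looks roughly like this (the subcommand and flag spellings below are assumptions from memory; please check the repository README for the authoritative command line):

```shell
# Hypothetical invocation of the fork's batch mode (names and flags
# assumed; see the repository README for the exact syntax).
./darknet detector batch cfg/coco.data cfg/yolov4.cfg yolov4.weights \
    batch exp/in_images/ exp/out_images/ \
    -out exp/out_images/result.json -ext_output > result.txt
```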

commented

Hi Vincent,

Awesome, that sounds good. Great work!

Thanks and Regards,
Naman

Hi Naman,

I checked my code, as it has been a while. In my project (also my solution), OpenCV is not needed; I did not use OpenCV, i.e. OPENCV=0 in the Makefile. Therefore, if you turn OpenCV on, errors might occur.

Regards,
Vincent X. Gong


Thank you so much! I was having the same issues, except my output was only one image. Setting OPENCV=0 in the Makefile resolved my issue.