inference time
CuttlefishXuan opened this issue · comments
Thanks for the great work!
What is the inference time of the BiSeNetV2 model with input size 1024×512?
According to CoinCheung's repo, the speed of v2 is no different from v1 (https://github.com/CoinCheung/BiSeNet/tree/master/tensorrt). So, can anyone reproduce the 156 fps reported in the paper?
@CuttlefishXuan The inference speed can reach 87 fps with a GTX 1070 and TensorRT, about 170 fps with an RTX 3080 and TensorRT, and about 150 fps with a GTX 1080 Ti and TensorRT:)
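For comparing these numbers against latency figures, throughput and per-frame latency are just reciprocals. A trivial sketch of the conversion (the helper names here are illustrative, not from the repo):

```python
def fps_to_latency_ms(fps):
    """Convert throughput in frames per second to per-frame latency in milliseconds."""
    return 1000.0 / fps

def latency_ms_to_fps(latency_ms):
    """Convert per-frame latency in milliseconds back to frames per second."""
    return 1000.0 / latency_ms

# e.g. 150 fps on a GTX 1080 Ti corresponds to roughly 6.7 ms per frame
print(round(fps_to_latency_ms(150), 2))
```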
Wow, that's close to the paper!
@CuttlefishXuan You could test on your local machine; the result may be slightly different due to the environment. Welcome to share your test result:)
@MaybeShewill-CV I tested the pretrained model on my local machine, which has a 1080 Ti, but I only get 50.16 fps for the bisenetv2 model. Also, is the default input image size 512×1024? Sorry, I am not familiar with TensorFlow.
bisenetv2-tensorflow/tools/cityscapes/timeprofile_cityscapes_bisenetv2.py
Lines 156 to 158 in f55c2e9
The input image was rescaled to 1024×512.
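For anyone unsure what that rescaling step looks like, here is a minimal NumPy-only sketch of bringing an arbitrary frame to the 1024×512 benchmark resolution using nearest-neighbor sampling. This is illustrative only; the repo's script presumably uses a proper resize op (e.g. cv2.resize or tf.image.resize) with interpolation:

```python
import numpy as np

def rescale_nearest(image, target_w=1024, target_h=512):
    """Nearest-neighbor rescale of an HxWxC image to (target_h, target_w).

    Illustrative stand-in for the real preprocessing resize.
    """
    h, w = image.shape[:2]
    rows = np.arange(target_h) * h // target_h  # source row for each output row
    cols = np.arange(target_w) * w // target_w  # source col for each output col
    return image[rows[:, None], cols]

# a dummy 1080p frame, as one might read from Cityscapes-sized input
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
resized = rescale_nearest(frame)
print(resized.shape)  # (512, 1024, 3)
```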
@MaybeShewill-CV OK, thanks. Then why do I only get 50 fps for bisenetv2 on a GTX 1080 Ti? Is there something I missed?
@CuttlefishXuan It is not caused by the code here. Maybe something is wrong with your local environment:)
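Discrepancies like 50 fps vs 150 fps on the same GPU often come from the measurement itself: including cold-start runs (graph construction, kernel autotuning, memory allocation) in the timing, or differing CUDA/cuDNN/TensorRT versions. A hedged timing harness showing the usual warm-up-then-average methodology, with the real `sess.run` / TensorRT execute call replaced by a stub:

```python
import time

def benchmark_fps(infer_fn, n_warmup=10, n_iters=100):
    """Average FPS over n_iters runs after n_warmup warm-up calls.

    infer_fn is a zero-argument callable standing in for one forward pass.
    """
    for _ in range(n_warmup):
        infer_fn()  # warm up: first calls pay one-off initialization costs
    start = time.perf_counter()
    for _ in range(n_iters):
        infer_fn()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# stub standing in for the real model forward pass
fps = benchmark_fps(lambda: sum(range(1000)))
print(f"{fps:.1f} fps")
```

Timing only the steady-state loop is what makes headline numbers like 150 fps reproducible; averaging a run that includes the first few calls drags the figure down substantially.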
@MaybeShewill-CV OK, thanks again for your patience.
@CuttlefishXuan Welcome:)