About fbgemm acceleration
nmanhong opened this issue · comments
Why is it that when I test the faster-rcnn model in detectron2, there is almost no difference in test time between enabling and disabling the fbgemm library (with mkldnn disabled)? I want to know where the fbgemm library provides acceleration.
The fbgemm library provides quantized operations. There are PyTorch operators written using fbgemm APIs. If your model uses those quantized PyTorch operators, you should see a difference.
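To illustrate, here is a minimal sketch of how a model would end up on the fbgemm-backed quantized operators: PyTorch's dynamic quantization swaps `nn.Linear` layers for int8 versions whose matrix multiplications run through fbgemm on x86. The toy model and sizes below are illustrative, not taken from the issue.

```python
import torch
import torch.nn as nn

# Select fbgemm as the backend for quantized kernels (default on x86).
torch.backends.quantized.engine = "fbgemm"

# A small float model standing in for a real network.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Replace nn.Linear modules with dynamically quantized (int8) versions.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(4, 128)
out = qmodel(x)  # this forward pass now uses fbgemm-backed int8 matmuls
```

A model that never goes through a quantization workflow like this (as in a stock detectron2 faster-rcnn run) executes ordinary float32 kernels, so toggling fbgemm changes nothing.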
@dskhudia But I do not use the fbgemm API directly. Is there a test case provided somewhere, so I can measure how much improvement there is between using it and not using it?
How are you executing the model?