Why is the inference time of MobileNetV2 larger than that of MobileNetV1?
yuanze-lin opened this issue · comments
Yuanze Lin commented
Why is that?
ZhuLingfeng1993 commented
In the paper it is noted that, although an inverted-residual bottleneck block uses more mult-adds than a plain depthwise separable block, it allows the network to use smaller input and output dimensions. However, the deploy.prototxt here uses the same dimensions as MobileNetV1-SSD, so we see this result.
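To make the point above concrete, here is a rough mult-adds (MACs) comparison between the two block types. This is an illustrative sketch, not taken from the prototxt: the feature-map size, channel counts, and the expansion factor `t = 6` are assumed values, and strides, batch norm, and activations are ignored.

```python
def dw_separable_macs(h, w, c_in, c_out, k=3):
    """MobileNetV1 block: depthwise k x k conv followed by pointwise 1x1 conv."""
    depthwise = h * w * c_in * k * k      # each input channel convolved with a k x k kernel
    pointwise = h * w * c_in * c_out      # 1x1 conv mixing channels
    return depthwise + pointwise

def inverted_residual_macs(h, w, c_in, c_out, t=6, k=3):
    """MobileNetV2 bottleneck: 1x1 expand, depthwise k x k, 1x1 project."""
    c_mid = t * c_in                      # expanded (hidden) channel count
    expand = h * w * c_in * c_mid         # 1x1 expansion conv
    depthwise = h * w * c_mid * k * k     # depthwise conv in the expanded space
    project = h * w * c_mid * c_out       # 1x1 linear projection back down
    return expand + depthwise + project

# Hypothetical example: a 14x14 feature map with 64 input/output channels.
v1 = dw_separable_macs(14, 14, 64, 64)
v2 = inverted_residual_macs(14, 14, 64, 64)
print(v1, v2)  # the bottleneck costs far more MACs at equal dimensions
```

With identical dimensions the V2 bottleneck is roughly an order of magnitude more expensive, which is why MobileNetV2 is designed around much smaller channel counts; keeping MobileNetV1-SSD's dimensions in deploy.prototxt forfeits that advantage.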