chuanqi305 / MobileNetv2-SSDLite

Caffe implementation of SSD and SSDLite detection on MobileNetV2, converted from TensorFlow.


Why is the inference time of MobileNetV2 larger than MobileNetV1?

yuanze-lin opened this issue · comments

Why?

The paper notes that although an inverted-residual bottleneck block has more mult-adds than a depthwise-separable block at the same dimensions, it allows smaller input and output dimensions to be used. However, the deploy.prototxt here uses the same dimensions as MobileNetV1-SSD, so MobileNetV2 ends up slower.
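A rough sketch of that trade-off, counting mult-adds per block (the channel widths and expansion factor t=6 below are illustrative assumptions, not values taken from this repo's prototxt):

```python
def separable_madds(h, w, c_in, c_out, k=3):
    # MobileNetV1-style block: k x k depthwise conv + 1x1 pointwise conv.
    return h * w * c_in * k * k + h * w * c_in * c_out

def bottleneck_madds(h, w, c_in, c_out, t=6, k=3):
    # MobileNetV2-style inverted-residual bottleneck with expansion factor t:
    # 1x1 expansion, k x k depthwise conv, 1x1 projection.
    expanded = t * c_in
    return (h * w * c_in * expanded        # 1x1 expansion
            + h * w * expanded * k * k     # depthwise conv
            + h * w * expanded * c_out)    # 1x1 projection

# At the SAME dimensions, the bottleneck costs far more mult-adds:
print(separable_madds(14, 14, 512, 512))   # depthwise-separable at 512 channels
print(bottleneck_madds(14, 14, 512, 512))  # bottleneck at the same 512 channels

# At its own (smaller) native width, the bottleneck is cheaper:
print(bottleneck_madds(14, 14, 96, 96))    # bottleneck at 96 channels
```

If the prototxt keeps MobileNetV1's channel widths, only the first comparison applies, which matches the slower inference observed here.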