d-li14 / mobilenetv3.pytorch

74.3% MobileNetV3-Large and 67.2% MobileNetV3-Small model on ImageNet

Home Page: https://arxiv.org/abs/1905.02244

request for training loss curve

hengck23 opened this issue · comments

@d-li14 Thank you very much for your work. I was wondering if you could provide the training loss curve as well? It could be in the form of a training log (txt) or a plot like this: https://github.com/d-li14/octconv.pytorch (https://raw.githubusercontent.com/d-li14/octconv.pytorch/master/fig/ablation.png)

Btw, for future work you may want to consider replacing the conv2d's built-in symmetric padding with an explicit pad followed by an unpadded conv2d:

conv2d = nn.Sequential(
    nn.ZeroPad2d((x1, y1, x2, y2)),  # explicit asymmetric padding; ZeroPad2d takes (left, right, top, bottom)
    nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding=0),  # plus any other args (bias, groups, ...)
)

If you do this and select the correct [x1, y1, x2, y2], your model can be converted to a TF, TF/Keras, or TFLite model with the same accuracy. TensorFlow uses asymmetric padding for its "SAME" convolutions. I have written a PyTorch-to-TF converter for your MobileNetV3 and verified that numerical equivalence can be obtained (numerical difference of less than 1e-6); a sketch of how the padding values can be chosen is shown below.
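For reference, a minimal sketch of how the asymmetric padding could be computed to match TensorFlow's "SAME" convention follows; the helper name tf_same_pad and the 3x3 / stride-2 / 224x224 example values are only illustrative, not taken from the repo:

import torch.nn as nn

def tf_same_pad(kernel_size, stride, in_size):
    # TensorFlow "SAME" padding: the total pad can be odd, and the extra pixel
    # goes on the right/bottom, which PyTorch's symmetric padding cannot express.
    if in_size % stride == 0:
        total = max(kernel_size - stride, 0)
    else:
        total = max(kernel_size - in_size % stride, 0)
    before = total // 2
    after = total - before
    return before, after

# example: a 3x3 stride-2 conv on a 224x224 input pads (0, 1) on each axis
top, bottom = tf_same_pad(3, 2, 224)
left, right = tf_same_pad(3, 2, 224)
conv = nn.Sequential(
    nn.ZeroPad2d((left, right, top, bottom)),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=0, bias=False),
)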

Many uses of MobileNet are on mobile phones and are served via TFLite; TFLite also allows 8-bit quantisation.
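As a rough illustration (assuming the converted model has already been exported as a TensorFlow SavedModel at a hypothetical path "mobilenetv3_tf"), post-training quantisation with the standard TFLite converter could look like this:

import tensorflow as tf

# hypothetical path to the converted TensorFlow SavedModel
converter = tf.lite.TFLiteConverter.from_saved_model("mobilenetv3_tf")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantisation
tflite_model = converter.convert()
with open("mobilenetv3.tflite", "wb") as f:
    f.write(tflite_model)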