Feature Request: Add support for MobileNet V4
j99ca opened this issue
MobileNet V4 was published this summer, and the latest release of timm adds support for the MobileNet V4 family of models.
A snippet from the paper regarding performance compared to MobileNet V3:
"On CPUs, MNv4 models notably outperform others, being approximately twice as fast as MobileNetV3 and several times faster compared to other models for equivalent accuracy targets. On EdgeTPUs, MNv4 models double the speed of MobileNet V3 at the same accuracy level"
I think the model classes in src/otx/algo/classification/timm_model.py can support MV4 with only minor changes. I'll run a quick test and share the results here. Basic usage should be straightforward, but it may take some time before MV4 is available as an official recipe file, since we need to validate that the exported IR model actually works well. Making it available for users to try via the API and CLI shouldn't be hard, though, and I'll keep you updated.
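For anyone who wants to run a similar quick test, here is a minimal sketch that uses timm directly (outside OTX) to confirm the MV4 checkpoints load and a forward pass runs; the 384x384 input matches the e600_r384_in1k tag:

import timm
import torch

# Confirm the installed timm exposes the MobileNet V4 family.
print(timm.__version__)
print(timm.list_models("mobilenetv4*")[:5])

# Build a pretrained MV4 model (downloads weights) and run a dummy forward pass.
model = timm.create_model("mobilenetv4_conv_large.e600_r384_in1k", pretrained=True)
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 384, 384))
print(out.shape)  # expected: torch.Size([1, 1000]) for the ImageNet-1k head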
#3976 - This PR was recently merged into develop; it fixes the API so that different timm models can be used.
However, upgrading the current timm version introduces conflicts, so bumping to the latest timm release and creating a recipe for MV4 will both come in later steps.
In the meantime, the MV4 models can already be used in OTX with the steps below.
- Pull the latest develop:
  git pull
- Install the latest timm version:
  pip install timm==1.0.9
- Use the timm model in OTX:
  from otx.algo.classification.timm_model import TimmModelForMulticlassCls
  timm_model_name = "mobilenetv4_conv_large.e600_r384_in1k"
  mv4_model = TimmModelForMulticlassCls(model_name=timm_model_name, ...)
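For completeness, here is a rough sketch of wiring this up for training end to end. The label_info argument and the Engine usage are assumptions based on the general OTX 2.x API rather than anything verified for MV4 specifically, and "path/to/dataset" is a placeholder:

from otx.algo.classification.timm_model import TimmModelForMulticlassCls
from otx.engine import Engine  # assumption: standard OTX 2.x training entry point

# Assumption: label_info takes the number of classes, as in other OTX
# classification models; adjust it to your dataset.
mv4_model = TimmModelForMulticlassCls(
    label_info=10,
    model_name="mobilenetv4_conv_large.e600_r384_in1k",
)

# "path/to/dataset" is a placeholder; Engine builds the data pipeline from it.
engine = Engine(data_root="path/to/dataset", model=mv4_model)
engine.train(max_epochs=10)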