huawei-noah / Efficient-AI-Backbones

Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.

Wave-MLP looks like it uses depth-wise conv (continued)

Phuoc-Hoan-Le opened this issue

Hi,

Following up on issue #191, I am still unsure how a 1xK/Kx1 depth-wise convolution can be directly translated into a pure matrix multiplication, or how Wave-MLP qualifies as an MLP model.

I understand that you have to limit the window size to handle dense prediction tasks with varying input image sizes, but I am still wondering how a 1xK/Kx1 depth-wise convolution can be directly translated into a pure matrix multiplication. As far as I know, MLP models such as MLP-Mixer, ResMLP, etc., do not share weights among pixels/patches; instead, they share weights among channels.
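To make the distinction concrete, here is a minimal sketch (assuming PyTorch; all shapes and variable names are illustrative, not from the Wave-MLP code) of what "a 1xK depth-wise conv as a matmul" would have to look like: the equivalent matrix is a banded Toeplitz matrix whose K weights repeat on every row, i.e. the weights are shared across spatial positions.

```python
import torch
import torch.nn.functional as F

C, W, K = 4, 8, 3                 # channels, width, kernel size (illustrative)
x = torch.randn(1, C, 1, W)       # a single row of pixels per channel
w = torch.randn(C, 1, 1, K)       # one 1xK filter per channel (depth-wise)

# Reference: depth-wise conv with zero padding along the width axis.
y_conv = F.conv2d(x, w, padding=(0, K // 2), groups=C)

# Equivalent matrix multiplication: per channel, a W x W Toeplitz matrix
# whose rows all contain the same K weights (weights shared across positions).
y_mm = torch.zeros_like(y_conv)
for c in range(C):
    T = torch.zeros(W, W)
    for i in range(W):
        for j in range(K):
            k = i + j - K // 2
            if 0 <= k < W:
                T[i, k] = w[c, 0, 0, j]
    y_mm[0, c, 0] = x[0, c, 0] @ T.t()

print(torch.allclose(y_conv, y_mm, atol=1e-5))  # True
```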

In other words, in MLP-based models and even Swin Transformers, each pixel/patch has its own filter, but the filters are shared across the channel dimension, as the sketch below illustrates.
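For contrast, a token-mixing layer in the MLP-Mixer sense is a dense matmul: every (token, token) pair gets its own independent weight, and the same mixing matrix is reused for all channels (again a hypothetical sketch, not the actual MLP-Mixer code).

```python
import torch

C, N = 4, 8                       # channels, number of tokens (illustrative)
x = torch.randn(1, C, N)          # tokens along the last axis
M = torch.randn(N, N)             # dense: an independent weight per token pair

y = x @ M.t()                     # the same M mixes tokens in every channel
print(y.shape)                    # torch.Size([1, 4, 8])
```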