XiaLiPKU / EMANet

The code for Expectation-Maximization Attention Networks for Semantic Segmentation (ICCV'2019 Oral)

Home Page: https://xialipku.github.io/publication/expectation-maximization-attention-networks-for-semantic-segmentation/


Questions about parameters and FLOPs in Tab. 1.

zhouyuan888888 opened this issue · comments

commented

You note that "All results are achieved with the backbone ResNet-101 with output stride 8". Why, then, are the parameters and FLOPs of EMANet substantially lower than those of the backbone (ResNet-101)? Taking EMANet512 as an example, it is listed with 10M parameters and 43.1G FLOPs, whereas ResNet-101 alone contains 42.6M parameters and 190.6G FLOPs. Is there an error here?
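For reference, here is a minimal sketch (not from this repo) for sanity-checking the backbone parameter count quoted above, assuming torchvision's standard ResNet-101 definition. The exact figure depends on whether the ImageNet classification head (the final fully connected layer) is counted; FLOP counting additionally requires a profiling tool such as fvcore or thop and a fixed input resolution.

```python
# Hypothetical check of the ResNet-101 parameter count; not part of EMANet's code.
import torchvision.models as models

resnet101 = models.resnet101()  # standard ImageNet classification variant

total = sum(p.numel() for p in resnet101.parameters())
fc = sum(p.numel() for p in resnet101.fc.parameters())  # 2048x1000 classifier head

print(f"ResNet-101 total params:        {total / 1e6:.1f} M")   # ~44.5 M
print(f"ResNet-101 without fc head:     {(total - fc) / 1e6:.1f} M")  # ~42.5 M, close to the 42.6 M cited
```

Counting parameters this way only clarifies what the ~42.6M backbone figure refers to; it does not by itself explain how the 10M / 43.1G numbers for EMANet512 in Tab. 1 were computed.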