GuideWsp / dot-product-attention

A collection of self-attention modules and pre-trained backbones

dot-product attention

| Architecture | Top-1 Acc. (%) | Top-5 Acc. (%) | Download |
| --- | --- | --- | --- |
| ResNet-50 | 76.74 | 93.47 | model \| log |
| NL-ResNet-50 | 76.55 | 92.99 | model \| log |
| A^2-ResNet-50 | 77.24 | 93.66 | model \| log |
| GloRe-ResNet-50 | 77.81 | 93.99 | model \| log |
| AA-ResNet-50 | 77.57 | 93.73 | model |
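
The common building block behind these backbones is dot-product self-attention in the style of Non-Local networks: queries, keys, and values are produced from the feature map by 1x1 convolutions, pairwise similarities between spatial positions are computed as scaled dot products and softmax-normalized, and the aggregated values are projected back and added residually to the input. The PyTorch sketch below is for illustration only; the class and parameter names are not this repository's actual API.

```python
import torch
import torch.nn as nn


class DotProductAttention2d(nn.Module):
    """Non-local style dot-product self-attention over spatial positions (illustrative)."""

    def __init__(self, in_channels: int, reduction: int = 2):
        super().__init__()
        inner = in_channels // reduction
        # 1x1 convolutions produce queries, keys, and values from the feature map.
        self.query = nn.Conv2d(in_channels, inner, kernel_size=1)
        self.key = nn.Conv2d(in_channels, inner, kernel_size=1)
        self.value = nn.Conv2d(in_channels, inner, kernel_size=1)
        # Project the aggregated values back to the input width for the residual sum.
        self.out = nn.Conv2d(inner, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)                # (B, HW, C')
        k = self.key(x).flatten(2)                                  # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)                # (B, HW, C')
        # Scaled dot-product similarities between all pairs of positions.
        attn = torch.softmax(q @ k / (q.size(-1) ** 0.5), dim=-1)   # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)         # back to (B, C', H, W)
        return x + self.out(y)                                      # residual connection
```

Such a block is typically inserted after a late backbone stage; for example, `DotProductAttention2d(1024)(torch.randn(2, 1024, 14, 14))` returns a tensor of the same shape as its input.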

Models are trained on 32 GPUs with a mini-batch size of 32 per GPU for 100 epochs. Training uses SGD with an initial learning rate of 0.4, momentum of 0.9, and weight decay of 0.0001. The learning rate follows a cosine annealing schedule, with linear warmup over the first 5 epochs at a warmup ratio of 0.25 (see the sketch below).

† trained with an initial learning rate of 0.1 without warmup for 100 epochs, with label smoothing, and a mini-batch size of 32 per GPU on 8 GPUs
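
For reference, the main schedule above can be written as a per-epoch learning-rate function. The sketch below assumes the warmup ratio of 0.25 means the warmup starts at 0.25 × the initial learning rate and ramps linearly up to it, which is one common convention; the function and argument names are illustrative, not part of this codebase.

```python
import math


def lr_at_epoch(epoch: int, base_lr: float = 0.4, total_epochs: int = 100,
                warmup_epochs: int = 5, warmup_ratio: float = 0.25) -> float:
    """Cosine-annealed learning rate with linear warmup (sketch of the schedule above)."""
    if epoch < warmup_epochs:
        # Linear warmup from warmup_ratio * base_lr to base_lr over the first epochs.
        start = warmup_ratio * base_lr
        return start + (base_lr - start) * epoch / warmup_epochs
    # Cosine annealing from base_lr down to 0 over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

The global batch size is 32 GPUs × 32 images = 1024, which is consistent with the initial learning rate of 0.4 under the common linear-scaling rule (0.1 × 1024 / 256).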

License: MIT License

