lucidrains / x-transformers

A simple but complete full-attention transformer with a set of promising experimental features from various papers

"Stabilizing Transformer Training by Preventing Attention Entropy Collapse" improvement to ViT

catid opened this issue

From the Apple paper "Stabilizing Transformer Training by Preventing Attention Entropy Collapse": https://github.com/apple/ml-sigma-reparam

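For context, the core idea is to reparameterize each weight matrix as W_hat = (gamma / sigma(W)) * W, where sigma(W) is the spectral norm tracked online with power iteration and gamma is a learnable scalar. Below is a minimal PyTorch sketch of such a layer; the class name, initialization, and update details are my own simplification rather than the official apple/ml-sigma-reparam code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SigmaReparamLinear(nn.Module):
    def __init__(self, dim_in, dim_out, bias=True):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_out, dim_in) / dim_in ** 0.5)
        self.bias = nn.Parameter(torch.zeros(dim_out)) if bias else None
        self.gamma = nn.Parameter(torch.ones(1))  # learnable scalar scale
        # power-iteration vectors for the online spectral norm estimate
        self.register_buffer('u', F.normalize(torch.randn(dim_out), dim=0))
        self.register_buffer('v', F.normalize(torch.randn(dim_in), dim=0))

    def forward(self, x):
        if self.training:
            with torch.no_grad():
                # one power-iteration step refines the singular vector estimates
                self.v.copy_(F.normalize(self.weight.t() @ self.u, dim=0))
                self.u.copy_(F.normalize(self.weight @ self.v, dim=0))
        # spectral norm estimate sigma ~= u^T W v; gradients flow through W only
        sigma = torch.einsum('i,ij,j->', self.u, self.weight, self.v)
        weight = (self.gamma / sigma) * self.weight
        return F.linear(x, weight, self.bias)
```
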
Here's my experiment:

catid/cifar10deepspeed@72a0b78

It works! It does a bit better than the default model (85.93% with σReparam vs. 85.6% for the default) and gets rid of those awkward norms.

I also tried changing other Linear layers, in SPT and in the output projection, but that breaks the model. So these changes only seem to make sense inside the transformer.

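For concreteness, here is a rough sketch of that kind of restricted swap: recursively replacing nn.Linear modules under a given submodule (just the transformer blocks) with the reparameterized version and copying the existing weights over. The helper name and the `model.transformer` attribute are placeholders, not the actual layout in my script:

```python
def sigma_reparam_(module):
    # recursively swap nn.Linear for SigmaReparamLinear, copying weights over;
    # call this on the transformer blocks only, e.g. sigma_reparam_(model.transformer),
    # and leave the patch embedding (SPT) and output projection untouched
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            reparam = SigmaReparamLinear(child.in_features, child.out_features,
                                         bias=child.bias is not None)
            with torch.no_grad():
                reparam.weight.copy_(child.weight)
                if child.bias is not None:
                    reparam.bias.copy_(child.bias)
            setattr(module, name, reparam)
        else:
            sigma_reparam_(child)
```
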
On Twitter the authors said you can apply it to other layers as well, like this: https://github.com/catid/cifar10deepspeed/pull/1/files
But that doesn't work in my script for whatever reason, and it feels like a bit of a hack.

@catid hey! thanks for sharing! probably a bit too complicated to introduce to this repository, but a good trick to know