calvinmccarter / idw-attention

Inverse distance-weighted attention learns prototypes in a single-hidden-layer network trained with vanilla cross-entropy loss.

Inverse distance weighting attention

We report the effects of replacing the scaled dot-product inside the attention softmax with the negative log of the Euclidean distance. This form of attention simplifies to inverse distance weighting (IDW) interpolation. When used in simple one-hidden-layer networks trained with vanilla cross-entropy loss on classification problems, it tends to produce a key matrix containing prototypes and a value matrix containing the corresponding logits. We also show that the resulting interpretable networks can be augmented with manually constructed prototypes for low-impact handling of special cases. A sketch of the mechanism is given below.
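The following is a minimal PyTorch sketch of the idea described above, not the repository's actual implementation: the class name `IDWAttentionClassifier`, the epsilon, and the initialization are illustrative choices, and details such as squared vs. plain Euclidean distance may differ from the paper.

```python
# Minimal sketch of inverse-distance-weighted attention (illustrative, not the repo's code).
import torch
import torch.nn as nn


class IDWAttentionClassifier(nn.Module):
    """One-hidden-layer network: keys act as prototypes, values as per-prototype logits."""

    def __init__(self, in_dim: int, num_prototypes: int, num_classes: int, eps: float = 1e-8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_prototypes, in_dim))        # prototype vectors
        self.values = nn.Parameter(torch.zeros(num_prototypes, num_classes))  # logits per prototype
        self.eps = eps  # avoids log(0) when an input coincides with a prototype

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Euclidean distance from each input (query) to each prototype (key): (batch, num_prototypes).
        dists = torch.cdist(x, self.keys) + self.eps
        # softmax(-log d_i) = (1/d_i) / sum_j (1/d_j): inverse distance weighting.
        weights = torch.softmax(-torch.log(dists), dim=-1)
        # Output logits are the IDW interpolation of the value rows.
        return weights @ self.values


if __name__ == "__main__":
    model = IDWAttentionClassifier(in_dim=2, num_prototypes=8, num_classes=3)
    x = torch.randn(16, 2)
    y = torch.randint(0, 3, (16,))
    loss = nn.CrossEntropyLoss()(model(x), y)  # vanilla cross-entropy, as described above
    loss.backward()
```

Because softmax(-log d_i) reduces to d_i^{-1} / Σ_j d_j^{-1}, the attention weights are exactly classical inverse distance weighting. And since prototypes (keys) and their logits (values) are explicit parameter rows, a manually constructed prototype for a special case can, in this sketch, be handled by appending one extra row to each matrix.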

Poster at Associative Memory & Hopfield Networks Workshop @ NeurIPS 2023

Paper on OpenReview

Preprint on arXiv

About


License: GNU Affero General Public License v3.0


Languages

Language: Jupyter Notebook 99.6%
Language: Python 0.4%