MuQiuJun-AI / bert4pytorch

An ultra-lightweight PyTorch implementation of BERT, with extensive Chinese comments, an easy-to-modify structure, and continuous updates.

Transformer Quality in Linear Time: gated attention unit (GAU) and FLASH code

aoom opened this issue and commented:

Transformer Quality in Linear Time

Weizhe Hua, Zihang Dai, Hanxiao Liu, Quoc V. Le
We revisit the design choices in Transformers, and propose methods to address their weaknesses in handling long sequences. First, we propose a simple layer named gated attention unit, which allows the use of a weaker single-head attention with minimal quality loss. We then propose a linear approximation method complementary to this new layer, which is accelerator-friendly and highly competitive in quality. The resulting model, named FLASH, matches the perplexity of improved Transformers over both short (512) and long (8K) context lengths, achieving training speedups of up to 4.9× on Wiki-40B and 12.1× on PG-19 for auto-regressive language modeling, and 4.8× on C4 for masked language modeling.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:2202.10447 [cs.LG]
(or arXiv:2202.10447v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2202.10447

https://arxiv.org/abs/2202.10447
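
For reference, below is a minimal PyTorch sketch of a gated attention unit along the lines the abstract describes: a single attention head whose output is gated elementwise, using a relu² attention kernel and a small shared query/key dimension. The class and parameter names (`GatedAttentionUnit`, `expansion_factor`, `query_key_dim`) are illustrative only and are not part of bert4pytorch; the relative position bias and the chunked linear-time approximation that make up FLASH are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttentionUnit(nn.Module):
    """Sketch of a gated attention unit (GAU) per the paper's description.

    A single weak attention head is compensated by an elementwise gate U
    over the attended values V. Hyper-parameter defaults (expansion factor 2,
    query/key dim 128) follow the paper; names here are illustrative.
    """

    def __init__(self, hidden_size, expansion_factor=2, query_key_dim=128):
        super().__init__()
        e = hidden_size * expansion_factor
        self.to_uv = nn.Linear(hidden_size, 2 * e)          # gate U and value V
        self.to_z = nn.Linear(hidden_size, query_key_dim)   # shared base for Q and K
        # cheap per-dimension scale/offset that turn the shared Z into Q and K
        self.gamma = nn.Parameter(torch.randn(2, query_key_dim) * 0.02)
        self.beta = nn.Parameter(torch.zeros(2, query_key_dim))
        self.to_out = nn.Linear(e, hidden_size)

    def forward(self, x, attention_mask=None):
        # x: (batch, seq_len, hidden_size)
        seq_len = x.shape[1]
        u, v = F.silu(self.to_uv(x)).chunk(2, dim=-1)        # each (b, n, e)
        z = F.silu(self.to_z(x))                             # (b, n, s)
        q = z * self.gamma[0] + self.beta[0]
        k = z * self.gamma[1] + self.beta[1]
        # single-head attention with a relu^2 kernel instead of softmax
        qk = torch.einsum('bns,bms->bnm', q, k) / seq_len
        if attention_mask is not None:                       # (b, 1, m), 0 = padding
            qk = qk.masked_fill(attention_mask == 0, 0.0)
        a = F.relu(qk) ** 2
        out = u * torch.einsum('bnm,bme->bne', a, v)         # gate the attended values
        return self.to_out(out)


if __name__ == "__main__":
    gau = GatedAttentionUnit(hidden_size=768)
    x = torch.randn(2, 512, 768)
    print(gau(x).shape)  # torch.Size([2, 512, 768])
```

The gating is what allows a single, simpler attention head to match multi-head quality; FLASH then replaces the full quadratic attention with chunk-local quadratic attention plus a linear global term to reach the reported speedups on long sequences.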