karpathy / minGPT

A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

What is the purpose of `c_proj` here?

brynhayder opened this issue · comments

```python
self.c_proj = nn.Linear(config.n_embd, config.n_embd)
```

Why do we need an additional linear transformation after the MHA and before the MLP when the dimensions are the same?

(I understand that this is how the original Transformer was implemented, but I took this operation to be for dimensional consistency between sequential attention operations. It seems superfluous here, since the first linear layer in the MLP can already take linear combinations of the attention outputs.)
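For reference, here is a condensed sketch of roughly where `c_proj` sits in the attention forward pass (simplified, with causal masking and dropout omitted; the class and variable names loosely follow minGPT, but this is not the actual implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionSketch(nn.Module):
    """Simplified multi-head self-attention; causal mask and dropout omitted."""
    def __init__(self, n_embd, n_head):
        super().__init__()
        assert n_embd % n_head == 0
        self.n_head = n_head
        self.c_attn = nn.Linear(n_embd, 3 * n_embd)  # joint q, k, v projection
        self.c_proj = nn.Linear(n_embd, n_embd)      # the output projection in question (W^O)

    def forward(self, x):
        B, T, C = x.size()
        q, k, v = self.c_attn(x).split(C, dim=2)
        # split channels into heads: (B, n_head, T, head_dim)
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        att = F.softmax(q @ k.transpose(-2, -1) / k.size(-1) ** 0.5, dim=-1)
        y = att @ v                                       # (B, n_head, T, head_dim)
        y = y.transpose(1, 2).contiguous().view(B, T, C)  # concatenate the heads
        # each head only mixed its own slice of channels; c_proj mixes across heads
        # before the result is added back into the residual stream
        return self.c_proj(y)
```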

I think the point is that W^O can change the dimensionality when the concatenated head outputs are larger or smaller than the model width (i.e. if d_k or d_v had been chosen differently). The paper ultimately chose parameters that make W^O square, but keeping the projection probably made it easier for the authors to try out other parameter settings.
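To make the dimensionality point concrete, here is a toy configuration (the numbers are made up for illustration) where the concatenated head outputs do not match the model width, so W^O cannot be dropped:

```python
import torch
import torch.nn as nn

# hypothetical sizes: per-head value dim d_v chosen so that h * d_v != d_model
d_model, h, d_v = 512, 8, 96           # concatenated heads: 8 * 96 = 768
w_o = nn.Linear(h * d_v, d_model)      # W^O maps 768 -> 512, i.e. non-square

heads = torch.randn(2, 10, h * d_v)    # (batch, seq, concatenated head outputs)
print(w_o(heads).shape)                # torch.Size([2, 10, 512]), back to model width
```

In minGPT (and in the paper's final configuration) d_v = d_model / h, so W^O happens to be square, which is why it looks redundant at first glance.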

Related: #118 (comment)