atcbosselut / comet-commonsense

Code for ACL 2019 Paper: "COMET: Commonsense Transformers for Automatic Knowledge Graph Construction" https://arxiv.org/abs/1906.05317


Could you please recover the full sentence in this picture? Thank you!

guotong1988 opened this issue

[screenshot: WX20191209-194740@2x]

I am confused about the alignment.

Are you asking for the tokens that make up s, r, and o for the input here?

Yes. The full input and output.

I think it was a made-up example that probably looked something like:

PersonX sails... < xNeed > have a sail boat
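
For readers following along, here is a rough sketch of how such a triple might be laid out as a single token sequence. The token boundaries and the `<xNeed>` relation marker are assumptions for illustration, not taken from the repository's tokenization code:

```python
# Hypothetical reconstruction of the (s, r, o) layout in the example above.
s_tokens = ["PersonX", "sails", "..."]    # subject phrase (elided in the thread)
r_tokens = ["<xNeed>"]                    # relation, assumed to be one special token
o_tokens = ["have", "a", "sail", "boat"]  # object phrase, i.e. the generation target

# The model reads the concatenated sequence; only the o positions are scored
# during training (see the masking explanation further down in the thread).
input_sequence = s_tokens + r_tokens + o_tokens
print(input_sequence)
```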

What is the output?

Why are the first two tokens of the output [MASK] tokens?

Because during training, we don't learn to predict the tokens of s and r. Our model learns to predict the tokens of o given s and r, so we mask the tokens of s and r at the output during training.
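
A minimal sketch of the loss masking described above, assuming PyTorch and a per-example count of s + r tokens; the function and variable names (`masked_lm_loss`, `num_sr_tokens`) are illustrative, not the repository's actual code:

```python
import torch
import torch.nn.functional as F

def masked_lm_loss(logits, targets, num_sr_tokens):
    """Cross-entropy over o positions only; s and r positions are masked out.

    logits:        (batch, seq_len, vocab) model outputs
    targets:       (batch, seq_len) gold token ids for the full s + r + o sequence
    num_sr_tokens: (batch,) number of s + r tokens in each example
    """
    batch, seq_len, vocab = logits.shape

    # loss_mask[b, t] is 1 only for positions that belong to o.
    positions = torch.arange(seq_len, device=logits.device).unsqueeze(0)  # (1, seq_len)
    loss_mask = (positions >= num_sr_tokens.unsqueeze(1)).float()         # (batch, seq_len)

    per_token = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1), reduction="none"
    ).reshape(batch, seq_len)

    # Average the loss over o positions only, ignoring s and r.
    return (per_token * loss_mask).sum() / loss_mask.sum()
```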

Thank you very much!