A simple demo uncovering multiple masked tokens in the middle of a sentence with RoBERTa; a minimal sketch follows the notes below.
- need to know the number of tokens to unmask up front
- is it that useful?
- Could potentially train up to a maximum masked-token length, and pad out the blank tokens with the unk or pad token
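A minimal sketch of the idea, assuming the Hugging Face transformers library and the roberta-base checkpoint; the example sentence is illustrative only. It fills every `<mask>` position from a single forward pass.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

# Two masked tokens in the middle of the sentence; note the number
# of masks must be chosen up front (the first caveat above).
text = f"The quick brown {tokenizer.mask_token} {tokenizer.mask_token} over the lazy dog."

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask positions and fill each one with its most likely
# token. Each position is predicted independently.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)

filled = inputs.input_ids.clone()
filled[0, mask_positions] = predicted_ids
print(tokenizer.decode(filled[0], skip_special_tokens=True))
```

Note that this fills all masks independently in one pass; an alternative is to refill iteratively, one mask at a time, so later predictions condition on earlier ones.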