SentencePiece
beginner-byte opened this issue
Does it support subword vocabularies trained with SentencePiece?
I have converted the "facebook/m2m100_418M" model, and it is now split into an encoder and a decoder. I'm not sure what to do next: "facebook/m2m100_418M" does not ship a tokenizer.json and uses a SentencePiece tokenizer instead, and as a beginner I don't know how to prepare inputs for the converted model.
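For context, here is a minimal sketch of the tokenizer side as I understand it, assuming the split encoder/decoder still consume the original model's token IDs. It uses the standard Hugging Face `M2M100Tokenizer`, which loads the model's bundled SentencePiece file (`sentencepiece.bpe.model`) directly, so no `tokenizer.json` is required; the actual encoder/decoder invocation depends on the runtime and is only indicated in comments.

```python
# Minimal sketch: tokenizing for the converted model with the Hugging Face
# tokenizer, which wraps M2M100's SentencePiece model -- no tokenizer.json needed.
from transformers import M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
tokenizer.src_lang = "en"  # source language code

# Encode the source sentence into the token IDs the encoder expects.
encoded = tokenizer("Hello, how are you?", return_tensors="np")
print(encoded["input_ids"])

# M2M100 generation must start with the target-language token;
# get_lang_id() returns the ID to force as the first decoder token.
forced_bos = tokenizer.get_lang_id("fr")
print(forced_bos)

# Running the encoder and decoder on these IDs is runtime-specific and
# omitted here. Once output IDs are obtained, decode them back to text:
# text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```

Is this roughly the intended workflow, or does the converter expect a different tokenization path for SentencePiece-based models?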