THUDM / GLM

GLM (General Language Model)



Token id 50035 raises an error

Mryangkaitong opened this issue

The prediction output contains a tensor with id 50035 (the vocabulary is "vocab_size": 50048), but tokenizer.decode([50035]) raises an error:

```
Traceback (most recent call last):
  File "test.py", line 18, in <module>
    print(tokenizer.decode(cur_id))
  File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3471, in decode
    return self._decode(
  File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 931, in _decode
    filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
  File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 912, in convert_ids_to_tokens
    tokens.append(self._convert_id_to_token(index))
  File "/root/.cache/huggingface/modules/transformers_modules/local/tokenization_glm.py", line 348, in _convert_id_to_token
    return self.sp_model.IdToPiece(index)
  File "/usr/local/lib/python3.8/site-packages/sentencepiece/__init__.py", line 501, in _batched_func
    return _func(self, arg)
  File "/usr/local/lib/python3.8/site-packages/sentencepiece/__init__.py", line 494, in _func
    raise IndexError('piece id is out of range.')
IndexError: piece id is out of range.
```
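A likely cause (an assumption, not confirmed in this thread): the config's `vocab_size` (50048) is padded beyond the number of pieces actually stored in the SentencePiece model, so any generated id at or above `sp_model.get_piece_size()` makes `IdToPiece` raise `IndexError`. A minimal workaround sketch that drops such ids before decoding; `safe_decode` and `stub_decode` are hypothetical names, with the stub standing in for the real `tokenizer.decode`:

```python
# Hypothetical workaround sketch: filter out-of-range ids before decoding.
# In real use, sp_vocab_size would come from tokenizer.sp_model.get_piece_size()
# and tokenizer_decode would be tokenizer.decode.

def safe_decode(tokenizer_decode, sp_vocab_size, token_ids):
    """Drop ids the SentencePiece model does not contain, then decode the rest."""
    kept = [i for i in token_ids if 0 <= i < sp_vocab_size]
    return tokenizer_decode(kept)

# Stub decoder used only for this demonstration.
def stub_decode(ids):
    return " ".join(f"<{i}>" for i in ids)

print(safe_decode(stub_decode, 50000, [5, 50035, 7]))  # id 50035 is filtered out
```

Filtering silently hides the symptom; the generated 50035 still suggests the model can emit ids with no corresponding piece, which may need handling on the generation side (e.g. masking padded ids).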

Hi, I ran into the same problem. Have you managed to solve it?