FudanDISC / DISC-MedLLM

Repository of DISC-MedLLM, a comprehensive solution that leverages Large Language Models (LLMs) to provide accurate and truthful medical responses in end-to-end conversational healthcare services.

Tokenizer error at load time?

chopin1998 opened this issue

The following line raises an error:

```python
tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
```

```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
753 if os.path.isdir(pretrained_model_name_or_path):
754 tokenizer_class.register_for_auto_class()
--> 755 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
756 elif config_tokenizer_class is not None:
757 tokenizer_class = None

/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
2022 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}")
2023
-> 2024 return cls._from_pretrained(
2025 resolved_vocab_files,
2026 pretrained_model_name_or_path,

/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, token, cache_dir, local_files_only, _commit_hash, _is_local, *init_inputs, **kwargs)
2254 # Instantiate the tokenizer.
2255 try:
-> 2256 tokenizer = cls(*init_inputs, **init_kwargs)
2257 except OSError:
2258 raise OSError(

~/.cache/huggingface/modules/transformers_modules/Flmc/DISC-MedLLM/c63decba7cb81129fba4157e1d2cc86eca3da44f/tokenization_baichuan.py in __init__(self, vocab_file, unk_token, bos_token, eos_token, pad_token, sp_model_kwargs, add_bos_token, add_eos_token, clean_up_tokenization_spaces, **kwargs)
53 unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
54 pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
---> 55 super().__init__(
56 bos_token=bos_token,
57 eos_token=eos_token,

/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py in __init__(self, **kwargs)
365 # 4. If some of the special tokens are not part of the vocab, we add them, at the end.
366 # the order of addition is the same as self.SPECIAL_TOKENS_ATTRIBUTES following tokenizers
--> 367 self._add_tokens(
368 [token for token in self.all_special_tokens_extended if token not in self._added_tokens_encoder],
369 special_tokens=True,

/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py in _add_tokens(self, new_tokens, special_tokens)
465 return added_tokens
466 # TODO this is fairly slow to improve!
--> 467 current_vocab = self.get_vocab().copy()
468 new_idx = len(current_vocab) # only call this once, len gives the last index + 1
469 for token in new_tokens:

~/.cache/huggingface/modules/transformers_modules/Flmc/DISC-MedLLM/c63decba7cb81129fba4157e1d2cc86eca3da44f/tokenization_baichuan.py in get_vocab(self)
87 def get_vocab(self):
88 """Returns vocab as a dict"""
---> 89 vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
90 vocab.update(self.added_tokens_encoder)
91 return vocab

~/.cache/huggingface/modules/transformers_modules/Flmc/DISC-MedLLM/c63decba7cb81129fba4157e1d2cc86eca3da44f/tokenization_baichuan.py in vocab_size(self)
83 def vocab_size(self):
84 """Returns vocab size"""
---> 85 return self.sp_model.get_piece_size()
86
87 def get_vocab(self):

AttributeError: 'BaichuanTokenizer' object has no attribute 'sp_model'
```
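Reading the frames above: newer transformers releases changed how the base `PreTrainedTokenizer.__init__` registers special tokens, so it now calls `get_vocab()` and hence `vocab_size` during construction. The Hub's `tokenization_baichuan.py` only assigns `self.sp_model` after its `super().__init__()` call, so `vocab_size` dereferences an attribute that does not exist yet; that is exactly the chain in the traceback. Besides downgrading (see the replies below), custom SentencePiece tokenizers of this shape are typically fixed by loading the model before calling `super().__init__()`. A minimal sketch of that reordering, with names mirroring the traceback rather than the repo's shipped code, and only the methods relevant to the error shown:

```python
# Sketch of the usual reordering fix for custom SentencePiece tokenizers that
# break on newer transformers: load the SentencePiece model BEFORE calling
# super().__init__(), so that get_vocab() -> vocab_size (now invoked while the
# base __init__ adds special tokens) can already see self.sp_model.
# Illustrative only; not the code actually shipped in Flmc/DISC-MedLLM.
import sentencepiece as spm
from transformers import PreTrainedTokenizer


class BaichuanTokenizer(PreTrainedTokenizer):
    def __init__(self, vocab_file, sp_model_kwargs=None, **kwargs):
        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
        self.vocab_file = vocab_file
        # Moved up: in the failing version these two lines ran *after*
        # super().__init__(), which is what raised the AttributeError above.
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)
        super().__init__(**kwargs)  # safe now: vocab_size can see sp_model

    @property
    def vocab_size(self):
        """Returns vocab size (same body as the traceback's frame)."""
        return self.sp_model.get_piece_size()

    def get_vocab(self):
        """Returns vocab as a dict (same body as the traceback's frame)."""
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab
```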

Figured it out: it's a transformers version issue.

Hi, I ran into the same problem. Which Transformers version did you use after fixing it?

I don't remember... If you installed it via pip, just try downgrading the version a bit.

transformers 4.33.3 works.

OK, thanks.

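For later readers, here is the resolution in one runnable snippet; the version pin and the load call are both taken from this thread, and the final print is just a smoke test:

```python
# Pin transformers to the version reported working in this thread (4.33.3),
# then load the tokenizer exactly as in the original post.
# First run: pip install "transformers==4.33.3" sentencepiece
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Flmc/DISC-MedLLM",
    use_fast=False,          # the repo ships a slow SentencePiece tokenizer
    trust_remote_code=True,  # runs tokenization_baichuan.py from the Hub
)
print(tokenizer.vocab_size)  # smoke test: no 'sp_model' AttributeError
```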