loujie0822 / DeepIE

DeepIE: Deep Learning for Information Extraction

Home Page: https://github.com/loujie0822/DeepIE

Could you provide some model data?

Godlikemandyy opened this issue · comments

Hi,
I tried running etl_span_transformers and it reported the following errors:
2021-01-26 14:49:24,295 - transformers.tokenization_utils - INFO - Model name 'transformer_model_path' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming 'transformer_model_path' is a path or url to a directory containing tokenizer files.
2021-01-26 14:49:24,295 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path. We won't load it.
2021-01-26 14:49:24,296 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path\added_tokens.json. We won't load it.
2021-01-26 14:49:24,296 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path\special_tokens_map.json. We won't load it.
2021-01-26 14:49:24,296 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path\tokenizer_config.json. We won't load it.
Traceback (most recent call last):
  File "run/relation_extraction/etl_span_transformers/main.py", line 148, in <module>
    main()
  File "run/relation_extraction/etl_span_transformers/main.py", line 129, in main
    tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=True)
  File "D:\Anaconda3\envs\deepie\lib\site-packages\transformers\tokenization_utils.py", line 283, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "D:\Anaconda3\envs\deepie\lib\site-packages\transformers\tokenization_utils.py", line 347, in _from_pretrained
    list(cls.vocab_files_names.values())))
OSError: Model name 'transformer_model_path' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'transformer_model_path' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
Could you provide some model data? Thanks!
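
For context, the OSError means args.bert_model is still set to the placeholder 'transformer_model_path', which is neither a known transformers shortcut name nor a local directory containing vocab.txt. A minimal sketch of a tokenizer call that resolves, assuming either the bert-base-chinese shortcut or a locally downloaded checkpoint directory (the local path below is hypothetical):

# Verify that BertTokenizer.from_pretrained can load once the model argument is real.
from transformers import BertTokenizer

# Option 1: a shortcut name that transformers downloads automatically.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese", do_lower_case=True)

# Option 2: a local directory (the value you would pass as args.bert_model);
# it must contain vocab.txt alongside the BERT config and weights.
# tokenizer = BertTokenizer.from_pretrained("/path/to/chinese_bert", do_lower_case=True)

print(tokenizer.tokenize("深度学习信息抽取"))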