renatoviolin / Question-Answering-Albert-Electra

Question Answering using Albert and Electra


OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

lyonLeeLPL opened this issue · comments

lyonLeeLPL commented:

D:\Anaconda3\python.exe "D:\Program Files\JetBrains\PyCharm Community Edition 2020.1.1\plugins\python-ce\helpers\pydev\pydevd.py" --multiproc --qt-support=auto --client 127.0.0.1 --port 51440 --file "D:/pycharm projects/Question-Answering-Albert-Electra/app.py"
pydev debugger: process 12076 is connecting

Connected to pydev debugger (build 201.7223.92)
I1107 13:35:17.207592 11528 file_utils.py:41] PyTorch version 1.4.0 available.
I1107 13:35:21.279896 11528 tokenization_utils.py:420] Model name 'ahotrod/albert_xxlargev1_squad2_512' not found in model shortcut name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). Assuming 'ahotrod/albert_xxlargev1_squad2_512' is a path, a model identifier, or url to a directory containing tokenizer files.
I1107 13:35:24.373474 11528 tokenization_utils.py:504] loading file https://s3.amazonaws.com/models.huggingface.co/bert/ahotrod/albert_xxlargev1_squad2_512/spiece.model from cache at C:\Users\a5601\.cache\torch\transformers\26718a99791ef86acb65fc2339f24a8bf44d40d05d6753a1fe1c750019d5f06b.c81d4deb77aec08ce575b7a39a989a79dd54f321bfb82c2b54dd35f52f8182cf
I1107 13:35:24.373474 11528 tokenization_utils.py:504] loading file https://s3.amazonaws.com/models.huggingface.co/bert/ahotrod/albert_xxlargev1_squad2_512/added_tokens.json from cache at None
I1107 13:35:24.373474 11528 tokenization_utils.py:504] loading file https://s3.amazonaws.com/models.huggingface.co/bert/ahotrod/albert_xxlargev1_squad2_512/special_tokens_map.json from cache at C:\Users\a5601\.cache\torch\transformers\40431d530fa0aefd8474875b108cf27da02f7b1fe14fb592b81acb45d3864360.4f0d42b1849e2d6fd72c735fba48dff0d2f0a55f5d1961e79bcfce337d354167
I1107 13:35:24.373474 11528 tokenization_utils.py:504] loading file https://s3.amazonaws.com/models.huggingface.co/bert/ahotrod/albert_xxlargev1_squad2_512/tokenizer_config.json from cache at C:\Users\a5601\.cache\torch\transformers\bca42744d74982b0deb212608808b37b5f9eb8c23973f2f3719ba9cc7607aa8b.11f57497ee659e26f830788489816dbcb678d91ae48c06c50c9dc0e4438ec05b
I1107 13:35:26.558839 11528 configuration_utils.py:282] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/ahotrod/albert_xxlargev1_squad2_512/config.json from cache at C:\Users\a5601\.cache\torch\transformers\86d99b996878771bdd3aa45165bfbbee2995392a073da0312eff99d8a9bfbbd4.3ea8b110975cac874d0aed8467d4f5fa51ffd7ffa492a3068746bc4fa6fc35d5
I1107 13:35:26.558839 11528 configuration_utils.py:318] Model config AlbertConfig {
"_num_labels": 2,
"architectures": [
"AlbertForQuestionAnswering"
],
"attention_probs_dropout_prob": 0,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"decoder_start_token_id": null,
"do_sample": false,
"down_scale_factor": 1,
"early_stopping": false,
"embedding_size": 128,
"eos_token_id": 3,
"finetuning_task": null,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 4096,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"inner_group_num": 1,
"intermediate_size": 16384,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"layers_to_keep": [],
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "albert",
"net_structure_type": 0,
"no_repeat_ngram_size": 0,
"num_attention_heads": 64,
"num_beams": 1,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000
}

I1107 13:35:34.078456 11528 modeling_utils.py:500] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/ahotrod/albert_xxlargev1_squad2_512/pytorch_model.bin from cache at data/pytorch_model.bin
Traceback (most recent call last):
File "D:\Anaconda3\lib\site-packages\transformers\modeling_utils.py", line 509, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location="cpu")
File "D:\Anaconda3\lib\site-packages\torch\serialization.py", line 527, in load
with _open_zipfile_reader(f) as opened_zipfile:
File "D:\Anaconda3\lib\site-packages\torch\serialization.py", line 224, in init
super(_open_zipfile_reader, self).init(torch.C.PyTorchFileReader(name_or_buffer))
RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at ..\caffe2\serialize\inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at ..\caffe2\serialize\inline_container.cc:132)
(no backtrace available)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 983, in _find_and_load
File "", line 967, in _find_and_load_unlocked
File "", line 677, in _load_unlocked
File "", line 728, in exec_module
File "", line 219, in _call_with_frames_removed
File "D:\pycharm projects\Question-Answering-Albert-Electra\albert\albert_xxlarge.py", line 4, in
model = AlbertForQuestionAnswering.from_pretrained('ahotrod/albert_xxlargev1_squad2_512')
File "D:\Anaconda3\lib\site-packages\transformers\modeling_utils.py", line 512, in from_pretrained
"Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

Process finished with exit code -1

All of the required data has been set up, but the error still occurs.

The latest Albert checkpoint was saved with PyTorch 1.6.0, which uses a serialization format that torch 1.4.0 cannot read, so you'll have to update your torch version from 1.4.0 to 1.6.0 or newer (see here).
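For anyone hitting the same error, a minimal sketch of the fix (the version pin and the re-save workaround below are illustrative, not taken from this thread): upgrade torch with pip install --upgrade "torch>=1.6.0", then verify the runtime and retry the load that failed in albert/albert_xxlarge.py:

import torch
from transformers import AlbertTokenizer, AlbertForQuestionAnswering

# The hosted checkpoint is in the zip-based serialization format,
# so this should print 1.6.0 or newer.
print(torch.__version__)

# Same calls that raised the OSError in the traceback above.
tokenizer = AlbertTokenizer.from_pretrained('ahotrod/albert_xxlargev1_squad2_512')
model = AlbertForQuestionAnswering.from_pretrained('ahotrod/albert_xxlargev1_squad2_512')

If upgrading in place is not an option, an alternative (untested here) is to load the checkpoint once in an environment with torch >= 1.6 and re-save it in the legacy format via torch.save(model.state_dict(), 'data/pytorch_model.bin', _use_new_zipfile_serialization=False), which torch 1.4.0 can read.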