idrblab / EnsemPPIS

Unable to load pretrained model and version mismatch

suice07 opened this issue

Hello,

I was trying to run the predictions according to the steps, but unfortunately there is a version mismatch problem that I could not solve. I installed all the packages according to the requirement.txt and set the Python version to 3.7.11, but when I run ProtBERT_feature_generator.py, the following error occurred.

Traceback (most recent call last):
  File "ProtBERT_feature_generator.py", line 4, in <module>
    from transformers import BertModel, BertTokenizer
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/transformers/__init__.y", line 43, in <module>
    from . import dependency_versions_check
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/transformers/dependenc_versions_check.py", line 41, in <module>
    require_version_core(deps[pkg])
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/transformers/utils/verions.py", line 94, in require_version_core
    return require_version(requirement, hint)
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/transformers/utils/verions.py", line 85, in require_version
    if want_ver is not None and not ops[op](version.parse(got_ver), version.pars(want_ver)):
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/packaging/version.py",line 54, in parse
    return Version(version)
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/packaging/version.py",line 200, in __init__
    raise InvalidVersion(f"Invalid version: '{version}'")
packaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11'
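(A guess at the cause, not something confirmed by the authors: per the traceback, transformers 4.3.3 passes the combined specifier 0.10.1,<0.11 from its tokenizers requirement straight to packaging.version.parse, and packaging 22+ rejects non-PEP 440 strings outright. Below is a minimal sketch that reproduces just that parse failure, assuming packaging >= 22 is installed; pinning an older release, e.g. pip install "packaging<22", before rerunning is a commonly suggested workaround.)

from packaging import version

# On packaging >= 22 (assumed here) this raises InvalidVersion, matching the
# traceback above; older packaging releases parsed such strings as a
# LegacyVersion and at most emitted a deprecation warning.
try:
    version.parse("0.10.1,<0.11")
except Exception as e:
    print(f"{type(e).__name__}: {e}")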

This seems to be a version mismatch problem. I tried uninstalling the tokenizers package so that the program could continue, and then the following problem appeared.

Traceback (most recent call last):
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1038, in from_pretrained
    state_dict = torch.load(resolved_archive_file, map_location="cpu")
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/torch/serialization.py", line 386, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/torch/serialization.py", line 580, in _load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 3715186 more bytes. The file might be corrupted.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "ProtBERT_feature_generator.py", line 67, in <module>
    generate_protbert_features(file)
  File "ProtBERT_feature_generator.py", line 24, in generate_protbert_features
    model = BertModel.from_pretrained(path)
  File "/opt/conda/envs/ensem/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1041, in from_pretrained
    f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' "
OSError: Unable to load weights from pytorch checkpoint file for './' at './pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

I have no idea why this problem happens. Is the pretrained model a TF model? It would be nice if some help could be offered. Thanks in advance.
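(One check that might narrow it down, just a sketch of my own rather than anything from the authors: the RuntimeError above usually points to an incomplete pytorch_model.bin download rather than to a TF checkpoint, and loading the file directly with torch, outside of transformers, shows whether the file itself is intact.)

import os
import torch

ckpt = "./pytorch_model.bin"  # the path reported in the OSError above
print(f"size on disk: {os.path.getsize(ckpt)} bytes")
try:
    state_dict = torch.load(ckpt, map_location="cpu")
    print(f"checkpoint OK, {len(state_dict)} tensors loaded")
except RuntimeError as e:
    # "unexpected EOF" here means the file is truncated; re-downloading the
    # ProtBERT weights should fix it.
    print(f"checkpoint appears truncated or corrupted: {e}")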

commented

We tested the scripts and no such traceback was reported; please check or rebuild your virtual environment to make sure the "transformers" package (version 4.3.3) is correctly installed.
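(A quick way to double-check the rebuilt environment; only the transformers 4.3.3 pin comes from the reply above, the other packages are printed just for reference.)

import packaging
import tokenizers
import torch
import transformers

# The reply above pins transformers to 4.3.3; the remaining versions are
# printed only to spot obvious mismatches in the environment.
print("transformers:", transformers.__version__)
print("tokenizers:  ", tokenizers.__version__)
print("torch:       ", torch.__version__)
print("packaging:   ", packaging.__version__)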

I have this problem too.