pyvandenbussche / transformers-ner

An experiment on the NER task using the Hugging Face Transformers library of state-of-the-art natural language models

Home Page: http://pyvandenbussche.info/2019/named-entity-recognition-with-pytorch-transformers/

dyld: lazy symbol binding failed: Symbol not found: _PySlice_Unpack

SeekPoint opened this issue

"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 28996
}

08/25/2020 10:39:20 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /Users//.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1
08/25/2020 10:39:22 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin from cache at /Users//.cache/torch/transformers/35d8b9d36faaf46728a0192d82bf7d00137490cd6074e8500778afed552a67e5.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2
08/25/2020 10:39:24 - INFO - transformers.modeling_utils - Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
08/25/2020 10:39:24 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
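The two INFO lines above are the expected fine-tuning pattern: the pretrained checkpoint ships a masked-LM head (the discarded cls.* weights) but no token-classification head, so classifier.weight and classifier.bias start randomly initialized. A minimal sketch of the load these lines correspond to, assuming the transformers 2.x API this repo uses; num_labels=9 is a hypothetical stand-in for the label count actually read from ./data/labels.txt:

    from transformers import BertForTokenClassification, BertTokenizer

    # Pulls bert-base-cased-vocab.txt from the S3 cache, as in the log above.
    tokenizer = BertTokenizer.from_pretrained("bert-base-cased", do_lower_case=False)

    # Loads bert-base-cased-pytorch_model.bin; the token-classification head on
    # top is freshly initialized, which is what the two warnings report.
    model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)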
08/25/2020 10:39:24 - INFO - main - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='./data', device=device(type='cpu'), do_eval=False, do_lower_case=False, do_predict=True, do_train=True, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, labels='./data/labels.txt', learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_seq_length=256, max_steps=-1, model_name_or_path='bert-base-cased', model_type='bert', n_gpu=0, no_cuda=False, num_train_epochs=3.0, output_dir='./output', overwrite_cache=True, overwrite_output_dir=True, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=8, save_steps=50, seed=42, server_ip='', server_port='', tokenizer_name='', warmup_steps=0, weight_decay=0.0)
08/25/2020 10:39:24 - INFO - main - Creating features from dataset file at ./data
08/25/2020 10:39:24 - INFO - utils_ner - Writing example 0 of 9141
08/25/2020 10:39:42 - INFO - main - Saving features into cached file ./data/cached_train_bert-base-cased_256
08/25/2020 10:39:46 - INFO - main - ***** Running training *****
08/25/2020 10:39:46 - INFO - main - Num examples = 9141
08/25/2020 10:39:46 - INFO - main - Num Epochs = 3
08/25/2020 10:39:46 - INFO - main - Instantaneous batch size per GPU = 8
08/25/2020 10:39:46 - INFO - main - Total train batch size (w. parallel, distributed & accumulation) = 8
08/25/2020 10:39:46 - INFO - main - Gradient Accumulation steps = 1
08/25/2020 10:39:46 - INFO - main - Total optimization steps = 3429
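The step counts are internally consistent: with 9141 examples, a batch size of 8 (on CPU, so a single device), and gradient accumulation of 1, an epoch is ceil(9141 / 8) = 1143 batches, which is the 0/1143 total of the inner progress bar below, and 1143 x 3 epochs = 3429 optimization steps. A quick check:

    import math

    num_examples, batch_size, grad_accum, epochs = 9141, 8, 1, 3
    steps_per_epoch = math.ceil(num_examples / (batch_size * grad_accum))  # 1143
    total_steps = steps_per_epoch * epochs                                 # 3429, as logged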
Epoch:   0%|          | 0/3 [00:00<?, ?it/s]
         0%|          | 0/1143 [00:00<?, ?it/s]
dyld: lazy symbol binding failed: Symbol not found: _PySlice_Unpack
Referenced from: /Users//ghSrc/transformers-ner_01/.venvpy36/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib
Expected in: flat namespace

dyld: Symbol not found: _PySlice_Unpack
Referenced from: /Users//ghSrc/transformers-ner_01/.venvpy36/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib
Expected in: flat namespace
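_PySlice_Unpack is a CPython C-API function added in Python 3.6.1, so this abort usually means the torch wheel in .venvpy36 was built against a later 3.6.x than the interpreter running it (e.g. a plain 3.6.0). A quick check using only the standard library; if it fails, upgrading the interpreter to >= 3.6.1 and reinstalling torch is the usual fix:

    import sys

    # _PySlice_Unpack only exists in CPython >= 3.6.1; an older interpreter
    # cannot bind the symbol when libtorch_python.dylib is loaded.
    print(sys.version)
    assert sys.version_info >= (3, 6, 1), "interpreter too old for this torch build"

    import torch  # on a too-old interpreter this import (or first use) aborts as above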

zsh: abort python ./run_ner.py --data_dir ./data --model_type bert --model_name_or_path
(.venvpy36) ghSrc/transformers-ner_01 %