ai-forever / ner-bert

BERT-NER (nert-bert) with google bert https://github.com/google-research.

Key Error creating NerData

MNCTTY opened this issue · comments

Hello again!

I have a strange error while I run
data = NerData.create(train_path, valid_path, vocab_file)


KeyError Traceback (most recent call last)
/opt/anaconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
3063 try:
-> 3064 return self._engine.get_loc(key)
3065 except KeyError:

pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: '1'

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last)
in
----> 1 data = NerData.create(train_path, valid_path, vocab_file)

~/ner-bert-master/modules/data/bert_data.py in create(cls, train_path, valid_path, vocab_file, batch_size, cuda, is_cls, data_type, max_seq_len, is_meta)
389 raise NotImplementedError("No requested mode :(.")
390 return cls(train_path, valid_path, vocab_file, data_type, *fn(
--> 391 train_path, valid_path, vocab_file, batch_size, cuda, is_cls, do_lower_case, max_seq_len, is_meta),
392 batch_size=batch_size, cuda=cuda, is_meta=is_meta)

~/ner-bert-master/modules/data/bert_data.py in get_bert_data_loaders(train, valid, vocab_file, batch_size, cuda, is_cls, do_lower_case, max_seq_len, is_meta, label2idx, cls2idx)
279 tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=do_lower_case)
280 train_f, label2idx = get_data(
--> 281 train, tokenizer, label2idx, cls2idx=cls2idx, is_cls=is_cls, max_seq_len=max_seq_len, is_meta=is_meta)
282 if is_cls:
283 label2idx, cls2idx = label2idx

~/ner-bert-master/modules/data/bert_data.py in get_data(df, tokenizer, label2idx, max_seq_len, pad, cls2idx, is_cls, is_meta)
145 all_args.extend([df["1"].tolist(), df["0"].tolist(), df["2"].tolist()])
146 else:
--> 147 all_args.extend([df["1"].tolist(), df["0"].tolist()])
148 if is_meta:
149 all_args.append(df["3"].tolist())

/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in getitem(self, key)
2686 return self._getitem_multilevel(key)
2687 else:
-> 2688 return self._getitem_column(key)
2689
2690 def _getitem_column(self, key):

/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in _getitem_column(self, key)
2693 # get column
2694 if self.columns.is_unique:
-> 2695 return self._get_item_cache(key)
2696
2697 # duplicate columns & possible reduce dimensionality

/opt/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py in _get_item_cache(self, item)
2484 res = cache.get(item)
2485 if res is None:
-> 2486 values = self._data.get(item)
2487 res = self._box_item_values(item, values)
2488 cache[item] = res

/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals.py in get(self, item, fastpath)
4113
4114 if not isna(item):
-> 4115 loc = self.items.get_loc(item)
4116 else:
4117 indexer = np.arange(len(self.items))[isna(self.items)]

/opt/anaconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
3064 return self._engine.get_loc(key)
3065 except KeyError:
-> 3066 return self._engine.get_loc(self._maybe_cast_indexer(key))
3067
3068 indexer = self.get_indexer([key], method=method, tolerance=tolerance)

pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: '1'

Maybe you've faced it too. I can't find anything similar to my case by googling.

Please check the CSV file that you pass to the model.
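The traceback shows that `get_data` indexes the dataframe with the string column names `"1"` and `"0"` (and optionally `"2"`/`"3"`), so `KeyError: '1'` means the CSV header does not contain a column literally named `1`. A minimal sketch of how to verify this (the tab separator and the sample file contents are assumptions, not the repo's exact format):

```python
import io
import pandas as pd

# A CSV whose header row contains the literal names "0" and "1".
# pandas keeps header values as strings, so df["1"] resolves.
good_csv = io.StringIO("0\t1\nO O B-PER\tJohn lives here\n")
df = pd.read_csv(good_csv, sep="\t")
print(list(df.columns))   # columns are the strings '0' and '1'
print(df["1"].tolist())   # text column, as get_data() expects

# A file with different header names (e.g. "labels"/"text")
# reproduces exactly the KeyError from the issue:
bad_csv = io.StringIO("labels\ttext\nO\thello\n")
bad_df = pd.read_csv(bad_csv, sep="\t")
try:
    bad_df["1"]
except KeyError as err:
    print("KeyError:", err)
```

If your file uses descriptive headers, either rename them before saving (`df.columns = ["0", "1"]`) or regenerate the CSV with the column names the loader expects.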