stanfordnlp / stanfordnlp

[Deprecated] This library has been renamed to "Stanza". Latest development at: https://github.com/stanfordnlp/stanza


RuntimeError: expected scalar type Long but found Float

bo-scnu opened this issue

When I run the example:

import stanfordnlp

# stanfordnlp.download('en')

nlp = stanfordnlp.Pipeline()
doc = nlp("Barack Obama was born in Hawaii.  He was elected president in 2008.")
doc.sentences[0].print_dependencies()

I get this error:

RuntimeError: expected scalar type Long but found Float

How can I fix it?

The version of PyTorch I have installed is 1.7.1.
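For context, torch.index_select requires an integer (Long) index tensor, so passing a floating-point index reproduces exactly this error in plain PyTorch, independent of stanfordnlp. A minimal sketch:

import torch

x = torch.randn(4, 3)
bad_idx = torch.tensor([0.0, 2.0])   # float index tensor
# x.index_select(0, bad_idx)         # raises: RuntimeError: expected scalar type Long but found Float
good_idx = bad_idx.long()            # casting the indices to int64 satisfies index_select
print(x.index_select(0, good_idx))   # selects rows 0 and 2

The full pipeline output and traceback: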

Use device: gpu
---
Loading: tokenize
With settings: 
{'model_path': '/home/xiebo/stanfordnlp_resources/en_ewt_models/en_ewt_tokenizer.pt', 'lang': 'en', 'shorthand': 'en_ewt', 'mode': 'predict'}
---
Loading: pos
With settings: 
{'model_path': '/home/xiebo/stanfordnlp_resources/en_ewt_models/en_ewt_tagger.pt', 'pretrain_path': '/home/xiebo/stanfordnlp_resources/en_ewt_models/en_ewt.pretrain.pt', 'lang': 'en', 'shorthand': 'en_ewt', 'mode': 'predict'}
---
Loading: lemma
With settings: 
{'model_path': '/home/xiebo/stanfordnlp_resources/en_ewt_models/en_ewt_lemmatizer.pt', 'lang': 'en', 'shorthand': 'en_ewt', 'mode': 'predict'}
Building an attentional Seq2Seq model...
Using a Bi-LSTM encoder
Using soft attention for LSTM.
Finetune all embeddings.
[Running seq2seq lemmatizer with edit classifier]
---
Loading: depparse
With settings: 
{'model_path': '/home/xiebo/stanfordnlp_resources/en_ewt_models/en_ewt_parser.pt', 'pretrain_path': '/home/xiebo/stanfordnlp_resources/en_ewt_models/en_ewt.pretrain.pt', 'lang': 'en', 'shorthand': 'en_ewt', 'mode': 'predict'}
Done loading processors!
---
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-8-7d49d79a3a11> in <module>
      5 
      6 nlp = stanfordnlp.Pipeline()
----> 7 doc = nlp("Barack Obama was born in Hawaii.  He was elected president in 2008.")
      8 doc.sentences[0].print_dependencies()

~/.conda/envs/tensorflow2_3/lib/python3.6/site-packages/stanfordnlp/pipeline/core.py in __call__(self, doc)
    174         if isinstance(doc, str) or isinstance(doc, list):
    175             doc = Document(doc)
--> 176         self.process(doc)
    177         return doc

~/.conda/envs/tensorflow2_3/lib/python3.6/site-packages/stanfordnlp/pipeline/core.py in process(self, doc)
    168         for processor_name in self.processor_names:
    169             if self.processors[processor_name] is not None:
--> 170                 self.processors[processor_name].process(doc)
    171         doc.load_annotations()
    172 

~/.conda/envs/tensorflow2_3/lib/python3.6/site-packages/stanfordnlp/pipeline/lemma_processor.py in process(self, doc)
     64             edits = []
     65             for i, b in enumerate(seq2seq_batch):
---> 66                 ps, es = self.trainer.predict(b, self.config['beam_size'])
     67                 preds += ps
     68                 if es is not None:

~/.conda/envs/tensorflow2_3/lib/python3.6/site-packages/stanfordnlp/models/lemma/trainer.py in predict(self, batch, beam_size)
     86         self.model.eval()
     87         batch_size = src.size(0)
---> 88         preds, edit_logits = self.model.predict(src, src_mask, pos=pos, beam_size=beam_size)
     89         pred_seqs = [self.vocab['char'].unmap(ids) for ids in preds] # unmap to tokens
     90         pred_seqs = utils.prune_decoded_seqs(pred_seqs)

~/.conda/envs/tensorflow2_3/lib/python3.6/site-packages/stanfordnlp/models/common/seq2seq_model.py in predict(self, src, src_mask, pos, beam_size)
    208                     done += [b]
    209                 # update beam state
--> 210                 update_state((hn, cn), b, beam[b].get_current_origin(), beam_size)
    211 
    212             if len(done) == batch_size:

~/.conda/envs/tensorflow2_3/lib/python3.6/site-packages/stanfordnlp/models/common/seq2seq_model.py in update_state(states, idx, positions, beam_size)
    191                 br, d = e.size()
    192                 s = e.contiguous().view(beam_size, br // beam_size, d)[:,idx]
--> 193                 s.data.copy_(s.data.index_select(0, positions))
    194 
    195         # (3) main loop

RuntimeError: expected scalar type Long but found Float
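The failing call is s.data.index_select(0, positions) in update_state(), where positions comes from beam[b].get_current_origin() (see the frame above) and evidently arrives as a Float tensor. The likely cause is that around PyTorch 1.5 the / operator on integer tensors switched to true division, which returns floats, so beam indices computed with / in the beam-search code are no longer Long under PyTorch 1.7.1. A minimal local workaround sketch (not an upstream patch) is to cast the index back to long at the failing line:

# stanfordnlp/models/common/seq2seq_model.py, inside update_state()
# local workaround sketch, assuming positions holds whole-number beam origins:
s.data.copy_(s.data.index_select(0, positions.long()))

Downgrading to a PyTorch release from before the division change (pre-1.5) should also avoid the error. The supported fix, per the deprecation notice at the top, is migrating to Stanza, where the pipeline call is essentially the same:

import stanza

stanza.download('en')   # one-time model download
nlp = stanza.Pipeline('en')
doc = nlp("Barack Obama was born in Hawaii.  He was elected president in 2008.")
doc.sentences[0].print_dependencies()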