Riccorl / transformer-srl

Reimplementation of a BERT-based model (Shi et al., 2019), currently the state of the art for English SRL. This model also performs predicate disambiguation.

How to run the model on GPU?

Wangpeiyi9979 opened this issue

Hi, thanks for your nice work.
I have many sentences that need to be parsed. Could you tell me how to parse sentences with this model on a GPU?

To run on GPU, pass the ID of the GPU (as in AllenNLP) via the cuda_device parameter of the from_path method. It's -1 for CPU and 0 for the GPU; if you have multiple GPUs it can be any value from 0 to n.

from transformer_srl import dataset_readers, models, predictors

# Load the pretrained model; cuda_device=0 runs inference on the first GPU.
predictor = predictors.SrlTransformersPredictor.from_path(
    "path/to/srl_bert_base_conll2012.tar.gz",
    "transformer_srl",
    cuda_device=0,
)
predictor.predict(
    sentence="Did Uriah honestly think he could beat the game in under three hours?"
)
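Since you have many sentences, a minimal sketch of batched inference is below. It assumes SrlTransformersPredictor inherits AllenNLP's Predictor interface, so predict_batch_json is available; the batch size, the sentence list, and the "verbs" output key are illustrative and may need adjusting for this model's actual output format.

from transformer_srl import predictors

# Load the model on GPU 0, as in the snippet above.
predictor = predictors.SrlTransformersPredictor.from_path(
    "path/to/srl_bert_base_conll2012.tar.gz",
    "transformer_srl",
    cuda_device=0,
)

sentences = [
    "Did Uriah honestly think he could beat the game in under three hours?",
    "The cat sat on the mat.",
]

# Wrap each sentence in a JSON-style dict and predict in chunks,
# so each forward pass processes several sentences on the GPU at once.
batch_size = 32
results = []
for start in range(0, len(sentences), batch_size):
    batch = [{"sentence": s} for s in sentences[start : start + batch_size]]
    results.extend(predictor.predict_batch_json(batch))

for res in results:
    print(res["verbs"])  # per-predicate SRL frames (AllenNLP-style output; keys may differ)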

thanks!!!