allenai / allennlp

An open-source NLP research library, built on PyTorch.

Home Page: http://www.allennlp.org


Is it possible to load my own quantized model from local

pradeepdev-1995 opened this issue · comments

Here is the code I tried for coreference resolution:

from allennlp.predictors.predictor import Predictor

# Load the cross-lingual coreference model directly from its public URL
model_url = 'https://storage.googleapis.com/pandora-intelligence/models/crosslingual-coreference/minilm/model.tar.gz'
predictor = Predictor.from_path(model_url)

text = "Eva and Martha didn't want their friend Jenny \
    to feel lonely so they invited her to the party."
prediction = predictor.predict(document=text)
print(prediction['clusters'])          # raw coreference clusters as token spans
print(predictor.coref_resolved(text))  # input text with coreferences substituted

It worked well, and I got the output with the coreferences resolved, like below:

Eva and Martha didn't want Eva and Martha's friend Jenny     to feel lonely so Eva and Martha invited their friend Jenny to the party.

Now I have quantized the model used here (https://storage.googleapis.com/pandora-intelligence/models/crosslingual-coreference/minilm/model.tar.gz), and the new quantized model is stored at a specific path on my local machine.
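
For reference, one way such a quantization could have been done is with PyTorch dynamic quantization. This is only a minimal sketch under that assumption; the file paths are hypothetical, and the re-archiving step at the end is a requirement of AllenNLP's archive format, not something shown in the original code:

import torch
from allennlp.models.archival import load_archive

# Load the downloaded archive and pull out the underlying PyTorch model
archive = load_archive("model.tar.gz")  # hypothetical local copy of the archive
model = archive.model

# Dynamically quantize the Linear layers to int8
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Save the quantized weights; to load them back via Predictor.from_path they
# must be repackaged into a model.tar.gz together with the original
# config.json and vocabulary/ directory
torch.save(quantized_model.state_dict(), "quantized_weights.th")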

Can I point model_url at the local path of that customized (quantized) model and run the prediction like below?

model_url = <Path to the quantized model in my local machine>
predictor = Predictor.from_path(model_url)  
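
For what it's worth, Predictor.from_path accepts a local filesystem path to a model.tar.gz archive as well as a URL, so the pattern itself is valid. A minimal sketch, assuming a hypothetical local path (whether the quantized weights actually load cleanly depends on how the archive was repackaged):

from allennlp.predictors.predictor import Predictor

# A local path to a model.tar.gz works the same way as a URL here;
# this path is a hypothetical example, not a real location.
model_path = "/home/user/models/quantized-minilm/model.tar.gz"
predictor = Predictor.from_path(model_path)
prediction = predictor.predict(document=text)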

@AkshitaB this is just a friendly ping to make sure you haven't forgotten about this issue 😜

@AkshitaB No, never. Strictly following 😇

@pradeepdev-1995 Closing this in favor of #5723. Let us know if the guide chapter linked there does not help.