How do I save a quantized model in TorchScript (.jit) form?
RiccardoRuggiero opened this issue · comments
Hello everybody & developers. I've taken a look at this repo and I've successfully quantized my model and checked the results in terms of accuracy. Everything seemed to work fine; however, I would like to save my quantized model in .jit form, in order to execute it on a microcontroller or a mobile device. However, when I try to execute the following command:
torch.jit.save(torch.jit.script(model), model_filepath)
... PyTorch raises an error, so I have no clue how to export my quantized model to TorchScript. Can anybody help me, please?
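One workaround worth trying, since `torch.jit.script` often chokes on constructs inside quantized modules, is `torch.jit.trace`, which records the concrete ops executed on an example input instead of compiling the Python source. Below is a minimal sketch using a toy `nn.Sequential` model and dynamic quantization as a stand-in for your own model and quantization flow (both are assumptions for illustration; the export steps are the same):

```python
import torch
import torch.nn as nn

# Toy model standing in for the actual quantized model (hypothetical).
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
model.eval()

# Dynamic quantization of the Linear layers, as an example; the same
# trace-and-save approach applies to statically quantized models.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# torch.jit.trace records the ops run on a concrete example input,
# sidestepping many scripting failures on quantized modules.
example_input = torch.randn(1, 16)
traced = torch.jit.trace(qmodel, example_input)
torch.jit.save(traced, "quantized_model.pt")

# Reload and verify the saved TorchScript model still runs.
loaded = torch.jit.load("quantized_model.pt")
out = loaded(example_input)
print(out.shape)
```

Note that tracing bakes in the control flow taken for the example input, so if the model has data-dependent branches, those paths must be exercised or the traced graph will be incomplete. It would also help to post the exact error message that `torch.jit.script` raises, since some failures are fixable with small changes (e.g. adding type annotations or replacing unsupported Python constructs).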