GRAAL-Research / deepparse

Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning.

Home Page: https://deepparse.org/


Export to ONNX

ml5ah opened this issue

commented

Is your feature request related to a problem? Please describe.
A script to convert the Address Parser (.ckpt) model to ONNX (.onnx)?

Describe the solution you'd like
Has someone successfully converted the address parser model to onnx format?

Thank you for your interest in improving Deepparse.

Hi @ml5ah,

I only used ONNX once, and it was not a successful experience (using it was my initial idea for handling Deepparse weights).

If I recall correctly, the bottleneck was that you had to fix the batch size to a specific value, which made it cumbersome to find an appropriate one. However, that was back in 2019 or so, and things might have evolved since. I will take a look at it.

So the best case would be some export method for an AddressParser to export itself into an ONNX format, right?

I've looked at the PyTorch docs, and it still seems like you need to provide a batch-like dummy input for the export.
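
For reference, the export call looks roughly like the sketch below. This is a minimal sketch, assuming `AddressParser.model` can be exported directly; the input shape is a placeholder and would need adapting to the model's real forward signature (e.g. pre-computed embeddings and decomposition lengths).

```python
import torch
from deepparse.parser import AddressParser

address_parser = AddressParser(model_type="fasttext")
model = address_parser.model  # the underlying PyTorch module

# Placeholder dummy input: (batch, words, embedding_dim); the real model
# expects pre-computed 300d word embeddings plus decomposition lengths.
dummy_input = torch.randn(1, 10, 300)

# Export with a fixed batch size of 1
torch.onnx.export(model, (dummy_input,), "address_parser.onnx")
```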

If you come up with a method that can export the AddressParser.model (model) attribute into an ONNX format, I will be more than happy to merge a PR for it. Otherwise, I don't find ONNX helpful and will only provide a new save_address_parser_weights method to save the model's weights in the next release.

commented

Thanks for the reply, @davebulaval!

Yes, that's correct; that would be the best way. I have been trying to work with the built-in export function in torch but keep running into issues. The export call works, but I'm having trouble initializing an inference session in onnxruntime.
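
For context, this is roughly the onnxruntime session setup I am attempting; the file name and input shape below are placeholders matching a dummy export.

```python
import numpy as np
import onnxruntime as ort

# Load the exported model; this is the step currently failing for me
session = ort.InferenceSession("address_parser.onnx")

# Feed an input matching the dummy shape used at export time
feed = {session.get_inputs()[0].name: np.random.randn(1, 10, 300).astype(np.float32)}
outputs = session.run(None, feed)
```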

Fixing the batch_size to 1 should be good as well, for starters!

Do share any insights/suggestions. Thanks!

FYI - my error:

[Screenshot: onnxruntime error traceback, 2022-08-04]

commented

@davebulaval saw your updated reply - got it, that makes sense. Sure, I'll keep you posted if I have any success.

It seems like a float typing error (it converts some tensors into float and others into long). The LSTM parameters are LongTensor, and the problem may be there.

@ml5ah I've just added the save_model_weights method to the AddressParser class on the dev branch. It saves the PyTorch state dictionary in a pickle format.

If you need to use the model in another ML framework or code base (e.g. Java), you can 'simply' load the weights matrices. It is usually convenient, but you might need some naming/format conversion.
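
A minimal usage sketch, assuming the dev-branch method takes a file path (the file name here is illustrative):

```python
import torch
from deepparse.parser import AddressParser

address_parser = AddressParser(model_type="fasttext")

# Dump the pickled PyTorch state dictionary to disk
address_parser.save_model_weights("address_parser_weights.p")

# Elsewhere, the weights can be reloaded without Deepparse's training code,
# possibly after renaming keys to match the target framework
state_dict = torch.load("address_parser_weights.p")
```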

commented

Thanks @davebulaval! That function helped, and I was able to move forward, though I faced some more roadblocks along the way.

I faced two problems:

  1. The size of the input tensor varies with the number of "words" in the input address text, which impacts the decomposition-lengths input as well. I worked around this temporarily to unblock myself, but I'm not sure ONNX can handle it.

  2. The export uses an operator, resolve_conj, that is not currently supported in any opset version (documented in pytorch/pytorch#73619). We might have to wait a bit for it to be supported.

@ml5ah I see. Yeah, it seems like it is not possible for now.

And what exactly is your objective? In which language are you trying to import it?

commented

@davebulaval The objective is to deploy the pre-trained address parser model for inference using onnxruntime (in either Python or Java). To do this, I've been trying to convert the model to ONNX using Python.

Ok, I got it. Do you want the address parser as an API-like service?

commented

Yep, exactly, with the constraint that inference uses onnxruntime with no dependency on PyTorch.

Keep us updated on your progress. I would love to have 1) the script for ONNX conversion and 2) the script to bundle it into an API. It would be a great doc improvement to have that.

This issue is stale because it has been open 60 days with no activity.
Stale issues will automatically be closed 30 days after being marked stale.

commented

@ml5ah Hey, I'm curious whether you managed to create an ONNX export. If yes, it would be great if you could share your insights.

The last time I checked, my bottleneck with ONNX was that batch size needed to be fixed beforehand.

commented

@kleineroscar @davebulaval Apologies for the late reply; it's been a while since I actively looked at this issue. ONNX does support dynamic batch sizes through the dynamic-axes feature, but as far as I remember, it did not work out of the box for Deepparse the last time I tried.

I will give it another shot; hopefully things have changed.
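
For reference, dynamic axes are declared at export time roughly as below. This is a sketch only; the tensor names and shapes are placeholders, not Deepparse's actual ones, and as noted it did not work out of the box.

```python
import torch
from deepparse.parser import AddressParser

model = AddressParser(model_type="fasttext").model
dummy_input = torch.randn(1, 10, 300)  # placeholder (batch, words, 300d embeddings)

# Mark the batch and sequence dimensions as dynamic so the exported
# graph accepts variable-size inputs
torch.onnx.export(
    model,
    (dummy_input,),
    "address_parser_dynamic.onnx",
    input_names=["embeddings"],
    output_names=["tags"],
    dynamic_axes={
        "embeddings": {0: "batch_size", 1: "sequence_length"},
        "tags": {0: "batch_size", 1: "sequence_length"},
    },
)
```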

commented

Took another look; this doesn't seem to be a trivial problem.
pytorch/pytorch#28423 is another issue that will need to be solved before the parser model can be exported to ONNX. I also tried dealing with the embedding, encoder, and decoder models separately, but that is non-trivial as well.

Please share your thoughts.

cc: @kleineroscar @davebulaval

The embedding conversion is done outside of the model; thus, the LSTM expects to receive a 300d vector. For simplicity, I think we could fix a batch size, but it would require padding the examples when the number of addresses to parse is smaller than the batch size.
I am not a fan of ONNX; I found it to be too rigid.
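
Something like this hypothetical padding helper is what I have in mind (the name and shapes are illustrative, and it assumes at most batch_size addresses):

```python
import torch

def pad_to_fixed_batch(embedded_addresses, batch_size, dim=300):
    # Pad a list of (n_words, 300) embedding tensors into a fixed-shape
    # (batch_size, max_words, 300) batch, filling missing examples with zeros.
    max_words = max(t.size(0) for t in embedded_addresses)
    batch = torch.zeros(batch_size, max_words, dim)
    for i, t in enumerate(embedded_addresses):
        batch[i, : t.size(0)] = t
    return batch
```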

I've added functionality to allow someone to put the weights in an S3 bucket and process data in a cloud service. It could be a workaround to create an API in Python.

I have a friend working on Burn, a Rust Torch-like framework, but LSTM is not yet implemented there. I would prioritize a Rust implementation of Deepparse rather than working on/with ONNX.