locuslab / qpth

A fast and differentiable QP solver for PyTorch.

Home Page: https://locuslab.github.io/qpth/


btrisolve() replaced by lu_solve() in PyTorch 1.3.0

DeiPlusAY opened this issue · comments

Hello,

I'm using this package as an intermediate layer in our models. I found that with the newest PyTorch version, the package reports that btrisolve() has been replaced by lu_solve() and btrifact() by lu(). Literally swapping in those two functions yields CUDA memory-access errors.
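For reference, the API migration in question can be sketched as follows. The torch.lu / torch.lu_solve names are the PyTorch 1.3-era replacements mentioned above; the torch.linalg.lu_factor / torch.linalg.lu_solve calls shown are their current equivalents. This is an illustrative sketch of the batched factor-and-solve pattern, not qpth's actual internal code:

```python
import torch

# Batched linear systems A x = b: a batch of 4 well-conditioned 3x3 systems.
A = torch.randn(4, 3, 3) + 5 * torch.eye(3)
b = torch.randn(4, 3, 1)

# Pre-1.3 API (removed):  LU, piv = A.btrifact();  x = b.btrisolve(LU, piv)
# PyTorch 1.3-era API:    LU, piv = torch.lu(A);   x = torch.lu_solve(b, LU, piv)
# Current equivalent under torch.linalg:
LU, piv = torch.linalg.lu_factor(A)
x = torch.linalg.lu_solve(LU, piv, b)

# Check the residual of the batched solve.
assert torch.allclose(A @ x, b, atol=1e-4)
```

Note that the argument order differs between the old and new calls (b.btrisolve(LU, piv) vs. torch.lu_solve(b, LU, piv)), which is one reason a literal find-and-replace can go wrong.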

Also, when switching back to PyTorch 1.0.1, the qpth package works, but it is significantly (about 70x) slower than CPU solvers such as cvxpy. I suspect this slowdown is related to the PyTorch version.

Could you suggest the best working environment (e.g., which PyTorch version) for the qpth package? This would really help our project.

Thanks,
He Jiang

Hi -- I just quickly pushed some fixes to make this work with the latest version of PyTorch, although it still emits deprecation warnings because of indexing and the old Function interface. All of the tests in test.py pass for me on both CPU and CUDA (you need to manually set cuda=True in there). Can you try pulling and running that?

As for the performance and a recommended PyTorch version -- I'll probably deprecate this library soon and point everybody to this wrapper around OSQP, as it seems significantly faster: https://github.com/oxfordcontrol/osqpth

If you subscribe to this repo, I'll send out a notification when that happens, although the backward pass in that repo is slightly off and doesn't reproduce this library's results in some cases.

Thanks for your help and your quick response! I'm running a quick test now.

Best,
He Jiang

Hi Brendan,

The code seems to work well, and it's much faster (~20x) than the previous setup (qpth 0.14 + PyTorch 1.0.1). It did pop up some warnings about indexing, but that's fine. Thanks for your help!

Great! Yes, those indexing issues should be easily fixable with the right calls to .bool() -- PyTorch doesn't seem to tell us where they're coming from, though, so we'll have to search for them manually...
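The .bool() fix mentioned here looks roughly like this (a minimal illustration of the deprecated byte-mask indexing pattern, not the actual qpth code):

```python
import torch

x = torch.arange(6.)
byte_mask = (x > 2).to(torch.uint8)  # old-style uint8 masks trigger deprecation warnings
selected = x[byte_mask.bool()]       # casting the mask to bool avoids the warning
print(selected)  # tensor([3., 4., 5.])
```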

Hi -- I just pushed another update to PyPI that moves to the new Function interface and uses bool indices to get rid of all of the deprecation warnings.

Hi, I found that those lu deprecation warnings are gone now! Thanks a lot!

WARNING batched routines are designed for small sizes. It might be better to use the
Native/Hybrid classical routines if you want good performance.

I've seen this warning when using the GPU solver and am trying to figure out the cause. It may come from the MAGMA package: pytorch/pytorch#16963