locuslab / qpth

A fast and differentiable QP solver for PyTorch.

Home Page: https://locuslab.github.io/qpth/

RunTime Error from e = Variable(torch.Tensor())

robotsorcerer opened this issue · comments

commented

Sorry for my troubles, but why does e = Variable(torch.Tensor()) raise RuntimeError: expected a Variable argument, but got FloatTensor in a recursive class implementation? I mean, e is already a Variable, as defined on line 76 below. The relevant call stack is:

RuntimeError: expected a Variable argument, but got FloatTensor
> /home/robotec/catkin_ws/src/RAL2017/pyrnn/src/model.py(77)qp_layer()
     75             h = self.h.unsqueeze(0).expand(nBatch, nineq)
     76             e = Variable(torch.Tensor())
---> 77             x = QPFunction()(x, Q, p, G, h, e, e)
     78             return x
     79         self.qp_layer = qp_layer

You should make sure that all of your other arguments (x, Q, p, G, h, etc.) are Variables/Parameters as well.

Making everything a Variable/Parameter should fix this issue (though let us know if not). Also, something new we've added: you no longer have to expand h or any other parameter across the minibatch; QPFunction detects unbatched parameters automatically and handles the expansion internally.
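For concreteness, here is a minimal sketch of that pattern (the sizes and data are made up for illustration, and the argument order follows the working example further down in this thread): wrap every tensor in a Variable, including the empty placeholders for the absent equality constraints, and pass the parameters unbatched.

    import torch
    from torch.autograd import Variable
    from qpth.qp import QPFunction

    nBatch, nz, nineq = 16, 6, 12   # placeholder sizes for illustration

    # Every argument must be a Variable (or Parameter), including the
    # empty tensors that stand in for the absent equality constraints.
    Q = Variable(torch.eye(nz).double())
    p = Variable(torch.randn(nBatch, nz).double())
    G = Variable(torch.randn(nineq, nz).double())
    h = Variable(torch.ones(nineq).double())   # z = 0 satisfies G z <= 1, so the QP is feasible
    e = Variable(torch.Tensor().double())      # empty -> no equality constraints

    # No unsqueeze/expand across the minibatch is needed for Q, G, or h;
    # unbatched parameters are detected and broadcast internally.
    x = QPFunction()(Q, p, G, h, e, e)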

commented

Quick question: for chained inequality constraints such as 0 <= x <= 1, how do you count nineq in the QPFunction definition? Since CVX does not allow chaining constraints together, would each chained constraint count as 1 or 2 inequalities in your implementation? E.g., if I have the optimization problem

    \hat z = argmin_z  (1/2) z^T Q z + p^T z
             subject to  0.1 <= z_i <= 1  for i = 1, 2, ..., 6

do you specify nineq to be 6 or 12? E.g., taking nineq = 12 and neq = 0, when I run

import torch
from torch.autograd import Variable

from qpth.qp import QPFunction

nx, nineq, neq = 6, 12, 0
Q = torch.eye(nx)            # quadratic cost term
p = torch.zeros(nx)          # linear cost term
G = torch.randn(nineq, nx)   # inequality constraints G z <= h
h = torch.randn(nineq)
A = torch.Tensor()           # empty tensor -> no equality constraints (else randn(neq, nx))
b = torch.Tensor()           # empty tensor (else randn(neq))

[Q, p, G, h, A, b] = [Variable(x.double()) for x in [Q, p, G, h, A, b]]
xhat = QPFunction()(Q, p, G, h, A, b)
print('xhat ', xhat)

it gives this error:

    RuntimeError                              Traceback (most recent call last)
<ipython-input-52-829e67ed9769> in <module>()
     14 
     15 [Q, p, G, h, A, b] = [Variable(x.double()) for x in [Q, p, G, h, A, b]]
---> 16 xhat = QPFunction()(Q, p, G, h, A, b)
     17 print('GPU double', xhat)

/home/robotec/anaconda2/lib/python2.7/site-packages/qpth-0.0.2-py2.7.egg/qpth/qp.pyc in forward(self, inputs, Q_, G_, h_, A_, b_)
     28         start = time.time()
     29         nBatch = inputs.size(0)
---> 30         Q, _ = expandParam(Q_, nBatch, 3)
     31         G, _ = expandParam(G_, nBatch, 3)
     32         h, _ = expandParam(h_, nBatch, 2)

/home/robotec/anaconda2/lib/python2.7/site-packages/qpth-0.0.2-py2.7.egg/qpth/util.pyc in expandParam(X, nBatch, nDim)
     41         return X.unsqueeze(0).expand(*([nBatch]+list(X.size()))), True
     42     else:
---> 43         raise RuntimeError("Unexpected number of dimensions.")

RuntimeError: Unexpected number of dimensions.

Hi @lakehanne - for box constraints like those, you would need nineq=12 in your case: one row of G per bound (see the sketch after the output below). Also, the code you posted works for me. Are you using the latest qpth version?

qpth(master*)$ ./t.py
xhat  Variable containing:
-1.0712 -0.3375  0.6206 -0.3187 -0.2761 -0.5494
[torch.DoubleTensor of size 1x6]
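For reference, one way to write the box constraints 0.1 <= z_i <= 1 as G z <= h with nineq = 12 rows is sketched below; stacking I and -I is my own illustration of the standard encoding, not anything specific to qpth.

    import torch
    from torch.autograd import Variable
    from qpth.qp import QPFunction

    nz = 6
    # 0.1 <= z_i <= 1 becomes two blocks of rows in G z <= h:
    #    I z <=  1    (upper bounds)
    #   -I z <= -0.1  (lower bounds)
    I = torch.eye(nz)
    G = torch.cat([I, -I], 0)                                   # shape (12, 6), so nineq = 2 * nz = 12
    h = torch.cat([torch.ones(nz), -0.1 * torch.ones(nz)], 0)   # shape (12,)

    Q = torch.eye(nz)
    p = torch.zeros(nz)
    A = torch.Tensor()   # no equality constraints
    b = torch.Tensor()

    [Q, p, G, h, A, b] = [Variable(x.double()) for x in [Q, p, G, h, A, b]]
    zhat = QPFunction()(Q, p, G, h, A, b)
    print(zhat)

With Q = I and p = 0 this just projects the origin onto the box, so every entry of zhat should come out at the lower bound 0.1, which makes an easy sanity check.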
commented

That's surprising. Yes, I'm on qpth version 0.0.5, the latest:

(py36) user@user:~/catkin_ws/src/RAL2017/pyrnn/src$ pip show qpth | grep Version
Version: 0.0.5

I will investigate my environment to make sure nothing else is wrong.

In the trace you sent, it looks like you're using Python 2.7 and qpth-0.0.2.

commented

Thanks a lot, it's been fixed. The QP layer now works with my LSTM model. Commit b00d2d.

Thanks a lot for taking the time to answer my questions. I appreciate it.

Great, glad things are working!