locuslab / qpth

A fast and differentiable QP solver for PyTorch.

Home Page: https://locuslab.github.io/qpth/

Equality constraints not working in PyTorch 0.4.1

jeffren66 opened this issue · comments

Hello.

It seems that the equality constraints are not working in the latest version.

When I run the classification experiment of OptNet at:
https://github.com/locuslab/optnet/tree/master/cls
with the
mnist lenet --proj simproj
arguments, qpth shows the following error.

I'm using qpth v0.0.13 with PyTorch 0.4.1.
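
For what it's worth, here is a stripped-down sketch (mine, not from the OptNet repo) of the same kind of equality-constrained QP that the simproj layer in models.py builds; moving all the tensors to CUDA should exercise the same batched PDIPM path as the trace below:

import torch
from qpth.qp import QPFunction

# Projection onto the simplex, roughly as in optnet/cls/models.py:
# minimize 0.5 x'Qx + p'x  s.t.  -x <= 0  and  sum(x) = 1.
nBatch, nCls = 16, 10
Q = torch.eye(nCls).double().unsqueeze(0).repeat(nBatch, 1, 1)
p = torch.randn(nBatch, nCls).double()
G = -torch.eye(nCls).double().unsqueeze(0).repeat(nBatch, 1, 1)
h = torch.zeros(nBatch, nCls).double()
A = torch.ones(nBatch, 1, nCls).double()  # single equality constraint
b = torch.ones(nBatch, 1).double()

# Move everything to the GPU (e.g. Q = Q.cuda(), ...) to go through the
# CUDA btrifact/btriunpack path in pre_factor_kkt.
x = QPFunction()(Q, p, G, h, A, b)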

Thanks.

Stack trace:


RuntimeError Traceback (most recent call last)
in <module>()
280
281 if __name__=='__main__':
--> 282 main()

in main()
208 for epoch in range(1, args.nEpoch + 1):
209 adjust_opt(args, optimizer, epoch)
--> 210 train(args, epoch, net, trainLoader, optimizer, trainF)
211 test(args, epoch, net, testLoader, optimizer, testF)
212 try:

in train(args, epoch, net, trainLoader, optimizer, trainF)
228 data, target = Variable(data), Variable(target)
229 optimizer.zero_grad()
--> 230 output = net(data)
231 loss = F.nll_loss(output, target)
232 # make_graph.save('/tmp/t.dot', loss.creator); assert(False)

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)

~/optnet-master/cls/models.py in forward(self, x)
47 x = F.relu(self.fc1(x))
48 x = self.fc2(x)
---> 49 return self.projF(x)
50
51 class LenetOptNet(nn.Module):

~/optnet-master/cls/models.py in projF(x)
32 A = self.A.unsqueeze(0).expand(nBatch, 1, nCls)
33 b = self.b.unsqueeze(0).expand(nBatch, 1)
---> 34 x = QPFunction()(Q, -x.double(), G, h, A, b).float()
35 x = x.log()
36 return x

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/qpth/qp.py in forward(self, Q_, p_, G_, h_, A_, b_)
89
90 if self.solver == QPSolvers.PDIPM_BATCHED:
---> 91 self.Q_LU, self.S_LU, self.R = pdipm_b.pre_factor_kkt(Q, G, A)
92 zhats, self.nus, self.lams, self.slacks = pdipm_b.forward(
93 Q, p, G, h, A, b, self.Q_LU, self.S_LU, self.R,

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/qpth/solvers/pdipm/batch.py in pre_factor_kkt(Q, G, A)
409
410 LU_A_invQ_AT = btrifact_hack(A_invQ_AT)
--> 411 P_A_invQ_AT, L_A_invQ_AT, U_A_invQ_AT = torch.btriunpack(*LU_A_invQ_AT)
412 P_A_invQ_AT = P_A_invQ_AT.type_as(A_invQ_AT)
413

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/functional.py in btriunpack(LU_data, LU_pivots, unpack_data, unpack_pivots)
121 U = LU_data.new(LU_data.size()).zero_()
122 I_diag = torch.eye(sz).type_as(LU_data).byte().unsqueeze(0).expand(nBatch, sz, sz)
--> 123 L[I_diag] = 1.0
124 L[I_L] = LU_data[I_L]
125 U[I_U] = LU_data[I_U]

RuntimeError: The shape of the mask [1, 64, 64] at index 2 does not match the shape of the indexed tensor [1, 64, 1] at index 2

Changing line 410 in qpth/solvers/pdipm/batch.py
from
LU_A_invQ_AT = btrifact_hack(A_invQ_AT)
to
LU_A_invQ_AT = [x.cuda() for x in btrifact_hack(A_invQ_AT.cpu())]
seems to suppress the error.
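
For clarity, the patched region of pre_factor_kkt would then read roughly as below (note that the list comprehension assumes the inputs are already CUDA tensors, so a CPU-only run would need a guard):

# qpth/solvers/pdipm/batch.py, pre_factor_kkt, around line 410
# original: LU_A_invQ_AT = btrifact_hack(A_invQ_AT)
# workaround: factorize on the CPU, then move the LU data and pivots back to the GPU
LU_A_invQ_AT = [x.cuda() for x in btrifact_hack(A_invQ_AT.cpu())]
P_A_invQ_AT, L_A_invQ_AT, U_A_invQ_AT = torch.btriunpack(*LU_A_invQ_AT)
P_A_invQ_AT = P_A_invQ_AT.type_as(A_invQ_AT)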

I'm seeing a related error when running the same command with pytorch==1.0.1.post2 and qpth==0.0.13:

File "./venv/lib/python3.6/site-packages/qpth/solvers/pdipm/batch.py", line 357, in solve_kkt
invQ_rx = rx.btrisolve(*Q_LU)
RuntimeError: invalid argument 3: dimensions of A and b must be equal at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:862
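
In case it helps to narrow this down, a quick shape check just before the failing call in solve_kkt (variable names taken from the traceback) would show what btrisolve is being handed:

# qpth/solvers/pdipm/batch.py, solve_kkt, just before line 357
print('rx:', rx.shape, 'Q_LU data:', Q_LU[0].shape, 'pivots:', Q_LU[1].shape)
invQ_rx = rx.btrisolve(*Q_LU)  # raises "dimensions of A and b must be equal"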