anucvml / ddn

Deep Declarative Networks

Tips for getting the Lagrangian derivative to 0?

Parskatt opened this issue

Hi,

I have an equality-constrained problem which I solve with RANSAC and then refine with an auxiliary objective function, in a similar fashion to your PnP node (since the RANSAC objective is typically non-differentiable). I approximately enforce the constraints by adding them to the objective only during the refinement (a rough sketch of what I mean is further down in this post). This seems to work well, and my constraints are still satisfied after refining. However, I still get objective gradients that cannot be solved exactly from my constraints:

UserWarning: Non-zero Lagrangian gradient at y:
[15.481806 -9.70834 -7.652554 18.65691 3.6125593 11.075308
0.03811455 11.670857 13.675308 ]
fY: [ 2.615292 5.0672874 -7.8673334 57.839783 12.556461 29.84853
-5.591362 1.9208729 -3.0231378]

It can be seen that LY is smaller than fY, but not 0. Have you had any similar experiences? Are there any optimization tricks that could be employed here? I should note that my constraints are overspecified and could be reduced, but I am not sure if that would help.
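
In case it is useful to spell the check out: as I understand it, the warning is about whether the objective gradient can be written as a combination of the constraint gradients, i.e. whether multipliers lam exist with fY ≈ hY^T lam. I am not certain this is exactly the check inside ddn, but a quick standalone way to inspect the condition it is testing looks like this (fY, hY and lagrangian_gradient are placeholder names for this snippet, not the ddn API):

```python
# Standalone check of the stationarity condition fY ≈ hY^T @ lam.
# fY: objective gradient at the solution, hY: constraint Jacobian at the
# solution (both are placeholder names for this snippet).
import torch

def lagrangian_gradient(fY, hY):
    # Least-squares multipliers lam minimising ||hY^T @ lam - fY||.
    lam = torch.linalg.lstsq(hY.t(), fY.unsqueeze(-1)).solution.squeeze(-1)
    # Residual of the Lagrangian gradient; ~0 at a constrained stationary point.
    return fY - hY.t() @ lam

# Example with a single unit-norm constraint on a 3-vector:
y = torch.tensor([1.0, 0.0, 0.0])
fY = torch.tensor([2.0, 0.5, 0.0])   # some objective gradient
hY = (2.0 * y).unsqueeze(0)          # Jacobian of h(y) = y.y - 1
print(lagrangian_gradient(fY, hY))   # [0.0, 0.5, 0.0] -> not stationary
```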

I guess issues could also come from the fact that my constraints are only approximately satisfied after my optimization, although they are very close to being fulfilled (violations of about 1e-8).
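
For reference, my refinement is roughly of this soft-penalty form (the objective, constraints, and all names below are illustrative placeholders, not the actual ddn PnP node):

```python
# Illustrative sketch: start from the RANSAC estimate and minimise the
# objective plus a quadratic penalty on the equality constraints, so the
# constraints are only approximately enforced.
import torch

def objective(x, y):
    return ((y - x) ** 2).sum()           # placeholder smooth objective f(x, y)

def constraints(y):
    return torch.stack([y.norm() - 1.0])  # placeholder equality constraints h(y) = 0

def refine(x, y_ransac, penalty=1e3, iters=200, lr=1e-2):
    y = y_ransac.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([y], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = objective(x, y) + penalty * (constraints(y) ** 2).sum()
        loss.backward()
        opt.step()
    return y.detach()

# Usage: y_refined = refine(x, y_ransac), with x the inputs and y_ransac the RANSAC estimate.
```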

Ah, and I'm using torch.optim.LBFGS as the optimizer.

Actually, it seems that switching the optimizer to Adam resolved most of the issues. Perhaps I was misusing the LBFGS optimizer somehow.
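
For anyone hitting this later: one way I could have been misusing torch.optim.LBFGS is driving it like Adam, without a closure. LBFGS makes several function evaluations per step and so its step() must be given a closure that re-evaluates the loss. A generic sketch of the two usage patterns (placeholder loss, not my actual refinement code):

```python
import torch

def loss_fn(y):
    return (y ** 2).sum()  # placeholder loss

# Adam: a plain step loop is enough.
y = torch.randn(9, requires_grad=True)
opt = torch.optim.Adam([y], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss_fn(y).backward()
    opt.step()

# LBFGS: step() needs a closure that re-evaluates the loss, because the
# optimizer performs multiple function evaluations per step.
y = torch.randn(9, requires_grad=True)
opt = torch.optim.LBFGS([y], lr=1.0, line_search_fn="strong_wolfe")

def closure():
    opt.zero_grad()
    loss = loss_fn(y)
    loss.backward()
    return loss

for _ in range(20):
    opt.step(closure)
```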

Looks like you've found a solution. In a discussion with @dylan-campbell yesterday, he pointed out that the way the stopping criterion is evaluated on a batch with LBFGS means that some elements may not yet have converged (even though the batch, taken as a whole, has a small enough norm). I'm not sure if this is related to the problem that you're seeing.
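
To make the batch-versus-element distinction concrete, here is a toy illustration (the numbers and the mean-based test are purely illustrative, not the actual stopping test used in ddn):

```python
# A convergence test applied to the batch as a whole can pass while one
# element is still far from stationarity.
import torch

grad_norms = torch.tensor([1e-9, 1e-9, 1e-9, 3e-2])  # per-element gradient norms
tol = 1e-2

print(grad_norms.mean() < tol)  # batch-level test passes -> optimisation stops
print(grad_norms < tol)         # per-element test: the last element has not converged
```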

Thanks for the response! I think my problem is a combination of that and perhaps also that the optimization landscape is very spiky. Looking at my gradient norms over the course of the optimization, it seems like they never vanish.