aravindhm / nnpreimage

Code for Visualizing Deep Convolutional Neural Networks Using Natural Pre-images, IJCV 2016


About the backpropagation of TV

GlowingHorse opened this issue · comments

commented

Thanks for your help. I would like to read the paper and try to reproduce the code.

I have another question. The following code, from nnpreimage/layers/tv.m, implements the backward pass of the TV regularizer:

```matlab
np  = max(v, 1e-6).^(1 - 2/beta) ;     % = (d1.^2 + d2.^2).^((beta-2)/2)
d1_ = np .* d1 ;                       % dE/d(d1), up to the beta/numPixels scale
d2_ = np .* d2 ;                       % dE/d(d2)
d11 = d1_(:,[1 1:end-1],:,:) - d1_ ;   % shift one column right, then subtract
d22 = d2_([1 1:end-1],:,:,:) - d2_ ;   % shift one row down, then subtract
d11(:,1,:,:) = - d1_(:,1,:,:) ;        % boundary: first column has no left neighbour
d22(1,:,:,:) = - d2_(1,:,:,:) ;        % boundary: first row has no upper neighbour
dx  = (beta/numPixels) * (d11 + d22) ;
```
If it's convenient, could you please point me to the paper or the derivation behind `d11 = d1_(:,[1 1:end-1],:,:) - d1_;`? I cannot work out the correct backpropagation for that line. It looks as if it comes from the Euler-Lagrange equation rather than the ordinary derivative.
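For reference, that line can be obtained with the ordinary chain rule alone; no Euler-Lagrange machinery is needed. The sketch below assumes (since the forward pass is not quoted here) that tv.m computes forward differences with a replicate boundary, so the last difference in each direction is zero, and that `v` is the per-pixel TV term raised to the power beta/2:

```latex
% Assumed forward pass:
%   d1_{i,j} = x_{i,j+1} - x_{i,j}, \quad d2_{i,j} = x_{i+1,j} - x_{i,j},
%   v_{i,j}  = \big(d1_{i,j}^2 + d2_{i,j}^2\big)^{\beta/2}, \quad
%   E(x)     = \tfrac{1}{N} \textstyle\sum_{i,j} v_{i,j}.
% Since v^{1-2/\beta} = (d1^2 + d2^2)^{(\beta-2)/2}, the code's np is
%   w_{i,j} = \big(d1_{i,j}^2 + d2_{i,j}^2\big)^{(\beta-2)/2},
% and dE/d(d1)_{i,j} = (\beta/N)\, w_{i,j}\, d1_{i,j}, which is d1_ up to scale.
% Each pixel x_{i,j} occurs in exactly four differences: with sign -1 in
% d1_{i,j} and d2_{i,j}, and with sign +1 in d1_{i,j-1} and d2_{i-1,j}.
% Summing the four contributions:
\frac{\partial E}{\partial x_{i,j}}
  = \frac{\beta}{N}\Big(
      \underbrace{w_{i,j-1}\, d1_{i,j-1} - w_{i,j}\, d1_{i,j}}_{\texttt{d11}}
    + \underbrace{w_{i-1,j}\, d2_{i-1,j} - w_{i,j}\, d2_{i,j}}_{\texttt{d22}}
    \Big)
```

The column shift `d1_(:,[1 1:end-1],:,:) - d1_` realizes exactly the bracketed horizontal term (it is the transpose, i.e. the adjoint, of the forward-difference operator), and the first column/row are overwritten because those pixels have no left/upper neighbour. This is the discrete counterpart of the Euler-Lagrange expression -beta div(||grad x||^(beta-2) grad x), but the plain chain rule already produces it.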

I ask because, when I instead set

```matlab
np = max(v, 1e-6).^(1 - 2/beta) ;
d1_ = np .* d1 ;
d2_ = np .* d2 ;
d11 = d1(:,[1 1:end-1],:,:) - d1 ;
d22 = d2([1 1:end-1],:,:,:) - d2_ ;
d11(:,1,:,:) = - d1(:,1,:,:) ;
d22(1,:,:,:) = - d2(1,:,:,:) ;
dx = (beta/numPixels) * (d11.*d1_ + d22.*d2_) ;
```
the values in dx cannot be kept in a reasonable range. With the function as written in your code, the problem does not appear.
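The two formulas can also be compared numerically: the original backward pass should match a finite-difference gradient of the TV energy, while a modified one generally will not. Below is a self-contained NumPy sketch (not the repo's code; it assumes, as above, forward differences with a replicate boundary so that the last difference is zero, and v = (d1^2 + d2^2)^(beta/2)):

```python
import numpy as np

def tv_forward(x, beta):
    """TV^beta energy of a 2-D image, assuming forward differences with
    a replicate boundary (the last difference in each direction is zero)."""
    d1 = np.zeros_like(x); d1[:, :-1] = np.diff(x, axis=1)  # horizontal
    d2 = np.zeros_like(x); d2[:-1, :] = np.diff(x, axis=0)  # vertical
    v = (d1**2 + d2**2) ** (beta / 2)
    return v.sum() / x.size, d1, d2, v

def tv_backward(x, beta):
    """Analytic gradient, mirroring the backward pass quoted from tv.m."""
    _, d1, d2, v = tv_forward(x, beta)
    w = np.maximum(v, 1e-6) ** (1 - 2 / beta)  # (d1^2 + d2^2)^((beta-2)/2)
    d1_, d2_ = w * d1, w * d2
    d11 = np.zeros_like(x); d22 = np.zeros_like(x)
    # adjoint (transpose) of the forward-difference operators:
    d11[:, 1:] = d1_[:, :-1] - d1_[:, 1:]; d11[:, 0] = -d1_[:, 0]
    d22[1:, :] = d2_[:-1, :] - d2_[1:, :]; d22[0, :] = -d2_[0, :]
    return (beta / x.size) * (d11 + d22)

def tv_numerical_grad(x, beta, eps=1e-5):
    """Central finite differences of the energy, one pixel at a time."""
    g = np.zeros_like(x)
    for idx in np.ndindex(*x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps; xm[idx] -= eps
        g[idx] = (tv_forward(xp, beta)[0] - tv_forward(xm, beta)[0]) / (2 * eps)
    return g

x = np.cos(np.arange(30.0)).reshape(6, 5)  # arbitrary smooth test image
for beta in (2.0, 3.0):
    err = np.abs(tv_backward(x, beta) - tv_numerical_grad(x, beta)).max()
    assert err < 1e-6  # the quoted backward pass matches the numerical gradient
```

Substituting the modified update (for example `dx = (beta / x.size) * (d11 * d1_ + d22 * d2_)`) into `tv_backward` should make the same check fail, which is consistent with the optimization drifting out of range.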