BachiLi / redner

Differentiable rendering without approximation.

Home Page: https://people.csail.mit.edu/tzumao/diffrt/


Redner structure (ambiguity in forward and backward mechanism)

arsabe opened this issue

I am a newbie in differentiable rendering. I am trying to understand the concept by following the code and examples for Redner as well as the published paper, but it's still not clear to me how the method actually handles the problem.

What I gathered is the following (please correct me if I am wrong): in Redner, the forward pass is done by the C++ module, which returns the rendered image as a tensor. The loss between the rendered image and the ground-truth image is then computed and handed over to PyTorch for the backward pass. So the forward pass is done by the renderer module, and the backward pass (including the gradient computation) is done by PyTorch. As far as I understand, what a differentiable renderer generally offers is a rendering function that is differentiable with respect to the scene parameters (e.g., by handling the discontinuities in the rendering integral, Monte Carlo importance sampling, etc.), so that the gradient of the loss with respect to the scene parameters can be computed and passed to the optimizer. Mathematically this makes complete sense.
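To make my mental model concrete, here is a toy sketch of how I picture the mechanism. This is my own illustration, not Redner's actual code: `render_forward` and `render_backward` are hypothetical stand-ins for the C++ kernels, and the "scene" is collapsed to two parameters producing a 1x1 image.

```python
import torch

def render_forward(vertices, light):
    # Toy stand-in for the C++ renderer: a fake 1x1 "image" that
    # depends smoothly on the scene parameters.
    return (vertices.sum() * light).reshape(1, 1)

def render_backward(vertices, light, grad_image):
    # Toy stand-in for the renderer's gradient kernel: given
    # d(loss)/d(image), return d(loss)/d(vertices) and d(loss)/d(light).
    g = grad_image.sum()
    return g * light * torch.ones_like(vertices), g * vertices.sum()

class RenderFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, vertices, light):
        # Forward pass: the renderer turns scene parameters into an image.
        ctx.save_for_backward(vertices, light)
        return render_forward(vertices, light)

    @staticmethod
    def backward(ctx, grad_image):
        # Backward pass: the renderer itself supplies the gradients of the
        # image with respect to the scene parameters; PyTorch only chains
        # them with the incoming d(loss)/d(image).
        vertices, light = ctx.saved_tensors
        return render_backward(vertices, light, grad_image)

vertices = torch.randn(3, requires_grad=True)
light = torch.tensor(2.0, requires_grad=True)
target = torch.zeros(1, 1)

image = RenderFunction.apply(vertices, light)
loss = (image - target).pow(2).mean()
loss.backward()              # PyTorch calls RenderFunction.backward
print(vertices.grad, light.grad)
```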

What I have a hard time understanding is how a continuous rendering function makes a difference in the output image at each iteration. At each step we just subtract two grayscale (or multi-channel RGB, etc.) images and compute the gradient based on the pixel colors. So what happens if we replace the renderer module (the forward pass) in Redner with an ordinary renderer? Would that give us an image with a different pixel-color distribution?
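And this is the optimization loop I have in mind (continuing the toy sketch above). The image subtraction only gives d(loss)/d(image); it is the renderer's backward pass that turns this into gradients with respect to the scene parameters, which is where I suspect an ordinary, non-differentiable renderer would break the chain:

```python
optimizer = torch.optim.Adam([vertices, light], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    image = RenderFunction.apply(vertices, light)
    loss = (image - target).pow(2).mean()  # plain pixel-wise L2 loss
    loss.backward()   # d(loss)/d(image), then the renderer's backward
    optimizer.step()  # update the scene parameters, re-render next step
```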

I appreciate your help and comments in advance, and sorry for the long text.