nianticlabs / diffusionerf

[CVPR 2023] DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models


How is the gradient with respect to the RGBD patch (rgb, d) combined with the gradient of the photometric loss with respect to (σ, c)?

Starry-lei opened this issue · comments

Hi,
thanks for the excellent work. However, it is unclear to me how the gradient from the diffusion model, which is taken with respect to the RGBD patch (RGB, d), can be added to the gradient of the photometric loss, which is taken with respect to (σ, c). Could you please elaborate on this?

Hi there,

Since the RGBD patch is a differentiable function of the RGBs and densities along the rays that were used to render the patch, any gradients with respect to the patch can be backpropped all the way through to the underlying ray RGBs and densities. So we get a gradient w.r.t. the patch from our diffusion model, and we backprop to turn it into a gradient w.r.t. the RGBs and densities along the rays of the patch, and in turn we backprop that to get a gradient w.r.t. the underlying parameters of the NeRF. Then we feed that gradient into the optimizer, along with the gradient from the usual photometric NeRF loss.
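To make the gradient flow concrete, here is a minimal PyTorch sketch of the mechanism described above. It is not the repository's actual code: `render_rays`, `render_patch`, and `diffusion_score` are hypothetical stand-ins for the NeRF ray renderer, the patch renderer, and the diffusion model's score function, and `lambda_reg` is an assumed weighting factor. The key point is that the external gradient w.r.t. the rendered patch is pushed through the differentiable renderer with `Tensor.backward(gradient=...)`, so it accumulates in the NeRF parameters' `.grad` alongside the photometric gradient before the optimizer step.

```python
import torch

def training_step(nerf, render_rays, render_patch, diffusion_score,
                  rays, gt_rgb, patch_rays, optimizer, lambda_reg=1e-3):
    """One hypothetical training step combining a photometric loss gradient
    with a diffusion-model gradient on a rendered RGBD patch."""
    optimizer.zero_grad()

    # Standard NeRF photometric term on randomly sampled rays.
    pred_rgb = render_rays(nerf, rays)                  # (N, 3)
    photo_loss = ((pred_rgb - gt_rgb) ** 2).mean()
    photo_loss.backward()                               # writes d(photo)/d(params) into .grad

    # Render an RGBD patch; it is a differentiable function of the ray
    # RGBs and densities, and hence of the NeRF parameters.
    rgbd_patch = render_patch(nerf, patch_rays)         # (H, W, 4)

    # Gradient w.r.t. the patch supplied by the diffusion model,
    # treated as a constant (no second-order gradients needed).
    with torch.no_grad():
        grad_wrt_patch = -lambda_reg * diffusion_score(rgbd_patch)

    # Backprop this external gradient through the renderer into the NeRF
    # parameters; PyTorch adds it on top of the photometric gradient.
    rgbd_patch.backward(gradient=grad_wrt_patch)

    optimizer.step()
    return photo_loss.item()
```

In other words, the two gradients are never added in (rgb, d) space versus (σ, c) space directly; both are backpropagated to the same NeRF parameters, where the optimizer sees their sum.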

Best,
Jamie Wynn