graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"

Home Page: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/


Difference between "padded_grad" and "torch.norm(grads, dim=-1)" when performing densification

NeutrinoLiu opened this issue · comments

Hi, when I compare the conditions for split densification and clone densification, I found that the definition of "too large gradient" differs slightly between the two.

For split, the mask is generated by

selected_pts_mask = torch.where(padded_grad >= grad_threshold, True, False)

while for clone, the mask is generated by

selected_pts_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)

I am not quite sure about the purpose of "padded_grad" here. Considering that the arguments passed to these two functions are identical, is there any difference between these two methods of filtering out large-gradient Gaussians?
Thx

Same question
=> Solved

The shape of the tensor grads is (N, 1), where N is the total number of points.
So torch.norm(grads, dim=-1) doesn't change the gradient values; it only drops the trailing dimension, reducing the shape from (N, 1) to (N,).
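A minimal sketch of this, assuming non-negative gradient magnitudes as accumulated by the repo's densification bookkeeping: the norm over the size-1 last dim is a no-op on the values, and padded_grad only additionally zero-pads up to the current point count (which matters when split runs after clone has added new points whose accumulated gradients are still zero). The "+ 2" post-clone count below is a hypothetical value for illustration.

```python
import torch

# Toy per-Gaussian gradient magnitudes, shape (N, 1); values are
# norms of accumulated screen-space gradients, so they are >= 0.
grads = torch.tensor([[0.3], [0.7], [0.05], [0.9]])
grad_threshold = 0.5

# Clone path: norm over the size-1 last dim reduces (N, 1) -> (N,)
# without changing the non-negative values.
clone_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)

# Split path: zero-pad the squeezed grads up to the current point
# count. Here we pretend two points were just cloned (hypothetical),
# so their accumulated gradients are zero and they are never selected.
n_init_points = grads.shape[0] + 2
padded_grad = torch.zeros((n_init_points,))
padded_grad[: grads.shape[0]] = grads.squeeze()
split_mask = torch.where(padded_grad >= grad_threshold, True, False)

print(clone_mask.tolist())  # [False, True, False, True]
print(split_mask.tolist())  # [False, True, False, True, False, False]
```

So on the original points the two masks agree; the padding only guards the freshly added entries.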


It's confusing that they use two different representations for the same functionality, but anyway, thx.