Difference between "padded_grad" and "torch.norm(grads, dim=-1)" when performing densification
NeutrinoLiu opened this issue · comments
Hi, when I compare the conditions for split densification and clone densification, I find that the definition of a "too large" gradient differs slightly between the two.
For split, the mask is generated by
gaussian-splatting/scene/gaussian_model.py
Line 354 in 472689c
while for clone, the mask is generated by
gaussian-splatting/scene/gaussian_model.py
Line 376 in 472689c
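For reference, the two expressions look roughly like this (paraphrased from memory of that commit and wrapped into a runnable sketch with made-up inputs; the names `grads`, `grad_threshold`, and `n_init_points` follow the repo, the rest is illustrative):

```python
import torch

grads = torch.rand(100, 1)        # accumulated per-point gradient norms, shape (N, 1)
n_init_points = grads.shape[0]    # current number of points
grad_threshold = 0.0002

# Split path: zero-pad the gradients up to the current point count, then threshold.
padded_grad = torch.zeros((n_init_points,), device=grads.device)
padded_grad[:grads.shape[0]] = grads.squeeze()
split_mask = torch.where(padded_grad >= grad_threshold, True, False)

# Clone path: threshold the norm taken along the last dimension.
clone_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)
```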
I am not quite sure about the functionality of "padded_grad" here. Considering that the arguments passed to these two functions are identical, is there any difference between these two methods of filtering out large-gradient Gaussians?
Thx
Same question
=> Solved
The shape of the tensor grads is (N, 1), where N is the total number of points.
So torch.norm(grads, dim=-1) doesn't change the gradient values: the norm over a single-element dimension is just the absolute value, and the accumulated gradients are non-negative.
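A quick sanity check of that claim (a minimal sketch; the tensor here is made up):

```python
import torch

grads = torch.rand(5, 1)  # (N, 1), non-negative like the accumulated norms

# The norm over a single-element last dimension is just the absolute value,
# so for non-negative gradients it matches grads.squeeze() exactly.
assert torch.allclose(torch.norm(grads, dim=-1), grads.squeeze())
```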
Confusing that they use two different representations for the same functionality, but anyway, thx.
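For completeness: the zero-padding seems to matter because `densify_and_clone` runs before `densify_and_split` in `densify_and_prune` and appends new points, so `grads` (computed before cloning) can be shorter than the current point count. A minimal sketch assuming that ordering (the sizes here are invented):

```python
import torch

grads = torch.rand(100, 1)   # gradients gathered before cloning
n_after_clone = 120          # point count after clone appended 20 points
grad_threshold = 0.0002

# Padding with zeros aligns the mask with the grown point list; the 20
# freshly cloned points get gradient 0 and are never split this round.
padded_grad = torch.zeros((n_after_clone,), device=grads.device)
padded_grad[:grads.shape[0]] = grads.squeeze()
split_mask = padded_grad >= grad_threshold
assert split_mask.shape[0] == n_after_clone
assert not split_mask[grads.shape[0]:].any()
```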