wbhu / Tri-MipRF

Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields, ICCV'23 (Oral, Best Paper Finalist)

Home Page: https://wbhu.github.io/projects/Tri-MipRF


Radius computation when normalizing ray directions

MrMois opened this issue · comments

Hello, I got a question regarding the ball radius.

In camera.py the ray directions are normalized, so the t values (and distances) are euclidean. But the radii are computed on the image plane with respect to the unnormalized directions. Shouldn't the radii also be scaled by 1 / ||directions|| before the directions are normalized, so that they correspond to the unit sphere?

Greets!
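To make the question concrete, here is a minimal sketch of a mip-NeRF-style ray/radius generation, which Tri-MipRF's camera.py follows in spirit (the function name and numbers here are illustrative, not the repo's API). The radii come from the pixel spacing on the z = 1 plane, i.e. in unnormalized-direction units, while the directions are normalized afterwards:

```python
import numpy as np

# Hypothetical sketch of mip-NeRF-style per-pixel radii; names and numbers
# are illustrative, not copied from Tri-MipRF's camera.py.
def get_rays_with_radii(H, W, focal):
    # Pixel-centre directions on the virtual image plane z = 1 (camera frame).
    i, j = np.meshgrid(np.arange(W, dtype=np.float64),
                       np.arange(H, dtype=np.float64), indexing="xy")
    directions = np.stack(
        [(i - W / 2 + 0.5) / focal,
         (j - H / 2 + 0.5) / focal,
         np.ones_like(i)], axis=-1)
    # Footprint radius from the spacing of neighbouring pixels on that plane;
    # this is measured in the UNNORMALIZED direction units (plane at z = 1).
    dx = np.linalg.norm(directions[:, 1:] - directions[:, :-1], axis=-1)
    dx = np.concatenate([dx, dx[:, -2:-1]], axis=1)
    radii = dx * 2.0 / np.sqrt(12.0)  # disc radius matching a pixel's variance
    # Normalizing afterwards makes t an euclidean distance, while the radii
    # stay defined on the z = 1 plane -- which is exactly the concern raised.
    norms = np.linalg.norm(directions, axis=-1, keepdims=True)
    return directions / norms, radii

dirs, radii = get_rays_with_radii(4, 4, focal=100.0)
```

With these assumptions every radius equals (1/focal) * 2 / sqrt(12), independent of the later normalization of the directions.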

I think that is done in compute_ball_radii inside trimipRF.py. The radii are scaled depending on the angle to the principal camera ray.
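For readers who don't want to dig through trimipRF.py, the geometry of that correction can be sketched as follows. This is my reading of what compute_ball_radii does, not a verbatim copy: a pixel of radius r on the z = 1 plane, viewed at angle theta from the principal axis (with cos = cos(theta)), defines a sampling sphere whose radius also scales with the euclidean distance along the ray:

```python
import numpy as np

# Hedged sketch of the angle-dependent ball-radius correction; the formula is
# reconstructed from the Tri-MipRF paper's geometry, not quoted from the repo.
def ball_radii(distance, radii, cos):
    # Offset of the pixel disc's near edge relative to the ray, on z = 1.
    tmp = np.sqrt(1.0 / cos**2 - 1.0) - radii
    # Sphere radius at the given euclidean distance along the (normalized) ray.
    return distance * radii * cos / np.sqrt(tmp**2 + 1.0)
```

Under this reading, the correction both converts the plane-based radius into euclidean units and shrinks it for off-axis rays (cos < 1), which is why the 1 / ||directions|| scaling from the question is not needed earlier in camera.py.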


I have the same question; looking forward to a reply!

The radii are computed on a virtual image plane at depth 1 (z-coordinate) with respect to the camera frame. Whether or not the rays are normalized, this unit depth fixes the pixel coordinates, and hence the radii, on that plane.
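A small numeric check of this point (hypothetical focal length): the pixel spacing on the z = 1 plane is set by the plane alone, and normalizing a direction only re-parameterizes the ray without moving where it crosses that plane:

```python
import numpy as np

# Illustration with a hypothetical focal length: pixel centres live on the
# virtual plane z = 1 in the camera frame, so their spacing -- and any
# footprint radius derived from it -- is fixed by that plane, independent of
# whether the ray directions are normalized afterwards.
focal = 100.0
p0 = np.array([0.0, 0.0, 1.0])          # principal pixel centre on z = 1
p1 = np.array([1.0 / focal, 0.0, 1.0])  # its horizontal neighbour
pixel_spacing = np.linalg.norm(p1 - p0)  # = 1 / focal, set by the plane alone

# Normalizing the direction only rescales t (now an euclidean distance);
# the ray still crosses the z = 1 plane at the same point p1.
d1 = p1 / np.linalg.norm(p1)
t_hit = np.linalg.norm(p1)  # euclidean distance at which the ray reaches z = 1
```

So the "unit depth" in the answer is what pins down the pixel coordinates, regardless of the direction normalization done later in camera.py.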