wbhu / Tri-MipRF

Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields, ICCV'23 (Oral, Best Paper Finalist)

Home Page: https://wbhu.github.io/projects/Tri-MipRF

Difference between Zip-NeRF and Tri-MipRF?

YJ-142150 opened this issue

Great work!
Could you explain more about "Zip-NeRF introduces a multi-sampling-based method to address the same problem, efficient anti-aliasing, while our method belongs to the pre-filtering-based method."?
Tri-MipRF seems to be much faster. Is its PSNR also higher than Zip-NeRF's?

Thanks for your interest. We didn't compare Tri-MipRF with Zip-NeRF in our paper because they are concurrent works. From my point of view, both works try to address the efficiency issue of anti-aliasing NeRF, but Zip-NeRF adopts a multi-sampling-based strategy while our method is a pre-filtering-based one. For PSNR, we can only compare results on the Blender dataset at this point.
[Image: Tab. 5 from our paper]

[Image: Tab. 4 from the Zip-NeRF paper]
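
To make the multi-sampling vs. pre-filtering distinction concrete, here is a minimal, hypothetical 1D sketch (not the authors' code, and not the actual 3D cone casting of either paper; all names such as `field`, `multisample`, `build_mips`, and `prefiltered_query` are made up for illustration). The multi-sampling idea averages many jittered queries of the underlying field over a pixel's footprint, while the pre-filtering idea builds a mip-like pyramid once and answers each footprint with a single lookup at the matching level of detail.

```python
import numpy as np

# Toy 1D "scene": a high-frequency signal standing in for a radiance field.
def field(x):
    return np.sin(40.0 * x) + 0.5 * np.sin(3.0 * x)

# --- Multi-sampling idea: average many jittered queries over the footprint ---
def multisample(center, footprint, n=16, rng=np.random.default_rng(0)):
    # Jittered samples spread across the pixel footprint, then averaged.
    xs = center + (rng.random(n) - 0.5) * footprint
    return field(xs).mean()

# --- Pre-filtering idea: build a mip pyramid once, then query it with one lookup ---
def build_mips(levels=8, resolution=4096):
    # Level 0 stores the signal at full resolution; each coarser level halves the
    # resolution by averaging, approximating a low-pass filtered version of the signal.
    xs = np.linspace(0.0, 1.0, resolution, endpoint=False)
    mips = [field(xs)]
    for _ in range(levels - 1):
        prev = mips[-1]
        mips.append(0.5 * (prev[0::2] + prev[1::2]))
    return mips

def prefiltered_query(mips, center, footprint):
    # Pick the mip level whose texel size roughly matches the footprint, then do one lookup.
    base_res = len(mips[0])
    level = int(np.clip(np.log2(footprint * base_res), 0, len(mips) - 1))
    res = len(mips[level])
    return mips[level][int(center * res) % res]

if __name__ == "__main__":
    mips = build_mips()
    center, footprint = 0.3, 0.05  # a "far away" pixel covering a wide footprint
    print("multi-sampling :", multisample(center, footprint))        # many field() calls
    print("pre-filtering  :", prefiltered_query(mips, center, footprint))  # one lookup
```

Roughly speaking, both estimates approximate the footprint-averaged signal, but the pre-filtered version answers each query with a single lookup instead of many, which is the intuition behind the efficiency of a pre-filtering-based representation.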

Wow! It seems Tri-MipRF is better than Zip-NeRF on the Blender synthetic dataset!
Did you try it on the nerf-360 dataset, too?

Can Tri-MipRF support 360° unbounded scenes? The experimental datasets in the paper are objects with masks. Thanks.

Currently, it does not support unbounded 360° scenes.