googleinterns / IBRNet

The evaluated performance is slightly lower than reported in the paper

fomalhautb opened this issue

I downloaded the pretrained weights and used the eval_deepvoxels.sh script to evaluate the model, but the evaluated performance is slightly lower than reported in the paper:

------cube-------
final coarse psnr: 32.93767583847046, final fine psnr: 32.02404493331909
final coarse ssim: 0.9823936659097672, final fine ssim: 0.9840710365772247
final coarse lpips: 0.019328504391014575, final fine lpips: 0.019714539949782194

------vase-------
final coarse psnr: 34.84811542510986, final fine psnr: 35.24699348449707
final coarse ssim: 0.9875932204723358, final fine ssim: 0.9840710365772247
final coarse lpips: 0.01578717289492488, final fine lpips: 0.015975559004582463

------armchair-------
final coarse psnr: 39.09828433990479, final fine psnr: 38.42974273681641
final coarse ssim: 0.9945194631814956, final fine ssim: 0.9945065212249756
final coarse lpips: 0.027687973510473966, final fine lpips: 0.02769788108766079

------greek-------
final coarse psnr: 38.57263313293457, final fine psnr: 38.17310089111328
final coarse ssim: 0.984335949420929, final fine ssim: 0.9856698685884475
final coarse lpips: 0.024405473172664643, final fine lpips: 0.022999074896797537

Averaged over either the coarse or the fine model, the scores are about 3-5% lower than those reported in the paper.
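
For reference, a quick sketch (plain Python, with the numbers copied from the logs above) of the per-model averages behind that estimate:

```python
# PSNR values per scene, in the order: cube, vase, armchair, greek
coarse_psnr = [32.938, 34.848, 39.098, 38.573]
fine_psnr = [32.024, 35.247, 38.430, 38.173]

print(sum(coarse_psnr) / len(coarse_psnr))  # ~36.36 dB average (coarse)
print(sum(fine_psnr) / len(fine_psnr))      # ~35.97 dB average (fine)
```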

Hi, sorry about the confusion. If you set `white_bkgd = True`, the performance should be fairly close to the numbers reported in the paper. Thank you for catching this; we will fix the issue in the code.
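
For context, a minimal sketch of what a `white_bkgd` flag typically does in NeRF-style volume renderers; the helper name and tensor shapes below are illustrative assumptions, not the exact IBRNet code:

```python
import torch

def composite_onto_background(rgb, acc, white_bkgd=True):
    """Hypothetical helper: blend volume-rendered color with a background.

    rgb: (..., 3) color accumulated along each ray
    acc: (..., 1) accumulated opacity (alpha) along each ray
    """
    if white_bkgd:
        # Fill the transparent remainder (1 - alpha) of each ray with white,
        # matching the white backgrounds of the DeepVoxels renderings.
        rgb = rgb + (1.0 - acc)
    return rgb
```

Evaluating against white-background ground truth with the flag off leaves the empty regions black, which would depress PSNR/SSIM fairly uniformly across scenes, consistent with the gap reported above.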