google / nerfies

This is the code for Deformable Neural Radiance Fields, a.k.a. Nerfies.

Home Page: https://nerfies.github.io


possible error in the warp code

xvdp opened this issue · comments

commented

Repro:

  • Capture a head held as still as possible - not your own, but a subject against a wall looking at a fixed point.
    In my test the starting point of the capture was aligned not to the face center but to the right ear, then an arc and an orbit.

  • Follow the example colabs. I tested twice: once in a colab without changing parameters, once in Jupyter with a scale factor of 2.

There should be no 'warp' in any of those captures. And yet there is, causing the test cameras to render incorrectly - unless the 'warp' is introduced from the capture camera at the smallest rotation from the test camera one is trying to render.

I noticed that the scenes are scaled and reoriented; I traced through that code and found no obvious fault in the rotations.

I see mediapipe is used, and while mediapipe is great, it returns a probabilistic fit, not pixel-accurate results - but I didn't pursue further where the error in the warping code comes from. Could it be mediapipe?

I can send rendered videos or even the scene if necessary - but not on an open forum.
thanks
xvdp

If I'm wrong, please tell me what is wrong with my process.

Hi, I'm not sure I completely understand the question. Are you using COLMAP to do the camera registration? People tend to move, and even if you try really hard to stand still you're still likely moving. However, if you're standing against a textured background (texture is really key here) you should be able to get a pretty good calibration.

We don't use mediapipe for calibration, only to compute an optional foreground mask that can help make sure that the calibration is only done based on the background.

commented

@keunhong

I got around to running the exact same test with one simple change: ModelConfig.use_warp = False. Now it renders perfectly; as I said, the subject scanned was very still with a fixed head. But that sort of defeats the purpose of your paper...
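A minimal sketch of the one-flag workaround described above. ModelConfig here is a stand-in dataclass for illustration only; the real nerfies.configs.ModelConfig has many more fields.

```python
from dataclasses import dataclass

# Stand-in for nerfies.configs.ModelConfig (illustrative only; the real
# class has many more fields). The single change made in the test was
# flipping this one flag.
@dataclass
class ModelConfig:
    use_warp: bool = True  # default: train and apply the deformation field

config = ModelConfig(use_warp=False)  # render with the warp field disabled
print(config.use_warp)  # False
```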

COLMAP feature matching was OK, too. The only oddity in the capture is that I started it at the subject's left, not center (see the COLMAP image). But that should make no difference, as you pass origins and directions. It looks like the warp field is adding a shift towards camera_id 0; from reading the code I can't find differences between what eval and train expect, or bugs in the rotation code, that would cause that.
Unless the default options passed aren't the ones intended to reproduce the paper, there is an error in the implementation.
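One way to sanity-check that claim numerically, assuming access to the learned warp as a function from points to warped points (warp_fn and the sampling range here are illustrative stand-ins, not part of the Nerfies API): on a genuinely static capture the warp should be close to the identity, so its mean displacement over sampled points should be near zero, while a systematic shift toward one camera would show up as a large value.

```python
import numpy as np

def mean_warp_displacement(warp_fn, n=1000, seed=0):
    """Mean distance that warp_fn moves random points in [-1, 1]^3.

    warp_fn is assumed to map an (n, 3) array of points to an (n, 3)
    array of warped points (e.g. the model's warp field evaluated at one
    fixed latent/metadata code). For a static subject this should be ~0.
    """
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n, 3))
    return float(np.linalg.norm(warp_fn(pts) - pts, axis=-1).mean())

# Identity warp: no displacement at all.
print(mean_warp_displacement(lambda p: p))  # 0.0
```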

Try running the published notebooks on an unmovable object with the default settings - and you should hit that bug.

(attached image: xbpoint_camerapath)