Caoang327 / HexPlane

Official code for the CVPR 2023 paper, HexPlane: A Fast Representation for Dynamic Scenes



About the results and code for the iPhone dataset

RedemptYourself opened this issue

Can you share the results and code for the iPhone dataset?
It's very important for experiments on real monocular data, thanks!

I tried to implement a dataloader for the HyperNeRF dataset myself, but the results seem to differ considerably from the paper's results on the iPhone dataset, which is also monocular. The cause may be the parameter and loss settings. Could you share the exact experimental results and parameter settings for the iPhone dataset? Many thanks.

Hi. Working on monocular videos is an important direction, and HexPlane doesn't work very well under monocular settings, as it is an extremely ill-posed problem. The iPhone dataset has depth supervision while HyperNeRF doesn't provide depth. I tried HexPlane on the HyperNeRF dataset and it doesn't work very well; I'm not sure whether that is because of the dataloader or the monocular setting. It would be really exciting to extend HexPlane to monocular settings. Could you message me your email, and I could share my (potentially wrong) dataloader with you?

Grateful for your sharing! My email is 809207013@qq.com. Thanks a lot.

Hi Cao,

Thanks for sharing.

I am very curious about HexPlane's results in the monocular setting.

In Figure 7 of your main paper, you showed results for two video sequences (mochi and paper-windmill) from the iPhone dataset, but it seems that the config and code are not provided for this dataset. I have two questions related to it.

  1. Did you use depth supervision or mask supervision for it?
  2. Is the model configuration quite different for the iPhone dataset?

Hi Tianyuan:

  1. Yes, I used depth supervision, the same as in the DyCheck paper.
  2. It is not that different, apart from the depth supervision. The major change is that I use a relatively high TV_t_s_ratio, like 100-500, which results in a very high TV loss along the time axis (see the sketch after this list).
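
For concreteness, here is a minimal sketch of the two changes described above: an L1 depth loss in the style used by DyCheck, and a TV regularizer whose temporal term is weighted far above the spatial term. This is not code from the HexPlane repo; the tensor shapes and the names `depth_loss`, `plane_tv_loss`, and `tv_t_s_ratio` are illustrative assumptions.

```python
import torch

def depth_loss(pred_depth: torch.Tensor, gt_depth: torch.Tensor) -> torch.Tensor:
    # Hypothetical L1 depth supervision, applied only to rays with valid
    # ground-truth depth (e.g. the LiDAR depths in the DyCheck iPhone data).
    valid = gt_depth > 0
    return (pred_depth[valid] - gt_depth[valid]).abs().mean()

def plane_tv_loss(plane: torch.Tensor, tv_t_s_ratio: float = 100.0) -> torch.Tensor:
    # `plane` is assumed to be a space-time feature plane of shape
    # (1, C, T, S): channels x time x one spatial axis.
    tv_time = (plane[:, :, 1:, :] - plane[:, :, :-1, :]).pow(2).mean()
    tv_space = (plane[:, :, :, 1:] - plane[:, :, :, :-1]).pow(2).mean()
    # With a ratio of 100-500, temporal smoothness dominates the penalty,
    # matching the "very high TV loss along the time axis" described above.
    return tv_t_s_ratio * tv_time + tv_space
```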

The reason I haven't put the monocular video code on GitHub is that HexPlane currently works well on some scenes but badly on others. In general, HexPlane doesn't have a deformation field, so its results may not be that great for monocular videos. But I also found duplicate/diverging objects in the test views, which is a little bit strange. So I suspected there might be some problem with my camera/depth code, and I planned to revisit it when I have time. Since I am not sure the code is correct, I haven't released it.

Sorry for the inconvenience.

Thanks Ang Cao.

Hi Cao,

Thanks for sharing.

I have tried adding a deformation net to vanilla TensoRF, but the result cannot be optimized well; it can't even learn the dynamics.
I wonder whether you also tried a similar setup. I think the cause is TensoRF's vector-matrix products combined with the deformed coordinate mapping: the deformation struggles to map points to the correct TensoRF indices, which makes optimization hard. Also, about the duplicate/diverging objects you mentioned: do you mean that, because of the HyperNeRF/DyCheck experimental setup, the renderings are correct only from the train views while the geometry breaks from the test views?

@Caoang327 @RedemptYourself Hi, thank you for your excellent work. Could you also share the iPhone dataloader to me? And my email is angelou@gwmail.gwu.edu
Thank you very much.

If the deformation is a neural network and the canonical space is an explicit representation, it should be fine. You can refer to https://github.com/hustvl/TiNeuVox.
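
A minimal sketch of that design, assuming a small MLP deformation field that warps each point at time t into a canonical frame stored as an explicit feature voxel grid (TiNeuVox-style). None of this is code from HexPlane or TiNeuVox; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformedExplicitField(nn.Module):
    """Neural deformation field + explicit canonical grid (hypothetical)."""

    def __init__(self, grid_res: int = 64, feat_dim: int = 8):
        super().__init__()
        # Explicit canonical representation: a dense feature voxel grid.
        self.grid = nn.Parameter(
            0.1 * torch.randn(1, feat_dim, grid_res, grid_res, grid_res)
        )
        # Neural deformation field: (x, y, z, t) -> offset in canonical space.
        self.deform = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) in [-1, 1]; t: (N, 1) normalized time.
        offset = self.deform(torch.cat([xyz, t], dim=-1))
        canonical = (xyz + offset).clamp(-1.0, 1.0)
        # Trilinear lookup into the explicit grid; 5D grid_sample expects
        # sample coordinates shaped (1, D_out, H_out, W_out, 3).
        coords = canonical.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(self.grid, coords, align_corners=True)
        return feats.view(feats.shape[1], -1).t()  # (N, feat_dim)
```

Keeping the canonical space explicit means gradients reach the grid directly through trilinear interpolation, so the MLP only has to learn a smooth warp rather than the whole scene.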

For the iPhone dataset, we got correct results on the training views but wrong results on the test set. I don't know why.

Sent.

@Caoang327 @RedemptYourself
Hi,
Thank you for your excellent work. I am really curious about the results on the iPhone dataset. Could you also share the iPhone dataloader with me? My email is
zhangyawen818@gmail.com
Thanks a lot.

Hello,
Thank you very much for your great effort!
I am also very curious about the results for the iPhone dataset. Could you also share the iPhone dataloader with me?
My email is
jm.park@kaist.ac.kr
Thank you.

Hi:

The code is here: https://drive.google.com/file/d/1hkSVyy05f1pP4sDJSjqNFUobpkSVGT_g/view?usp=sharing. Sorry for the late response (I haven't checked GitHub and email for several weeks).