lingjie0206 / Neural_Actor_Main_Code

Official repository of "Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control" (SIGGRAPH Asia 2021)


Rendered result is wrong using pretrained model

fantasyfw opened this issue

I got the correct predicted texture maps, but the rendered result seems wrong. I have torch 1.6.0 and torchvision 0.7.0 installed, as mentioned in the README.

[screenshot: incorrect rendered result]

I have the same issue too.

PyTorch version: 1.6.0
CUDA used to build PyTorch: 10.2
OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 5.5.0
Libc version: glibc-2.17
CUDA runtime version: 10.2.89
GPU models and configuration: 
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB

[rendered frame 000000]

Hi, can you share your running script?

I am basically following the steps of the rendering pipeline.
This is the output of "STEP 3: generate video given camera poses".

Can you print the environment? Like this:

```
>>> import torch; print(torch.__version__, torch.version.cuda)
1.6.0 10.2
>>> import torchvision; print(torchvision.__version__)
0.7.0
```

```
>>> import torch; print(torch.__version__, torch.version.cuda)
1.6.0 10.2
>>> import torchvision; print(torchvision.__version__)
0.7.0
```
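As a side note, version strings like these can be compared programmatically instead of by eye. A minimal sketch, assuming the expected versions are the ones reported in this thread (the helper strips a local `+commit` suffix, as seen in some open3d builds, before comparing):

```python
# Sketch: compare installed version strings against the ones the README
# expects. The pairs below are assumptions taken from this thread, not
# queried from a live environment.
def version_matches(installed: str, expected: str) -> bool:
    """True if `installed` (minus any local '+commit' suffix) starts with `expected`."""
    return installed.split("+")[0].startswith(expected)

checks = {
    "torch": ("1.6.0", "1.6.0"),
    "torchvision": ("0.7.0", "0.7.0"),
    "cuda": ("10.2", "10.2"),
}
for name, (installed, expected) in checks.items():
    status = "OK" if version_matches(installed, expected) else "MISMATCH"
    print(f"{name}: {installed} (expected {expected}) -> {status}")
```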

Hi @fantasyfw , @BadourAlBahar ,
Let me try to help you. I just checked it in my local version, and STEP 3 of the rendering pipeline is working fine:
[screenshots of rendered frame 000000, showing correct output]

So, here is some additional info about the source code:

$ git rev-parse HEAD
8868e89ced76339fffe9937c4bb7b144191cbfad

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        3rdparty/

nothing added to commit but untracked files present (use "git add" to track)

In 3rdparty/ I have only the apex and opendr sources.

You can also double check the models:

$ md5sum workplace/nerf_lan.pt 
73ebee1097edda2580ba8e2f67a6da48  workplace/nerf_lan.pt

$ md5sum workplace/vid2vid_lan.pt 
534785c96cfed27a0a6d94e2f57a7f39  workplace/vid2vid_lan.pt
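In the same spirit, here is a hedged sketch that verifies the checkpoints without eyeballing hashes. The paths and expected MD5s are the ones posted above; files that are not present are simply reported as missing:

```python
# Sketch: verify downloaded checkpoints against the MD5 sums posted in
# this thread. Paths follow the repo's rendering-pipeline layout.
import hashlib
import pathlib

EXPECTED = {
    "workplace/nerf_lan.pt": "73ebee1097edda2580ba8e2f67a6da48",
    "workplace/vid2vid_lan.pt": "534785c96cfed27a0a6d94e2f57a7f39",
}

def md5_of(path):
    """Stream a file through MD5 in 1 MiB chunks and return the hex digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for path, want in EXPECTED.items():
    if pathlib.Path(path).exists():
        got = md5_of(path)
        print(path, "OK" if got == want else f"MISMATCH ({got})")
    else:
        print(path, "missing")
```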

The sample dataset:

$ ls workplace/sample
canonical.obj  images  intrinsics.txt  normal  output   seg_maps  skinning_weight.txt  tex  transform  transform_tpose.json  uvmapping.obj

Here, `images -> normal` and `seg_maps -> normal`.

Finally, if nothing is wrong up to here, I would suspect the conda environment. You can check mine:
package-list.txt

What do you mean by "images -> normal and seg_maps -> normal"?

$ ls workplace/sample
canonical.obj   output               transform
intrinsics.txt  skinning_weight.txt  transform_tpose.json
normal          tex                  uvmapping.obj

I do have open3d version 0.9.0 following this solution. Could this be the issue?

I also encountered the same issue. I use PyTorch 1.6.0, CUDA 10.1, and torchvision 0.7.0, and my sample dataset has the same structure as @BadourAlBahar's.

What do you mean by "images -> normal and seg_maps -> normal"?

$ ls workplace/sample
canonical.obj   output               transform
intrinsics.txt  skinning_weight.txt  transform_tpose.json
normal          tex                  uvmapping.obj

He meant symbolic links, as described here: https://github.com/lingjie0206/Neural_Actor_Main_Code/blob/master/docs/rendering_pipeline.md#step-2-predict-texture-maps
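For reference, the links can be recreated with `ln -s`. The sketch below demonstrates the layout in a throwaway directory; in practice you would run the two `ln` lines inside `workplace/sample`:

```shell
# Demonstrate the images -> normal and seg_maps -> normal symlinks
# in a throwaway directory (use workplace/sample in practice).
tmp=$(mktemp -d)
mkdir -p "$tmp/sample/normal"
cd "$tmp/sample"
ln -sfn normal images      # images -> normal
ln -sfn normal seg_maps    # seg_maps -> normal
ls -l images seg_maps
```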

Hi @BadourAlBahar , @euneestella

I do have open3d version 0.9.0 following this solution. Could this be the issue?

Maybe this could be the issue. From https://github.com/lingjie0206/Neural_Actor_Main_Code/files/9475584/package-list.txt, I have open3d=0.10.0.0=pypi_0 instead.

If open3d is the issue, you should be able to quickly check it by saving the intermediate normal maps and predicted texture maps.
They should look similar to the training data for this subject.

I got the same issue with open3d=0.10.0.0.
The result picture is almost the same.

Hi, I would suggest investigating it by following the path from the normal maps to the generated texture maps.

I realized now that I had an issue in the past where the normal maps were completely black. This issue was related to the version of opendr. Currently, I have two environments, one only for generating the normal maps with opendr==0.77, and another one (as in the previous comments) for the texture+nerf parts.
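A quick sketch for catching that "completely black normal maps" failure mode early, using only numpy; the thresholds are arbitrary assumptions, and image loading is left to whatever loader yields an array:

```python
# Sketch: flag an (almost) all-black image, the symptom of the opendr
# version issue described above. Thresholds are arbitrary assumptions.
import numpy as np

def looks_black(img, threshold=1e-3):
    """True if the array has essentially no signal (flat and near zero)."""
    arr = np.asarray(img, dtype=np.float64)
    return float(arr.max() - arr.min()) < threshold and float(arr.mean()) < threshold

black = np.zeros((8, 8, 3))
textured = np.linspace(0.0, 1.0, 8 * 8 * 3).reshape(8, 8, 3)
print(looks_black(black), looks_black(textured))  # prints: True False
```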

Hi, I checked my environment configuration, generated normal maps, and predicted texture maps.

  • I have open3d 0.10.0.0 and opendr 0.77.
  • Here is one sample of the generated normal maps and texture maps. Both seem to be generated correctly.
    [frame 000394]
    [frame 000395]
    I removed the cloned repo and started again, but the rendered result is still the same.

Can you try using this "transform_tpose.json" instead? I think the issue is due to an incorrect transformation: the previous file was not transforming the T-pose to an X-pose.
transform_tpose.json.zip
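If it helps to compare the old and the re-uploaded file, here is a hedged sketch; the `.old.json` filename is hypothetical (rename your backup accordingly), and the summary inspects only generic JSON structure, since the exact schema of `transform_tpose.json` is not assumed here:

```python
# Sketch: summarize the top level of a transform JSON so two versions can
# be diffed quickly. Only generic JSON structure is inspected.
import json
import os

def summarize(path):
    """Map each top-level key to its container size (or scalar value)."""
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, dict):
        return {k: len(v) if isinstance(v, (list, dict)) else v
                for k, v in data.items()}
    return {"top_level_items": len(data)}

for path in ("transform_tpose.old.json", "transform_tpose.json"):
    print(path, summarize(path) if os.path.exists(path) else "not found")
```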

Hi,

I encountered the same problem.

I also replaced the transform_tpose.json,

but I could not generate the normal output.

Any suggestions?

My environment is:

PyTorch 1.6.0, CUDA 10.2

Thanks!!!

@weiyichang Hi, can you also add a screenshot of your generated output? Do they look like the same issue as above?

Our original checkpoints were trained under PyTorch 1.4. Are you also able to check if you can generate properly using 1.4?

I've replaced the transform_tpose.json, but the issue still occurs. Does anyone have more ideas about this issue? I would sincerely appreciate it if you could tell me the solution! Thank you~

I have also experienced this situation. Is there a solution?

That's weird. I don't know what the problem is. It may be due to some version issues. I haven't got any similar problems in my environment.

I met the same problem. The texture looks fine in step 2, but the RGB and normal outputs in step 3 are the same as @fantasyfw's.

tex.mp4
full_2023-08-14.15-33-39.mp4

My environment is:
Python 3.8.17, torch 1.6.0, CUDA 10.2, open3d 0.14.1+a95136a2f, opendr 0.77; other info is the same as @dluvizon's. The full list of my environment is:
myenv.txt

No error is reported when running the step 3 script:
step3.log

Any suggestions? Thanks~