skhu101 / SHERF

Code for our ICCV'2023 paper "SHERF: Generalizable Human NeRF from a Single Image"


How to get a high-resolution image and render different motions

bachongyou opened this issue · comments

Hello! Thank you for sharing your excellent work.

I have 3 questions:

  1. When I was testing the RenderPeople dataset, I found that the code generates images at a resolution of 64×64, which appear too blurry. I also noticed that train.py takes two parameters, `neural_rendering_resolution_initial` and `neural_rendering_resolution_final`, which seem to be related to image size. Is modifying `neural_rendering_resolution_initial` or `neural_rendering_resolution_final` sufficient to obtain 512×512 images?

  2. If I want to obtain rendered images of characters in different poses, as shown on the project homepage, is it sufficient to modify `input_data['params']['poses']` at line 334 of training_loop.py, or are other modifications required?

  3. Additionally, I noticed that the model's input includes an instance ID. If I use a full-body photo that does not appear in the dataset, can the model still produce accurate SMPL-rendered images?

Looking forward to your response! Thank you again!

Hi, thanks for your interest in our work.
1). In our code, `neural_rendering_resolution_initial` is set to 512 for the RenderPeople, THuman, and ZJU-MoCap datasets. We do not set `neural_rendering_resolution_final` because we do not use a super-resolution module. If you use our dataloader, you can request the resolution you want by setting the image_scaling factor. Could you check the image size of the dataloader outputs?
2). If you want to render characters in different poses, a quick way is to change the target pose parameters (theta in SMPL) in the dataloader file.
3). If you want to use a full-body photo that does not appear in the dataset, you can first run a single-view SMPL estimation method such as CLIFF to estimate the SMPL and camera parameters, and then use the photo, its SMPL and camera parameters, and a sequence of SMPL motion parameters (theta) to animate the character.
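Regarding point 1), a minimal sketch of the sanity check suggested there: inspect the image actually produced by the dataloader before touching any training options. The batch key `"image"` and the HWC layout are assumptions for illustration; adapt them to the keys SHERF's dataloader actually returns.

```python
import numpy as np

def check_batch_resolution(batch):
    """Return (height, width) of the image in a dataloader batch.

    Assumes the image is stored under batch["image"] with shape (..., H, W, C);
    both the key and the layout are hypothetical and should be matched to
    the real dataloader output.
    """
    img = np.asarray(batch["image"])
    h, w = img.shape[-3], img.shape[-2]
    return h, w

# Toy stand-in for one dataloader sample rendered at 512x512:
fake_batch = {"image": np.zeros((512, 512, 3), dtype=np.float32)}
print(check_batch_resolution(fake_batch))  # (512, 512)
```

If this prints 64×64 rather than 512×512, the image_scaling factor (not the resolution flags) is the setting to adjust.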
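Regarding point 2), a minimal sketch of the repose step: overwrite the SMPL body pose (theta) in the dict the dataloader feeds to the model. The nesting `input_data['params']['poses']` follows the question's reference to training_loop.py; the 72-dimensional axis-angle layout is the standard SMPL convention, but the exact shape expected by SHERF is an assumption here.

```python
import numpy as np

def set_target_pose(input_data, new_theta):
    """Replace the SMPL pose parameters in a data dict with a target pose.

    new_theta: 72 axis-angle values (24 joints x 3), the standard SMPL layout.
    The dict structure input_data["params"]["poses"] mirrors the code path
    mentioned in the question; verify it against the actual training loop.
    """
    theta = np.asarray(new_theta, dtype=np.float32).reshape(1, 72)
    input_data["params"]["poses"] = theta
    return input_data

# Example: replace whatever pose the sample carried with a zero (rest) pose.
sample = {"params": {"poses": np.random.randn(1, 72).astype(np.float32)}}
sample = set_target_pose(sample, np.zeros(72))
print(sample["params"]["poses"].shape)  # (1, 72)
```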
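Regarding point 3), a hedged outline of the novel-photo pipeline described there. CLIFF is not actually called: `estimate_smpl` and `render_frame` are placeholders standing in for (1) a single-view SMPL/camera estimator and (2) a SHERF forward pass conditioned on the source image; only the overall control flow reflects the answer.

```python
import numpy as np

def estimate_smpl(photo):
    # Placeholder: a real pipeline would run CLIFF (or a similar
    # single-view estimator) here to recover SMPL and camera parameters.
    return {"betas": np.zeros(10), "theta": np.zeros(72), "cam": np.eye(4)}

def render_frame(photo, smpl_params, theta):
    # Placeholder: a real pipeline would run SHERF conditioned on the
    # source photo, its SMPL/camera parameters, and the target pose theta.
    return np.zeros((512, 512, 3), dtype=np.float32)

def animate(photo, motion_thetas):
    """Animate a novel full-body photo with a sequence of SMPL poses."""
    smpl_params = estimate_smpl(photo)                 # step 1: SMPL + camera
    return [render_frame(photo, smpl_params, theta)    # step 2: render per pose
            for theta in motion_thetas]

motion = [np.zeros(72) for _ in range(4)]              # toy 4-frame motion clip
frames = animate(np.zeros((512, 512, 3)), motion)
print(len(frames))  # 4
```

The instance ID from question 3 does not appear in this sketch: for an unseen photo there is no dataset identity to look up, which is why the answer routes everything through estimated SMPL and camera parameters instead.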
If you have any further questions, please do not hesitate to let me know.