shunsukesaito / PIFu

This repository contains the code for the paper "PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization"

Home Page: https://shunsukesaito.github.io/PIFu/


Why use rendered images instead of original images?

LazerLikeFocus opened this issue · comments

I see that the paper converts an image into a 3D model, so the input image should be a real one (captured by a camera).

So why does PIFu train on rendered images?

```python
rndr_uv.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan)
rndr_uv.set_albedo(texture_image)
cv2.imwrite(os.path.join(out_path, 'RENDER', subject_name, '%d_%d_%02d.jpg' % (y, p, j)), 255.0 * out_all_f)
```

Why don't we just train on the original JPEG images? Can't we simply resize them?

Well, PIFu is trained on textured meshes, so you need to render them somehow: the rendering step is what produces paired (image, 3D geometry) training data from each scan. If, of course, you do have the raw camera images that were used to generate those meshes, you could train on the actual images instead.
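To illustrate why rendering yields many training pairs per scan: the filename pattern `'%d_%d_%02d.jpg' % (y, p, j)` in the snippet above suggests a sweep over yaw angle `y`, pitch `p`, and a lighting index `j`. Below is a hypothetical sketch of that enumeration loop; the function name, angle ranges, and light count are assumptions for illustration, not the repository's actual settings.

```python
import os

def render_filenames(subject_name, yaw_step=10, pitches=(0,), n_lights=1):
    """Enumerate output paths for a multi-view render sweep.

    Each textured scan is rendered from many (yaw, pitch, light)
    combinations, so a single mesh yields many training images.
    The parameter defaults here are illustrative assumptions.
    """
    names = []
    for y in range(0, 360, yaw_step):   # camera yaw around the subject
        for p in pitches:               # camera pitch
            for j in range(n_lights):   # lighting / environment index
                names.append(os.path.join(
                    'RENDER', subject_name, '%d_%d_%02d.jpg' % (y, p, j)))
    return names

# 4 yaw angles x 1 pitch x 1 light = 4 rendered views of one scan
views = render_filenames('subject_0001', yaw_step=90)
```

With a finer yaw step and several lighting conditions, a few hundred scans expand into tens of thousands of image/geometry pairs, which is the point of rendering rather than relying on whatever original photos exist.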