qianlim / CAPE

Official implementation of CVPR2020 paper "Learning to Dress 3D People in Generative Clothing" https://arxiv.org/abs/1907.13615

Demo can save the results as .obj, but it cannot display them

LiuXinqi12 opened this issue · comments

Hi, I can run your program and save the resulting models, but it doesn't seem to be able to visualize them: I only get a black background with no rendered mesh. It may be a problem with the mesh viewers in psbody; I have tried different versions of psbody, but the problem persists. More specifically, it seems the mesh viewers are not being initialized, but it is not clear how to solve this.

The failure occurs at:

```python
viewer = MeshViewers(shape=(1, 2), titlebar=titlebar)
```

Hi, it's weird as I cannot reproduce the error. Can those saved obj files be opened with e.g. Meshlab without problems? If so, can you try:

```python
from psbody.mesh import Mesh, MeshViewers

m1 = Mesh(filename=<path to a saved obj>)
m2 = Mesh(filename=<path to another obj>)

m1.show()  # this should pop up a single mesh viewer window

# the following should create a 1x2 mesh viewer and visualize the two loaded meshes
viewer = MeshViewers(shape=(1, 2), titlebar='test')
viewer[0][0].static_meshes = [m1]
viewer[0][1].static_meshes = [m2]
```

Do you get expected visualization out of it?

Thank you very much for your reply. The obj models are saved correctly, but following your suggestion I still get a black background and no rendered model. This seems to be caused by psbody itself: running the mesh viewer from the psbody project directly gives the same wrong result.

Then it really seems to be a problem of psbody.mesh itself, probably specific to your machine environment (I tested on both Ubuntu 18.04 and macOS, and both work). Please raise an issue on the psbody.mesh repo. For the live visualization in the CAPE demos, you can simply try the visualizer from trimesh or open3d; both should work.

OK, I've opened an issue on the psbody.mesh repo. I'll try a different visualization library. Thank you very much!

In addition, I would like to ask: the currently released code mainly uses pose and clothing type parameters as input. Is it possible to use RGB images directly as input to get the fitted body and clothing results? To achieve this, what I can think of is to first use SMPLify or HMR to estimate the parameters, and then use your model to get the result.

Right, this codebase is about the generative model, and what you described is essentially what we did in the image fitting experiment in the paper: we implemented SMPLify in TensorFlow (so that it works with this code), use SMPLify to get the unclothed body, use CAPE to produce the clothing offset layer, use a differentiable renderer to project the clothed body onto the image, and then jointly optimize {body pose, shape, clothing shape z} to minimize the silhouette loss.
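The joint fitting loop described above can be sketched schematically. This is a toy illustration, not the authors' code: `render_silhouette` stands in for the full SMPLify + CAPE + differentiable-renderer pipeline, replaced here by a simple smooth function so the loop runs end to end, and autodiff is replaced by finite differences (the real implementation uses TensorFlow's gradients):

```python
# Toy sketch of jointly optimizing {pose, shape, clothing z} against a
# silhouette loss. All functions here are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(16)  # stand-in for the observed image silhouette

def render_silhouette(pose, shape, z):
    # stand-in for: body model + CAPE clothing offsets + differentiable renderer
    theta = np.concatenate([pose, shape, z])
    W = np.linspace(0.1, 1.0, 16 * theta.size).reshape(16, theta.size)
    return W @ theta

def silhouette_loss(theta, sizes=(6, 4, 6)):
    pose, shape, z = np.split(theta, np.cumsum(sizes)[:-1])
    return np.sum((render_silhouette(pose, shape, z) - target) ** 2)

# joint gradient descent over all parameters (finite differences for simplicity)
theta, eps, lr = np.zeros(16), 1e-5, 1e-4
for _ in range(200):
    base = silhouette_loss(theta)
    grad = np.array([(silhouette_loss(theta + eps * np.eye(16)[i]) - base) / eps
                     for i in range(16)])
    theta -= lr * grad
```

The key design point is that pose, shape, and the clothing latent code z are concatenated into one parameter vector and updated together, so the clothing deformation can trade off against body shape during fitting.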

Unfortunately, our TF implementation of SMPLify is not optimized or cleaned up, so I have not included it in this repo.

(You can also take a look at my answer in another relevant issue.)

Okay, thank you very much for your reply! Looking forward to the release of CAPE in PyTorch.