huangyangyi / TeCH

[3DV 2024] Official repo of "TeCH: Text-guided Reconstruction of Lifelike Clothed Humans"

Home Page: https://huangyangyi.github.io/TeCH/

Question about model

fatbao55 opened this issue

Dear authors,

Thanks for the great work! I would like to check whether the approach generates results from text prompts only (text-to-3D, similar to DreamFusion), or whether it also takes an image as input (image-to-3D with text prompts, similar to Zero123/Magic123)?

Thanks for asking! TeCH takes both an image and a prompt for reconstruction, and the prompt is derived from the input image via a VQA model, so it is closer to Magic123.
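For anyone curious how an image-derived prompt can work in practice, below is a minimal sketch of the idea using a generic VQA model (BLIP from Hugging Face transformers). The model name, questions, and input path are illustrative assumptions, not TeCH's actual captioning code, which may differ.

```python
# Minimal sketch (not TeCH's code): derive a text prompt from an input image
# by asking a VQA model about the subject's appearance.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("input_person.png").convert("RGB")  # hypothetical input path

# Example questions about the person; answers are stitched into a prompt
# that a downstream image-to-3D reconstruction stage could consume.
questions = [
    "What is the person wearing?",
    "What color is the clothing?",
    "What is the person's hairstyle?",
]
answers = []
for question in questions:
    inputs = processor(image, question, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=20)
    answers.append(processor.decode(output[0], skip_special_tokens=True))

prompt = "a photo of a person, " + ", ".join(answers)
print(prompt)
```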

Thanks so much for the clarification! Is there an estimated date for the code release?

Thanks for your interest! We plan to release the code in October.

Can we test it on Colab?