ashawkey / InTeX

Interactive Text-to-Texture Synthesis via Unified Depth-aware Inpainting.

Home Page: https://me.kiui.moe/intex/

Request to update to SDXL, and some more questions

P2Oileen opened this issue

Thank you for your excellent work and for open-sourcing it, this is really helpful!
Do you intend to train an SDXL-based depth-aware inpainting prior model and upgrade from SD1.5 to SDXL? There are many good SDXL weights on Civitai and other platforms that can generate high-quality stylized images, and I hope they can be used in this project.

Besides, I have some more questions after reading your paper:

  1. I wonder why you use DPT to generate the depth map, since rendering the depth map directly from the mesh is accurate and fast (see the first sketch after this list for what I mean). I saw this is because of "resource constraints", but I don't really get it.

  2. This is a small suggestion. In the paper you mentioned:

Previous methods [2,3,36] have attempted to tackle this issue by combining depth-to-image and image inpainting models [37]. However, these models are usually separately trained under distinct conditions, which constrains the quality of synthesized textures and leads to 3D inconsistency.

After reading this, I tried to use SD img2img inpainting + ControlNet depth conditioning to generate some results myself (roughly the second sketch below), and as you said, the results are not that good. So maybe you could add some of these results to Fig. 3 to show that the separate-model results are not as good as your model's; then the statement would explain itself through the images.
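For question 1, this is roughly what I mean by rendering the depth map directly from the mesh. Just a minimal sketch using pyrender; the file name, camera pose, and resolution are placeholders, not anything from your code:

```python
import numpy as np
import trimesh
import pyrender

# load the mesh and wrap it for rendering
tm = trimesh.load("mesh.obj", force="mesh")
mesh = pyrender.Mesh.from_trimesh(tm)

scene = pyrender.Scene()
scene.add(mesh)

# a simple perspective camera pulled back along +z (placeholder pose)
camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0)
cam_pose = np.eye(4)
cam_pose[2, 3] = 2.0
scene.add(camera, pose=cam_pose)

renderer = pyrender.OffscreenRenderer(viewport_width=512, viewport_height=512)
# depth is a float32 HxW array in camera space (0 where there is no geometry);
# it comes straight from the rasterizer, no monocular estimation involved
color, depth = renderer.render(scene)
renderer.delete()
```

So the depth here is exact geometry, which is why I'm confused about preferring a DPT prediction.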
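And for reference, this is the kind of combined separate-models pipeline I tried, sketched with diffusers. The checkpoints are the public SD1.5 depth ControlNet and inpainting weights; the image files and prompt are just placeholders for my own inputs:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from PIL import Image

# depth ControlNet for SD1.5, combined with the separately trained inpainting model
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("render.png")   # current view of the textured mesh
mask_image = Image.open("mask.png")     # white = region to inpaint
depth_image = Image.open("depth.png")   # depth map used as the control image

result = pipe(
    prompt="a photo of a wooden chair",  # placeholder prompt
    image=init_image,
    mask_image=mask_image,
    control_image=depth_image,
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```

As the paper says, the depth and inpainting conditions here come from separately trained models, and in my runs the textures indeed end up lower quality and less 3D-consistent than yours.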

Anyway, thank you for kindly replying to my questions 🙂