prs-eth / Marigold

[CVPR 2024 - Oral, Best Paper Award Candidate] Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation

Home Page: https://marigoldmonodepth.github.io

Fine-tuning on another domain: the validation results are a bit noisy

jingyangcarl opened this issue · comments

Awesome work!! I'm trying to use the same fine-tuning protocol on another domain.

However, the results I get are noisy. The training lasted 2 days on an A100.

Is there any chance I can get some insights to improve the results?

Best

Can you give more details?

Hi @markkua, thanks for the message. I adapted my training code from instruct_pix2pix to follow the Marigold fine-tuning protocol, and I changed the input and output conv layers of the UNet to match my task: I take RGB images as input and output albedo (also in RGB). I also modified the Marigold pipeline to adapt the inference for three-channel output.
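For reference, the input-layer change in Marigold's recipe expands the pretrained UNet's `conv_in` to accept the RGB latent concatenated with the noisy target latent, duplicating and halving the pretrained weights. A minimal sketch of that surgery, where `conv_in` is a stand-in with Stable Diffusion's shapes (4 latent channels, 320 filters) rather than a loaded diffusers UNet:

```python
import torch
import torch.nn as nn

# Stand-in for unet.conv_in: one 4-channel latent in, 320 feature maps out.
conv_in = nn.Conv2d(4, 320, kernel_size=3, padding=1)

# Replacement that takes 8 channels (RGB latent + noisy albedo latent).
# Duplicate the pretrained kernel across the new input channels and halve it,
# so activations keep roughly the original scale at the start of fine-tuning.
new_conv = nn.Conv2d(8, 320, kernel_size=3, padding=1)
with torch.no_grad():
    new_conv.weight[:, :4] = conv_in.weight / 2.0
    new_conv.weight[:, 4:] = conv_in.weight / 2.0
    new_conv.bias.copy_(conv_in.bias)

# Sanity check: feeding the same latent twice reproduces the pretrained output.
z = torch.randn(1, 4, 16, 16)
assert torch.allclose(new_conv(torch.cat([z, z], dim=1)), conv_in(z), atol=1e-5)
```

In a real pipeline this would replace `unet.conv_in` (with the config's `in_channels` updated to 8), while the output conv stays at 4 channels since the UNet still predicts a single latent.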

Also, I'm using DDPM during training with 1k diffusion steps and DDIM with 50 sampling steps at inference. I also set the ensemble size to 10 to match your implementation details. The training takes roughly 48 hrs on an A100, and I attached an example below.
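On the ensembling step: Marigold's released depth ensembling also aligns predictions before aggregating, but for albedo in a fixed [0, 1] range a plain per-pixel median over independently seeded samples is a plausible simplification. A sketch, where `run_inference` is a hypothetical stand-in for one 50-step DDIM pass of the fine-tuned pipeline:

```python
import torch

def run_inference(rgb: torch.Tensor, seed: int) -> torch.Tensor:
    # Placeholder for a stochastic DDIM sampling pass; only the seeding and
    # output range matter for this sketch.
    g = torch.Generator().manual_seed(seed)
    noise = 0.05 * torch.randn(rgb.shape, generator=g)
    return (rgb + noise).clamp(0.0, 1.0)

def ensemble_albedo(rgb: torch.Tensor, n: int = 10) -> torch.Tensor:
    preds = torch.stack([run_inference(rgb, seed) for seed in range(n)])
    return preds.median(dim=0).values  # per-pixel median suppresses sampling noise

albedo = ensemble_albedo(torch.rand(3, 64, 64))
print(albedo.shape)  # torch.Size([3, 64, 64])
```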

[Attached: input RGB image and output albedo image]

I think the training works, since it does generate albedo; however, there is noticeable noise on the centered object, and the generation is not clean enough. The results presented in the paper are stunning, so I think I may have missed something.

Looking forward to hearing your insights.

Best,
Jing

Hi,

From your description, I assume that you adjusted the UNet output layer to produce three latents. Since you are exporting three-channel albedo, have you tried directly using a single latent, which already gives you three channels after decoding?
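A toy illustration of the single-latent idea: the UNet keeps predicting one 4-channel latent, and the frozen VAE decoder already maps it to a 3-channel image, so no extra output latents are needed for RGB albedo. The `decoder` below is a tiny stand-in for `vae.decode`, not the real Stable Diffusion VAE (which upsamples 8x rather than 4x):

```python
import torch
import torch.nn as nn

# Tiny stand-in decoder: 4 latent channels in, 3 image channels out, 4x upsample.
decoder = nn.Sequential(
    nn.ConvTranspose2d(4, 16, kernel_size=4, stride=2, padding=1),
    nn.SiLU(),
    nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),  # -> 3 channels
)

latent = torch.randn(1, 4, 32, 32)  # the single latent the UNet predicts
albedo = decoder(latent)
print(albedo.shape)  # torch.Size([1, 3, 128, 128])
```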

On the other hand, adapting Marigold to a different modality might require different training hyperparameters.

Best.

Hi @markkua ,

Thanks for the message. I'm currently outputting only one latent code (4 channels) from the UNet, so I believe the network structure has not changed much. The colored output is also produced via a similar decoder as in marigold_pipeline, but with this line commented out.
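For clarity, the decode-path difference being discussed can be sketched as follows. I'm assuming the commented-out line is the one that collapses the decoder's three output channels into a scalar depth map; for albedo, all three channels are kept. `decoded` stands in for the VAE decoder output:

```python
import torch

decoded = torch.rand(1, 3, 64, 64)  # stand-in for the VAE decoder output

depth = decoded.mean(dim=1, keepdim=True)  # depth variant: collapse to (1, 1, H, W)
albedo = decoded                           # albedo variant: keep (1, 3, H, W)

print(depth.shape, albedo.shape)
```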

As for the training data, I prepared 10k image pairs of input RGB images and target albedo images.

Please let me know if I missed anything or misunderstood anything.

Best,
Jing

Solved, thx