frank-xwang / InstanceDiffusion

[CVPR 2024] Code release for "InstanceDiffusion: Instance-level Control for Image Generation"

Home Page: https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/


Some questions about decode_item.py

Hazarch opened this issue

That's great work! I'm currently trying to train a personalized model on my own dataset, but I'm running into two issues with decode_item.py.
First, is there a dimension handling error in the center_crop_arr function?
Second, the memory usage of decode_item.py keeps increasing during training. On this point, I'm a Python novice, so I can't rule out that the problem is specific to my setup.
Looking forward to your answer.

Hi, thank you for your interest in our research! We didn't encounter a dimension handling error during our training. If you could share a screenshot of the error message you're seeing, it would greatly help us understand and address the issue. Thank you!

Regarding your second question, I didn't observe increasing memory usage during training. Memory usage may rise over the first several iterations, but it typically stabilizes afterward; the main reason is that the number of instances varies from image to image.

Thank you for your response. Regarding the second point, I found that the growing memory usage was caused by the high resolution of the images in my dataset. As for the first point, I believe that in

`segs = [seg.resize(tuple(x // 2 for x in pil_image.size), resample=Image.Resampling.BOX) for seg in segs]`

and

`segs = [seg.resize(tuple(round(x * scale) for x in pil_image.size), resample=Image.Resampling.NEAREST) for seg in segs]`

the sizes should not be divided by 2 or multiplied by the scale, respectively: `pil_image` has already been resized by that point, so scaling `pil_image.size` a second time leaves the masks at a different size from the image, which causes a dimension error later when assigning `segs[i] = all_obj_segs[idx]`.
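For reference, here is a minimal sketch of the change I have in mind. It follows the usual guided-diffusion-style center_crop_arr structure; the function name, signature, BICUBIC resampling for the image, and surrounding loop are my assumptions rather than the exact code in decode_item.py, and only the resizing logic is shown:

```python
from PIL import Image

def center_crop_arr_resize_only(pil_image, segs, image_size):
    # Sketch of the resizing portion only; signature and structure are assumed,
    # not copied from decode_item.py.

    # Downsample by factors of 2 until the shorter side is below 2 * image_size.
    while min(*pil_image.size) >= 2 * image_size:
        pil_image = pil_image.resize(
            tuple(x // 2 for x in pil_image.size), resample=Image.Resampling.BOX
        )
        # pil_image.size was already halved on the line above, so the masks are
        # resized to pil_image.size directly instead of being halved a second time.
        segs = [
            seg.resize(pil_image.size, resample=Image.Resampling.BOX) for seg in segs
        ]

    # Scale so the shorter side equals image_size.
    scale = image_size / min(*pil_image.size)
    pil_image = pil_image.resize(
        tuple(round(x * scale) for x in pil_image.size),
        resample=Image.Resampling.BICUBIC,
    )
    # Same idea: pil_image.size already reflects the scaling, so the masks are
    # matched to it rather than multiplied by scale again.
    segs = [
        seg.resize(pil_image.size, resample=Image.Resampling.NEAREST) for seg in segs
    ]
    return pil_image, segs
```

Resizing the masks to `pil_image.size` keeps them aligned with the image no matter how many times the loop runs, so the later `segs[i] = all_obj_segs[idx]` assignment sees matching dimensions.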