huangyangyi / TeCH

[3DV 2024] Official repo of "TeCH: Text-guided Reconstruction of Lifelike Clothed Humans"

Home Page: https://huangyangyi.github.io/TeCH/

question about scripts/run.sh

cwwjyh opened this issue

Hello, this is great work!
I ran "sh scripts/run.sh input/g.png exp/examples/name" but encountered the issue below:
image

When running: python utils/ldm_utils/main.py -t --data_root exp/examples/name/png/ --logdir exp/examples/name/ldm/ --reg_data_root data/dreambooth_data/class_man_images/ --bg_root data/dreambooth_data/bg_images/ --class_word man --no-test --gpus 2,3

Can you give me some advice? Thank you very much!

Hi, I have solved this problem.

@cwwjyh How did you fix it?

The GPU setting was wrong; after modifying it, the script can run.
image
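
For anyone hitting the same error, here is a minimal sketch of the kind of modification meant here, assuming the problem is that the device indices passed to --gpus do not exist on your machine (e.g. "2,3" on a box that only has GPUs 0 and 1). Only the --gpus value differs from the command in the original post; adjust it to indices that are actually present on your server:

python utils/ldm_utils/main.py -t --data_root exp/examples/name/png/ --logdir exp/examples/name/ldm/ --reg_data_root data/dreambooth_data/class_man_images/ --bg_root data/dreambooth_data/bg_images/ --class_word man --no-test --gpus 0,1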

@cwwjyh
Thanks. As per the example, sh scripts/run.sh input/examples/name.img exp/examples/name ....

Where are the folder and files located?

Or do I need to provide them myself?

And what is in your g.png? A human pose?

Hello, may I ask how much memory each of your GPUs has? I encountered Out-Of-Memory (OOM) errors when running the code on a server with 4 GPUs, each having 24GB of VRAM. Thanks.

I use 32GB of VRAM and reset batch_size=6 in test.yaml. If you use 24GB, you can try setting batch_size=1.
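
If you are not sure where that batch size lives in your checkout, a hedged way to locate it (assuming the DreamBooth stage reads its settings from a YAML config somewhere under utils/ldm_utils/, which is where the training command above points):

grep -rn "batch_size" utils/ldm_utils/

Then lower the value on the matching line (for example to 1 on 24GB cards) and rerun the training command.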

Hello, I would like to double-check one thing. Since I couldn't find the test.yaml file, are you referring to the v1-finetune.yaml file?
image
image

image

Haha, perhaps you made a mistake. This work is from TeCH, not HumanGaussian.

Haha, sorry. I use 2 GPUs with 32GB of VRAM each for the TeCH project, and then it works.

By the way, may I ask whether the final output file 000_texture.obj contains color? My 000_texture.obj file is the same as 000_geometry.obj, both lacking color. When I run the optional command below, the program gets stuck.
python cores/main.py --config configs/tech_texture_export.yaml --exp_dir $EXP_DIR --sub_name $SUBJECT_NAME --test
And I'm not sure if running the final script generates an OBJ file with color.
image
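
For what it's worth, here is a generic way to check whether an exported OBJ actually carries color (a sketch based on the OBJ format in general, not specific to TeCH's exporter): a textured OBJ normally contains vt texture coordinates plus mtllib/usemtl lines referencing a .mtl file (with an image next to it), while per-vertex color appears as three extra RGB values after each "v x y z" line. Run these from the directory where your 000_texture.obj was written:

grep -m1 -E "^(mtllib|usemtl)" 000_texture.obj
grep -c "^vt " 000_texture.obj
head -n 5 000_texture.obj

If the first two print nothing (or 0) and the v lines show only three numbers, the file really has no color information, rather than it being a viewer issue.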

@glorioushonor
image
I get an error when I run this line, so I'm not sure what the end result will be.

What problem have you encountered? Maybe I can help you.

@glorioushonor
image
I encounter this problem. The command in the red box doesn't seem to run; the script just moves on to the next line, so no g_geometry.obj file is produced.

I suggest first commenting out the commands after this line of code in the script, so that you can see the error message and locate the issue.
image
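
A minimal sketch of that kind of debugging, assuming scripts/run.sh is a plain bash script (the exact line to stop at depends on which stage fails for you):

set -ex    # add near the top of scripts/run.sh: print each command and abort on the first failure
# then comment out every command after the suspect one, rerun, and the last lines of output show the real error

You can also copy the suspect python command out of the script and run it directly in a terminal, which usually surfaces the full traceback instead of letting the next stage hide it.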