cswry / SeeSR

[CVPR2024] SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution

CUDA out of memory (training process)

Shengqi77 opened this issue · comments

Hello!
I followed the settings below and used an NVIDIA GeForce RTX 3090 (24 GB) to run the training code. However, I ran into a CUDA out-of-memory error. Is it because 24 GB of VRAM on the 3090 is insufficient for training?

Single GPU:

CUDA_VISIBLE_DEVICES="0," accelerate launch train_seesr.py \
  --pretrained_model_name_or_path="preset/models/stable-diffusion-2-base" \
  --output_dir="./experience/seesr" \
  --root_folders 'preset/datasets/train_datasets/training_for_seesr' \
  --ram_ft_path 'preset/models/DAPE.pth' \
  --enable_xformers_memory_efficient_attention \
  --mixed_precision="fp16" \
  --resolution=512 \
  --learning_rate=5e-5 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 \
  --null_text_ratio=0.5 \
  --dataloader_num_workers=0 \
  --checkpointing_steps=10000
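
For anyone else hitting this, below is a minimal sketch of how one might log GPU memory inside the training loop to check whether the 24 GB is actually exhausted or just fragmenting. The helper and its call sites are illustrative assumptions, not part of train_seesr.py; only the torch.cuda query functions are standard PyTorch.

import torch

def log_cuda_memory(tag: str, device: int = 0) -> None:
    # Illustrative helper (not part of train_seesr.py): print current, reserved,
    # and peak CUDA memory so an OOM can be traced to a specific training step.
    gib = 1024 ** 3
    allocated = torch.cuda.memory_allocated(device) / gib
    reserved = torch.cuda.memory_reserved(device) / gib
    peak = torch.cuda.max_memory_allocated(device) / gib
    total = torch.cuda.get_device_properties(device).total_memory / gib
    print(f"[{tag}] allocated {allocated:.2f} GiB | reserved {reserved:.2f} GiB | "
          f"peak {peak:.2f} GiB | total {total:.2f} GiB")

# Assumed call sites inside the training loop (names are placeholders):
# log_cuda_memory("before forward")
# loss = model(batch)         # forward pass
# log_cuda_memory("after forward")
# accelerator.backward(loss)  # or loss.backward()
# log_cuda_memory("after backward")

If reserved memory is much larger than allocated memory when the OOM happens, fragmentation may be the problem and setting PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 in the environment can help; if allocated memory itself approaches 24 GB, training at 512 resolution with the SD2-base pipeline likely needs additional memory-saving measures or a larger GPU.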

I've encountered the same issue. Has the problem been resolved?