EndyWon / AesUST

Official Pytorch code for "AesUST: Towards Aesthetic-Enhanced Universal Style Transfer" (ACM MM 2022)

Insufficient GPU memory during training

yixiu351 opened this issue · comments

Hello, may I ask why I run out of GPU memory when training on an RTX 2080 Ti (11 GB)? I can train normally with batch_size=2, yet the paper states that batch_size=4 was trained on an RTX 2080 (8 GB). I hope to get your answer, thank you.
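One common workaround for this situation is gradient accumulation: run two micro-batches of 2 and step the optimizer once, which approximates the paper's batch_size=4 within an 11 GB budget. Below is a minimal sketch with a stand-in `nn.Linear` model and random tensors; AesUST's actual networks and losses would replace these hypothetical pieces.

```python
import torch
import torch.nn as nn

# Stand-in model, optimizer, and loss; placeholders for the real training setup.
model = nn.Linear(16, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

accum_steps = 2  # two micro-batches of 2 approximate batch_size=4
micro_batches = [(torch.randn(2, 16), torch.randn(2, 4))
                 for _ in range(accum_steps)]

for i, (x, y) in enumerate(micro_batches):
    if i % accum_steps == 0:
        optimizer.zero_grad()
    # Scale each loss so the summed gradients match the full-batch average.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate into p.grad across micro-batches
    if (i + 1) % accum_steps == 0:
        optimizer.step()
```

Because `backward()` sums gradients into `.grad` until they are zeroed, dividing each micro-batch loss by `accum_steps` makes the accumulated gradient equal to the average over the effective batch, at the cost of extra forward/backward passes rather than extra memory.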