batmanlab / Mammo-CLIP

Official PyTorch implementation of the MICCAI 2024 paper (early accept, top 11%): *Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography*

GPU memory

emrekeles-arch opened this issue · comments

How much GPU memory is used during training and how long does training take?

We trained on a single NVIDIA RTX 6000 GPU for 10 epochs. Pre-training took 3 days on the UPMC (image-text) data, and around 4.5 days on the combined UPMC (image-text) + VinDr (image-labels) data. We are now migrating to distributed data parallel training so we can pre-train on larger datasets.