HLR / DomiKnowS

CUDA out of memory in semantic loss training

DariusNafar opened this issue · comments

@hfaghihi15 check this log out:

a116599

The semantic loss causes a CUDA out-of-memory error even with very small batches, and even on Avicenna, after a small number of iterations (<10).
If you want to reproduce the same error, run Chen's code in his branch:

https://github.com/HLR/DomiKnowS/tree/chen_zheng_procedural_text

with this command:
python WIQA_aug.py --cuda 0 --epoch 10 --lr 2e-7 --samplenum 1000000000 --batch 2 --beta 1.0 --semantic_loss True
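
For what it's worth, an OOM that shows up after only a few iterations regardless of batch size often points to tensors being accumulated across iterations with their computation graphs still attached, rather than to the batch itself being too large. Below is a minimal, hypothetical PyTorch sketch (not DomiKnowS or WIQA code; the model and names are placeholders) showing how to watch per-iteration GPU memory and the accumulate-without-detach pattern that produces exactly this symptom:

```python
# Minimal sketch, not DomiKnowS code: a toy model standing in for the WIQA
# training loop, used only to illustrate a batch-size-independent cause of
# CUDA OOM and how to monitor memory growth per iteration.
import torch
import torch.nn as nn

model = nn.Linear(128, 2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-7)

running_loss = 0.0
for step in range(20):
    x = torch.randn(2, 128, device="cuda")        # batch size 2, as in the repro command
    y = torch.randint(0, 2, (2,), device="cuda")

    loss = nn.functional.cross_entropy(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Bug pattern: `running_loss += loss` keeps every iteration's graph
    # alive, so allocated memory grows each step until the GPU runs out.
    # Fix pattern: detach the scalar before accumulating it for logging.
    running_loss += loss.item()

    # Watching these numbers shows whether memory genuinely grows step over
    # step (graph retention) or is simply too high from the start.
    print(step,
          torch.cuda.memory_allocated() // 2**20, "MiB allocated,",
          torch.cuda.max_memory_allocated() // 2**20, "MiB peak")
```

If the allocated memory climbs every iteration even with batch 2, my guess (unconfirmed) is that the semantic-loss term is holding references to earlier graphs or building one large constraint graph across batches, which would explain hitting the OOM on Avicenna as well.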

Hi @AdmiralDarius, is this problem resolved? Could you elaborate on what the issue was and how it was handled?