Problem on NextQA Inference
Franklee95 opened this issue · comments
Franklee95 commented
Hi,
Hi,
When I use your SeViLA localizer (trained on QVHighlights) for inference on NeXT-QA data, all predictions are option 1, and the key frames output by the localizer are identical (e.g. [0,1,2,3]). What is the potential reason for this problem?
Franklee95 commented
Thank god! I finally solved the problem. The real reason is that my GPU (V100) does not support torch.bfloat16, so I changed bfloat16 to float16, which leads to numerical overflow in the mixed-precision ops. Changing from bfloat16 to float32 instead gives the correct answers.
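The fix above can be sketched as a dtype-selection helper. This is a minimal illustration, not SeViLA's actual config code: it assumes only the well-known fact that bfloat16 on CUDA requires compute capability 8.0+ (Ampere, e.g. A100), while the V100 is 7.0. In PyTorch you would query this via `torch.cuda.get_device_capability()` or `torch.cuda.is_bf16_supported()`.

```python
def pick_inference_dtype(cc_major: int, cc_minor: int) -> str:
    """Pick a safe inference dtype from the GPU's CUDA compute capability.

    bfloat16 needs compute capability >= 8.0 (Ampere or newer).
    On older cards such as the V100 (7.0), fall back to float32
    rather than float16: float16's narrow exponent range can
    overflow and produce degenerate outputs (e.g. the localizer
    returning identical frames / always option 1).
    """
    if (cc_major, cc_minor) >= (8, 0):
        return "bfloat16"
    return "float32"


# V100 (7.0) -> float32; A100 (8.0) -> bfloat16
print(pick_inference_dtype(7, 0))  # float32
print(pick_inference_dtype(8, 0))  # bfloat16
```

With PyTorch installed, the same check is `"bfloat16" if torch.cuda.is_bf16_supported() else "float32"`.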