Fanghua-Yu / OSRT

Official code of OSRT: Omnidirectional Image Super-Resolution with Distortion-aware Transformer

CUDA Out of Memory when doing Inference

marcobarbierato opened this issue

I wanted to use this model to run inference on single images, without validation.

So I tried modifying the option.yml files accordingly, following the implementation in HAT (link). For reference, below is roughly what my inference-only option file looks like.
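The dataset type and most field names here are my guesses, patterned on BasicSR/HAT test configs, since I'm not sure which keys OSRT expects:

```yaml
# inference-only option file (my attempt; names patterned on BasicSR/HAT)
name: OSRT_x4_inference
model_type: OSRTModel         # guess -- whatever model class OSRT registers
scale: 4
num_gpu: 1

datasets:
  test_1:
    name: my_images
    type: SingleImageDataset  # BasicSR's LQ-only dataset, so no GT is required
    dataroot_lq: ./inputs
    io_backend:
      type: disk

# network_g / pretrained path omitted here -- same as the released test configs

val:
  save_img: true
  suffix: ~                   # keep original filenames
  # no `metrics` section, since there is no ground truth
```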

However, even with small 300x300 images, 40 GB of GPU memory is not enough for the model.

So I tried implementing a tile-based mode following the implementation in HAT (link), but even with smaller tile sizes I still run out of memory. My tiling code is sketched below.
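This is a minimal sketch of what I'm doing, assuming a plain PyTorch SR network; `model`, `scale`, and the padding scheme are my own choices, not OSRT's API:

```python
import torch

@torch.no_grad()
def tile_inference(model, lq, scale=4, tile_size=64, tile_pad=8):
    """Upscale lq (N, C, H, W) tile by tile and stitch the results."""
    n, c, h, w = lq.shape
    output = lq.new_zeros(n, c, h * scale, w * scale)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            # input tile bounds, extended by tile_pad and clamped to the image
            y0, y1 = max(y - tile_pad, 0), min(y + tile_size + tile_pad, h)
            x0, x1 = max(x - tile_pad, 0), min(x + tile_size + tile_pad, w)
            out_tile = model(lq[:, :, y0:y1, x0:x1])
            # paste only the unpadded core of the upscaled tile
            ys, ye = y * scale, min(y + tile_size, h) * scale
            xs, xe = x * scale, min(x + tile_size, w) * scale
            oy, ox = (y - y0) * scale, (x - x0) * scale
            output[:, :, ys:ye, xs:xe] = out_tile[:, :, oy:oy + ye - ys, ox:ox + xe - xs]
            torch.cuda.empty_cache()  # release per-tile activations
    return output
```

One thing I'm unsure about: since OSRT's blocks are distortion-aware and, as I understand it, conditioned on a position-dependent distortion map, that condition input would presumably need to be cropped per tile as well. Could that be related?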

How much memory does the model need? Am I doing something wrong? Thanks.