Pinning memory issue
qqaatw opened this issue
Li-Huai (Allan) Lin commented
Hi,
I'm currently using ckip-transformers-ws as a preprocessing tool in my project, and I noticed that the DataLoader's pin_memory flag is hard-coded to True in util.py.
Because memory pinning is incompatible with multiprocessing (i.e., multiple workers) [1], a CUDA error like the one shown in [1] occurs when users call ckip-transformers inside the collate_fn of a DataLoader with multiple workers, even if inference runs only on the CPU.
Therefore, I think it would be better to:
- Pin memory only when the device is a GPU.
- Add an option to decide whether or not to enable memory pinning.
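A minimal sketch of what the first suggestion could look like (the function name and signature here are illustrative, not the actual util.py code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_dataloader(dataset, batch_size, device):
    # Pin host memory only when batches will be copied to a CUDA device.
    # On CPU-only inference, pinning adds no benefit and can trigger CUDA
    # errors when combined with multiple workers.
    pin = device.type == "cuda"
    return DataLoader(dataset, batch_size=batch_size, pin_memory=pin)

# CPU inference: pin_memory stays off.
loader = make_dataloader(TensorDataset(torch.arange(10)), 4, torch.device("cpu"))
print(loader.pin_memory)  # False
```

The second suggestion would simply expose `pin` as a keyword argument (e.g. `pin_memory=None`, defaulting to the device-based behavior above) so users can override it explicitly.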
Regards.
[1] https://discuss.pytorch.org/t/pin-memory-vs-sending-direct-to-gpu-from-dataset/33891/2