numba / numba

NumPy aware dynamic Python compiler using LLVM

Home Page: https://numba.pydata.org/

memory leak caused by PinnedArray

doronf-cortica opened this issue · comments

Reporting a bug

The PinnedArray class has no __del__ method, so when memhostalloc allocates memory and passes the pointer to it, that pointer is never freed when the PinnedArray object is deleted.

see: https://github.com/numba/numba/blob/main/numba/cuda/cudadrv/driver.py#L2077

example:

import numba.cuda
import psutil

# Allocate and free a large pinned host array repeatedly, printing the
# process RSS to show that the pinned memory is never released.
proc = psutil.Process()
for i in range(10):
    print('memory before alloc: ', proc.memory_info().rss)
    f = numba.cuda.pinned_array(200000000)
    print('memory before del: ', proc.memory_info().rss)
    del f
    print('memory after del: ', proc.memory_info().rss)

I see now that numba.cuda.current_context().reset() fixes it. This isn't documented, though.
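For reference, a minimal sketch of that workaround applied to the reproduction above (assuming an active CUDA context):

import numba.cuda

f = numba.cuda.pinned_array(200000000)
del f
# Resetting the context releases the pinned allocation as a side effect,
# but it also tears down every other resource owned by the context.
numba.cuda.current_context().reset()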

You shouldn't need to reset the context just to get your memory back though! :-)

The implementation of pinned_array() indeed looks wrong - I don't think it's even using PinnedArray, but if it did, the PinnedArray instance would (or should) have been constructed with a finalizer to free the memory when it is garbage collected.
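To illustrate the pattern being described, here is a minimal sketch of attaching a finalizer to an object that owns a raw host allocation. It uses plain malloc/free via ctypes as a stand-in for the CUDA pinned-memory calls and is not Numba's actual implementation; HostAllocation is a hypothetical name for illustration only:

import ctypes
import weakref

# Assumption: a POSIX libc is available; malloc/free stand in for the
# pinned-memory allocate/free calls here.
libc = ctypes.CDLL(None)
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

class HostAllocation:
    """Owns a raw host pointer and frees it when garbage collected."""
    def __init__(self, nbytes):
        self.ptr = libc.malloc(nbytes)
        # Registering a finalizer is the step pinned_array() is missing:
        # it runs when this object is collected and releases the pointer.
        self._finalizer = weakref.finalize(self, libc.free, self.ptr)

buf = HostAllocation(200_000_000)
del buf  # the finalizer frees the allocation once the object is collected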

Yeah, I noticed the finalizer after I had posted this issue and didn't look further into it...

Isn't this behavior expected because of how the CUDA Context defers deallocations?

Isn't this behavior expected because of how the CUDA Context defers deallocations?

The problem is that we're not even registering a finalizer when calling pinned_array(), so while a deallocation is expected to occur eventually, it just never occurs in this case.
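For contrast, device arrays do get finalizers, so their deallocation is deferred and batched rather than skipped. A short sketch of that behavior using the public defer_cleanup() API (assuming a CUDA-capable GPU is present):

import numpy as np
from numba import cuda

# Device arrays are created with finalizers: their memory is queued for
# deallocation when the Python objects are collected, and released later.
with cuda.defer_cleanup():
    for _ in range(5):
        d = cuda.to_device(np.zeros(1_000_000, dtype=np.float64))
        del d  # queued, not freed yet: cleanup is deferred inside this block
# Leaving the block allows the pending deallocations to be processed.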