johnh2o2 / cuvarbase

Python library for fast time-series analysis on CUDA GPUs

Home Page:https://johnh2o2.github.io/cuvarbase/index.html


Memory leak in BLS

johnh2o2 opened this issue · comments

commented

Kevin Burdge (MIT) has discovered a memory leak when using cuvarbase.bls.eebls_gpu_fast. To reproduce the problem, create a file called memory_leak_test.py containing the following code (the only dependencies are numpy and cuvarbase):

import numpy as np
import cuvarbase.bls as bls

def run_BLS(t, y, dy):
    # set up search parameters
    search_params = dict(qmin=1e-2,
                         qmax=0.12,

                         # The logarithmic spacing of q
                         dlogq=0.1,

                         # Number of overlapping phase bins
                         # to use for finding the best phi0
                         noverlap=1)

    # derive baseline from the data for consistency
    baseline = max(t) - min(t)

    # df ~ qmin / baseline
    df = search_params['qmin'] / baseline
    fmin = 4/baseline
    fmax = fmin + 1000000 * df

    nf = int(np.ceil((fmax - fmin) / df))
    freqs = fmin + df * np.arange(nf)

    bls_power = bls.eebls_gpu_fast(t, y, dy, freqs,
                                    **search_params)

baseline = 1000.
n_obs = 100
rand = np.random.RandomState(1)
t = baseline * np.sort(rand.rand(n_obs))
dy = 1 + 0.05 * rand.randn(n_obs)
y = rand.randn(n_obs)

for i in range(100):
    print(i)
    run_BLS(t, y, dy)
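For scale, the search grid built above contains roughly one million trial frequencies, so each call allocates sizable buffers and the leak becomes visible quickly. A quick stand-alone check of the grid size, mirroring the same formulas used in run_BLS (the parameter values simply repeat those in the script above):

```python
import numpy as np

# same parameters as in the reproduction script
baseline = 1000.0
qmin = 1e-2

df = qmin / baseline        # frequency resolution ~ qmin / baseline
fmin = 4 / baseline
fmax = fmin + 1000000 * df

nf = int(np.ceil((fmax - fmin) / df))
print(nf)
```

With these values nf comes out to about one million, so the leaked allocation per call is far from negligible.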

Running

 mprof run --interval 0.01 --include-children python memory_leak_test.py

and subsequently running

mprof plot --output memory-profile.png

will produce the following plot

[plot: memory-profile.png]

which clearly demonstrates a memory leak: each iteration of the for loop should begin with approximately the same amount of memory, but usage instead grows steadily.
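mprof tracks the whole process's resident memory, which is what catches leaks in C/CUDA-side allocations like this one. For leaks held by pure-Python objects, the stdlib tracemalloc module can localize growth between iterations of a loop. A minimal sketch of that diagnostic pattern (the leaky_call function here is a toy stand-in, not cuvarbase):

```python
import tracemalloc

_cache = []  # simulates a leak: references accumulate across calls

def leaky_call(n):
    _cache.append(list(range(n)))

tracemalloc.start()
leaky_call(10000)                      # warm up allocations
before = tracemalloc.take_snapshot()

for _ in range(10):                    # repeated calls, like the BLS loop above
    leaky_call(10000)

after = tracemalloc.take_snapshot()
growth = sum(s.size_diff for s in after.compare_to(before, 'lineno'))
tracemalloc.stop()

print(growth > 0)  # a leak shows up as persistent positive growth
```

Note that tracemalloc only sees host-side Python allocations, so it would not have caught this GPU-side leak; the process-level profile from mprof is the right tool for that.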

commented

Now fixed in master