OATML / EVE

Official repository for the paper "Large-scale clinical interpretation of genetic variants using evolutionary data and deep learning". Joint collaboration between the Marks lab and the OATML group.

Home Page: http://evemodel.org/

Lots of memory usage when running evol_indices with many sequences

brycejoh16 opened this issue · comments

Hi EVE team,

I'm running compute_evol_indices.py on a dataset with many variants in a single CSV file (>400k variants; specifically UniProt ID SPG1_STRSG_Olson_2014).

When I try to compute evolutionary indices for these variants, the job requires over 100 GB of memory and stalls out. I think PyTorch may be keeping previously computed batches in memory, because a single batch only requires roughly 1 GB.

It's easy to work around this by breaking up the dataset, but that's rather inconvenient, so it would be great if the issue could be fixed.

Let me know if this issue makes sense, and if it is reproducible.

Take care,
Bryce

Dear Bryce,

Computing the evolutionary indices requires creating a prediction_matrix whose size depends directly on (a) the number of mutants you want to compute scores for and (b) the number of samples drawn from the approximate posterior of the VAE. Additionally, since we populate this matrix batch by batch, the batch_size parameter also plays a role in the total memory used during scoring.
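To see why the matrix dominates memory, a back-of-envelope estimate helps. The mutant and sample counts below are illustrative assumptions (roughly the SPG1 assay size and a plausible posterior sample count), not EVE's actual defaults:

```python
# Rough memory estimate for a dense (num_mutants x num_samples) matrix.
# Both counts are illustrative, not EVE defaults.
num_mutants = 400_000        # e.g. the SPG1_STRSG_Olson_2014 assay
num_samples = 20_000         # samples from the approximate posterior
bytes_per_float = 8          # float64 entries

matrix_bytes = num_mutants * num_samples * bytes_per_float
print(f"prediction_matrix: {matrix_bytes / 1e9:.0f} GB")  # → 64 GB
```

At this scale the matrix alone is tens of gigabytes, regardless of batch_size, which is consistent with the >100 GB usage reported above.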

Based on your note, it seems that batch_size is less of an issue; rather, it is the sheer size of the prediction_matrix that drives the memory usage, in particular due to the very large number of mutants in the SPG1_STRSG_Olson_2014 assay.
If you do not care about the standard deviation of scores across samples (for which having access to the full matrix is handy), then there is a very simple fix: use a vector of size num_mutants (instead of a prediction_matrix of size num_mutants * num_samples) and sum the scores across samples, rather than persisting every score value in the matrix. This would significantly reduce the memory footprint, have no impact on the average scores per mutant, and not require you to break up the dataset. You would, however, lose the ability to easily compute the standard deviation across samples, which is why we coded things the way we did.
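The running-sum idea above can be sketched roughly as follows. This is not EVE's actual code: `score_batch_fn` is a hypothetical stand-in for whatever produces a (batch_size, num_samples) block of scores for one batch of mutants:

```python
import numpy as np

def mean_evol_indices(score_batch_fn, mutant_batches, num_samples):
    """Sketch of the memory-light variant: keep only a running sum per
    mutant instead of the full (num_mutants, num_samples) matrix.

    `score_batch_fn` (hypothetical) returns a (batch_size, num_samples)
    array of per-sample scores for one batch of mutants."""
    per_mutant_sums = []
    for batch in mutant_batches:
        scores = score_batch_fn(batch)               # (batch_size, num_samples)
        per_mutant_sums.append(scores.sum(axis=1))   # reduce over samples now
    # Only O(num_mutants) values are ever retained.
    return np.concatenate(per_mutant_sums) / num_samples
```

The mean per mutant is identical to averaging the full matrix row by row; only the per-sample spread (and hence the standard deviation) is lost.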

Kind regards,
Pascal