sergivalverde / nicMSlesions

Easy multiple sclerosis white matter lesion segmentation using convolutional deep neural networks.

Running in Google Colaboratory

isabellameds opened this issue

Hi Sergi,
I would like to know if it's possible to run it using Google Colab.

Thank you for replying to me!
I tried to run it, but Colab doesn't recognize some of the commands in the code; maybe I need to change some configurations.
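
For context, the /content/gdrive/My Drive/... paths used below assume Google Drive has already been mounted in the Colab session. A minimal sketch using the standard google.colab helper (this step is not spelled out in the thread):

from google.colab import drive

# Mount Google Drive at /content/gdrive so files under "My Drive"
# are visible to the notebook and to nicMSlesions.
drive.mount('/content/gdrive')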

Hi Sergi, after some small modifications I was able to start training on Google Colab, but it stops running once memory usage reaches the 25 GB limit.
These are the messages that appear before it stops:

CNN: Starting training session
CNN: training net with 38 subjects
CNN: loading training data for first model
/content/gdrive/My Drive/nicMSlesions_SergiValverde/libs/CNN/base.py:503: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
patches = [new_image[idx] for idx in slices]
tcmalloc: large alloc 1448271872 bytes == 0x185254000 @ 0x7fc72f0a01e7 0x7fc72c50341e 0x7fc72c553bdb 0x7fc72c556f23 0x7fc72c5fb494 0x7fc72c5fb65d 0x566f73 0x59fd0e 0x7fc72c540f63 0x50a12f 0x50beb4 0x507be4 0x509900 0x50a2fd 0x50beb4 0x507be4 0x509900 0x50a2fd 0x50beb4 0x507be4 0x509900 0x50a2fd 0x50beb4 0x5095c8 0x50a2fd 0x50beb4 0x507be4 0x50ad03 0x634e72 0x634f27 0x6386df
tcmalloc: large alloc 1781858304 bytes == 0x1f1eb2000 @ 0x7fc72f0a01e7 0x7fc72c50341e 0x7fc72c553bdb 0x7fc72c553c78 0x7fc72c5fae10 0x7fc72c5fb53c 0x7fc72c5fb65d 0x566f73 0x59fd0e 0x7fc72c540f63 0x50a12f 0x50beb4 0x507be4 0x509900 0x50a2fd 0x50beb4 0x507be4 0x509900 0x50a2fd 0x50beb4 0x507be4 0x509900 0x50a2fd 0x50beb4 0x5095c8 0x50a2fd 0x50beb4 0x507be4 0x50ad03 0x634e72 0x634f27
^C
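
The FutureWarning above also names its own fix: index with a tuple instead of a list. A minimal sketch of that change (the array shape and slice list here are made-up stand-ins for what base.py builds when extracting patches):

import numpy as np

# Hypothetical stand-ins; base.py builds similar per-patch slice lists.
new_image = np.zeros((2, 16, 16, 16))  # e.g. (modalities, x, y, z)
slices = [[0, slice(2, 13), slice(2, 13), slice(2, 13)]]

# Deprecated on recent NumPy: new_image[idx] with idx as a list.
# Fixed: cast each index sequence to a tuple before indexing.
patches = [new_image[tuple(idx)] for idx in slices]
print(patches[0].shape)  # (11, 11, 11)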

I used the configuration below; do you recommend changing any of the parameters?

[database]
train_folder = /content/gdrive/My Drive/Programa_unet/MS_Data
inference_folder = /content/gdrive/My Drive/Programa_unet/MS_Data
flair_tags = FLAIR_preprocessed
t1_tags = T1_preprocessed
mod3_tags = None
mod4_tags = None
roi_tags = ManualSegmentation_1
register_modalities = False
denoise = True
denoise_iter = 3
skull_stripping = False
save_tmp = False
debug = True

[train]
full_train = True
pretrained_model = baseline_2ch
balanced_training = False
fraction_negatives = 2.0
load_weights = False

[model]
name = baseline_2ch
pretrained = None
train_split = 0.25
max_epochs = 60
patience = 50
batch_size = 5000
net_verbose = 1
gpu_number = 0

[postprocessing]
t_bin = 0.5
l_min = 10
min_error = 0.5

Many thanks in advance,
Isabella

Hi!

You should reduce the batch size to something more reasonable, like 128.

Hope it helps,
s
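
For reference, that suggestion is a one-line change in the [model] section of the configuration posted above (128 is the value suggested here; any similarly small batch should keep the session within Colab's memory limit):

[model]
; only batch_size changes; all other [model] settings stay as posted
batch_size = 128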

It's OK now, thank you so much for your attention.