cdiazbas / enhance


Error on inference

AKMourato opened this issue · comments

commented

While running the pre-trained model, I got the following output:

Using TensorFlow backend.
WARNING:tensorflow:From /home/amourato/enhance/models.py:19: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
WARNING:tensorflow:From /home/amourato/enhance/models.py:19: The name tf.logging.ERROR is deprecated. Please use tf.compat.v1.logging.ERROR instead.
('tensorflow version:', '1.14.0')
('keras version:', '2.3.1')
Model : intensity
('Size image: ', (4096, 4096))
Setting up network...
Loading weights...
Predicting data...
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)

This issue happens with tensorflow==1.15 and tensorflow==2.5 in compatibility mode.
Any solution?

Hi, it doesn't seem to be due to the TensorFlow version (I've checked that 1.13, 1.14, and 1.15 all work with various Keras versions). The 'std::bad_alloc' error means you are running out of memory: processing a 4096x4096 input image requires far more memory than the 200x200 test image in the example. I would therefore recommend using a machine with more memory, or cropping your FOV until your machine can handle the transformation.
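Cropping the field of view before inference can be sketched like this (a minimal NumPy sketch; `crop_center` is a hypothetical helper, not part of enhance's API):

```python
import numpy as np

def crop_center(img, size):
    """Return a centered square crop of side `size` from a 2D image array."""
    h, w = img.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

# Example: reduce a 4096x4096 full-disk image to a 1024x1024 patch,
# which needs 16x less memory per intermediate activation.
full = np.zeros((4096, 4096), dtype=np.float32)
patch = crop_center(full, 1024)
print(patch.shape)  # (1024, 1024)
```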

commented

Thanks. Yes, the code runs on the CPU; unfortunately, I'm working with full-disk images.
Is there a straightforward tweak to send the whole model to the GPU? I work with PyTorch, so I'm not familiar with TF practices.

If the problem is memory, I'm not sure that sending it to the GPU is the best option, since GPU RAM is usually smaller than the RAM available to the CPU. I would recommend checking whether other processes on your machine are consuming memory and stopping them. If you look at the code (https://github.com/cdiazbas/enhance/blob/master/enhance.py#L77), there is already a loop that crops very large images into chunks and recomposes them. Feel free to modify that loop to make the chunks smaller so they fit in your memory.
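The chunk-and-recompose idea can be sketched as follows. This is a simplified illustration, not the actual loop in enhance.py, which also has to handle borders and the network's upsampling factor; `process_in_chunks` and `fn` are hypothetical names:

```python
import numpy as np

def process_in_chunks(img, chunk, fn):
    """Apply `fn` to non-overlapping tiles of `img` and stitch the results.

    Peak memory is driven by the tile size, not the full image size,
    so shrinking `chunk` trades speed for a smaller footprint.
    """
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(0, h, chunk):
        for x in range(0, w, chunk):
            tile = img[y:y + chunk, x:x + chunk]
            # Write the processed tile back into its original position
            out[y:y + tile.shape[0], x:x + tile.shape[1]] = fn(tile)
    return out

# Toy check with a trivial "network" that doubles pixel values
img = np.arange(16, dtype=np.float32).reshape(4, 4)
res = process_in_chunks(img, chunk=2, fn=lambda t: t * 2)
print(np.allclose(res, img * 2))  # True
```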

commented

The idea would be to send it to a remote GPU server. OK, I see; I'll check it then, thanks.