How to predict with GPU
yevgeniyclaudio opened this issue · comments
Hello! I'm trying to run prediction on my own image using the 2D_versatile_fluo model, and it works very well! But my CPU usage goes up to 80%, and when I set `use_gpu` in the config, nothing changes; CPU usage stays at 80%. :(
My script:
```python
from stardist.models import StarDist2D, Config2D
from stardist import gputools_available
from skimage import data, util, measure
import cv2
from csbdeep.utils import normalize

conf = Config2D(
    n_rays = 1024,
    use_gpu = True and gputools_available(),
    grid = (2,2))
    #n_channel_in = n_channel)
print(conf)

model = StarDist2D(conf).from_pretrained('2D_versatile_fluo')
#model = StarDist2D.from_pretrained('2D_versatile_fluo')

while True:
    img = cv2.imread("e:\\instanse_segmentation\\8.png")
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)#_BINARY)[1]
    labels, details = model.predict_instances(normalize(img), prob_thresh=0.01)
```
The GPU is not activated. :(
When I run this:
```python
import importlib, platform

print(f'os: {platform.platform()}')
for m in ('stardist','csbdeep','tensorflow'):
    try:
        print(f'{m}: {importlib.import_module(m).__version__}')
    except ModuleNotFoundError:
        print(f'{m}: not installed')

import tensorflow as tf
try:
    print(f'tensorflow GPU: {tf.test.is_gpu_available()}')
except:
    print(f'tensorflow GPU: {tf.config.list_physical_devices("GPU")}')
```
the output is:
```
os: Windows-10-10.0.19041-SP0
stardist: 0.8.3
csbdeep: 0.7.2
tensorflow: 2.6.0
WARNING:tensorflow:From segmentationStar3.py:12: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2022-11-03 09:21:46.580886: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-03 09:21:48.640517: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /device:GPU:0 with 1344 MB memory: -> device: 0, name: NVIDIA GeForce GTX 950, pci bus id: 0000:01:00.0, compute capability: 5.2
tensorflow GPU: True
```
My GPU has only 2 GB of memory, but I'm only trying to predict, not train.
Can anyone tell me what to do? Thanks!
Hi @yevgeniyclaudio, the `use_gpu` configuration flag only affects model training; concretely, it enables GPU-based acceleration of the data generator. It does not affect at all whether TensorFlow uses the GPU.
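In other words, whether prediction runs on the GPU is decided entirely by TensorFlow's own device setup, which you can inspect directly. A minimal sketch (the helper name `tensorflow_gpus` is mine, not part of any library):

```python
# Check which GPUs TensorFlow itself can see. If this list is empty,
# prediction will run on the CPU regardless of any StarDist config flag.
def tensorflow_gpus():
    """Return the list of GPUs TensorFlow can use, or None if TF is not installed."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    return tf.config.list_physical_devices('GPU')

print(tensorflow_gpus())
```

`tf.config.list_physical_devices('GPU')` is the non-deprecated replacement for `tf.test.is_gpu_available()`, as the warning in your output already suggests.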
```python
model = StarDist2D(conf).from_pretrained('2D_versatile_fluo')
```
This does not work the way you think it does. The call to `from_pretrained` (which is a class method and can be called without instantiating a model first) will always create a new model with the config of the pre-trained model that you select. The line should be `model = StarDist2D.from_pretrained('2D_versatile_fluo')`.
> But my CPU usage is going up to 80%
The StarDist prediction step includes a CPU-only post-processing step after the (potentially GPU-accelerated) neural network prediction. Hence, it's normal that you see high CPU load.
Sorry for the late reply,
Uwe
By the way, but this is more a general question and not a bug report. Please ask questions like this at the forum in the future.
Thanks for the answer!!! But the GPU only runs after I installed pytorch == 1.7.1+cu110.
But after that there's another problem: RAM usage keeps growing! :(
> But the GPU only runs after I installed pytorch == 1.7.1+cu110
StarDist uses TensorFlow, not PyTorch. You were probably missing CUDA then.
> But after that there's another problem: RAM usage keeps growing!
The post-processing may require a lot of RAM if you have huge images with lots of objects.
See this notebook if you are working with big data.
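The core idea of that big-data workflow is to split the image into overlapping blocks, predict on each block separately, and stitch the results, which keeps peak memory bounded. StarDist implements this properly (including object stitching) in its big-data prediction routine; as a toy illustration of just the tiling step, here is a sketch (the helper `tile_starts` is mine, not part of StarDist):

```python
# Toy sketch of block-wise tiling: compute the start offsets of
# overlapping blocks along one axis, so every pixel is covered and
# the last block is clamped to the end of the axis.
def tile_starts(length, block, overlap):
    """1-D start offsets of overlapping blocks covering [0, length)."""
    step = block - overlap
    starts = list(range(0, max(length - block, 0) + 1, step))
    # make sure the final block reaches the end of the axis
    if starts[-1] + block < length:
        starts.append(length - block)
    return starts

# e.g. a 10-pixel axis, blocks of 4 with 1 pixel of overlap
print(tile_starts(10, 4, 1))  # -> [0, 3, 6]
```

For a 2-D image you would take the cartesian product of the per-axis offsets; the overlap must be large enough that no single object is cut by every block boundary, which is exactly what the notebook's `block_size`/`min_overlap` parameters control.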