ethz-asl / voxblox-plusplus

A volumetric object-level semantic mapping framework.

Home Page: https://arxiv.org/pdf/1903.00268.pdf

How to use a RealSense D435i camera instead of the dataset bags?

mhaboali opened this issue · comments

Hello, thanks for sharing this great project!

I would like to know, step by step, how to use my own D435i camera instead of the dataset samples you provide. I tried to run yumi.launch from the gsm_node package and remapped all the ROS topics accordingly, but it didn't work the way it did with the dataset samples.

Here's the rqt_graph output of the running system:
[rqt_graph screenshot]

Everything looks good, but all the output topics are empty.

Also, I have another question about tf: do any of the nodes in yumi.launch publish transforms? I didn't see a /map frame being published, which looked strange to me.

Thanks,
Your help will be appreciated!

Hi @margaritaG

Could you help me with this? I've tried my best to get it working, but there was no output like there was with the rosbag.

@mhaboali

You do not need to use the yumi.launch file; you can rely on the more generic vpp_pipeline.launch and pass in the right arguments (the most important ones being sensor_name and scene_name).

The RealSense topics and corresponding parameters are already included in the pipeline, specifically among the configs of the depth segmentation node, so you can directly pass sensor_name:=realsense to vpp_pipeline.launch.

What you should add is a config file for Voxblox++ to specify some required parameters custom to your setup. The most important bit is the world/global frame_id, so that the voxblox++ node can look up the transform of the input depth stream in world coordinates.

By the way, Voxblox++ does not perform camera tracking, so you will need to rely on an external tool for camera pose estimation that publishes the tf between the frame_id of the input depth images and some global frame. Which tool are you using?
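
If you want a quick sanity check that the pose estimation output is actually reaching tf, you can echo the transform on the command line. A minimal example, assuming "world" is your global frame and the depth images are stamped with the usual RealSense optical frame (substitute your own frame names):

rosrun tf tf_echo world camera_depth_optical_frame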

To specify the frame_id of the global frame and other Voxblox++ parameters, you can create a new config file in the cfg folder, e.g. voxblox-plusplus/global_segment_map_node/cfg/example.yaml. It should contain at least a line which specifies, through the world_frame_id parameter, which global frame is present in your tf tree, e.g.:

world_frame_id: "world" 
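
If you need to set more than the frame id, the same file can hold other Voxblox++ parameters. A rough sketch only, assuming the usual gsm_node parameter names (check the defaults shipped with the package; the values and exact nesting here are purely illustrative):

world_frame_id: "map"
voxblox:
  voxel_size: 0.02
  voxels_per_side: 8
  max_ray_length_m: 3.5
meshing:
  visualize: true
  mesh_filename: "vpp_mesh.ply"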

Then you can just run the vpp_pipeline launch file with the following arguments:

roslaunch gsm_node vpp_pipeline.launch sensor_name:=realsense scene_name:=example
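
Note that the camera driver itself also needs to be running and publishing the color/depth topics. With the official realsense2_camera package that is typically something along these lines, although the exact launch file and arguments depend on your driver version:

roslaunch realsense2_camera rs_camera.launch align_depth:=true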

Let me know if this works.

Thanks @margaritaG for your reply, I just got a chance to test it.
I used RTAB-Map to track the camera and publish the map frame, but it didn't work. Here's the log:

SUMMARY
========

PARAMETERS
 * /depth_segmentation_node/depth_camera_info_sub_topic: /camera/depth/cam...
 * /depth_segmentation_node/depth_discontinuity_display: False
 * /depth_segmentation_node/depth_discontinuity_kernel_size: 3
 * /depth_segmentation_node/depth_discontinuity_ratio: 0.05
 * /depth_segmentation_node/depth_discontinuity_use_depth_discontinuity: True
 * /depth_segmentation_node/depth_image_sub_topic: /camera/depth/ima...
 * /depth_segmentation_node/dilate_depth_image: False
 * /depth_segmentation_node/dilation_size: 1
 * /depth_segmentation_node/final_edge_display: False
 * /depth_segmentation_node/final_edge_morphological_closing_size: 1
 * /depth_segmentation_node/final_edge_morphological_opening_size: 1
 * /depth_segmentation_node/final_edge_use_morphological_closing: True
 * /depth_segmentation_node/final_edge_use_morphological_opening: True
 * /depth_segmentation_node/label_display: True
 * /depth_segmentation_node/label_inpaint_method: 0
 * /depth_segmentation_node/label_method: 1
 * /depth_segmentation_node/label_min_size: 500
 * /depth_segmentation_node/label_use_inpaint: False
 * /depth_segmentation_node/max_distance_display: False
 * /depth_segmentation_node/max_distance_exclude_nan_as_max_distance: False
 * /depth_segmentation_node/max_distance_ignore_nan_coordinates: False
 * /depth_segmentation_node/max_distance_noise_thresholding_factor: 10.0
 * /depth_segmentation_node/max_distance_sensor_min_distance: 0.02
 * /depth_segmentation_node/max_distance_sensor_noise_param_1st_order: 0.0012
 * /depth_segmentation_node/max_distance_sensor_noise_param_2nd_order: 0.0019
 * /depth_segmentation_node/max_distance_sensor_noise_param_3rd_order: 0.0001
 * /depth_segmentation_node/max_distance_use_max_distance: True
 * /depth_segmentation_node/max_distance_use_threshold: True
 * /depth_segmentation_node/max_distance_window_size: 1
 * /depth_segmentation_node/min_convexity_display: False
 * /depth_segmentation_node/min_convexity_mask_threshold: -0.0005
 * /depth_segmentation_node/min_convexity_morphological_opening_size: 1
 * /depth_segmentation_node/min_convexity_step_size: 1
 * /depth_segmentation_node/min_convexity_threshold: 0.94
 * /depth_segmentation_node/min_convexity_use_min_convexity: True
 * /depth_segmentation_node/min_convexity_use_morphological_opening: True
 * /depth_segmentation_node/min_convexity_use_threshold: True
 * /depth_segmentation_node/min_convexity_window_size: 5
 * /depth_segmentation_node/normals_display: False
 * /depth_segmentation_node/normals_distance_factor_threshold: 0.05
 * /depth_segmentation_node/normals_method: 3
 * /depth_segmentation_node/normals_window_size: 13
 * /depth_segmentation_node/rgb_camera_info_sub_topic: /camera/color/cam...
 * /depth_segmentation_node/rgb_image_sub_topic: /camera/color/ima...
 * /depth_segmentation_node/semantic_instance_segmentation/enable: True
 * /depth_segmentation_node/visualize_segmented_scene: False
 * /gsm_node/debug/multiple_visualizers: False
 * /gsm_node/debug/save_visualizer_frames: False
 * /gsm_node/debug/verbose_log: False
 * /gsm_node/gsm/label_propagation_td_factor: 1.0
 * /gsm_node/gsm/min_label_voxel_count: 20
 * /gsm_node/icp/enable_icp: False
 * /gsm_node/icp/keep_track_of_icp_correction: True
 * /gsm_node/meshing/mesh_filename: vpp_mesh.ply
 * /gsm_node/meshing/update_mesh_every_n_sec: 0.0
 * /gsm_node/meshing/visualize: True
 * /gsm_node/meshing/visualizer_parameters/camera_position: [-0.73071, 1.3589...
 * /gsm_node/meshing/visualizer_parameters/clip_distances: [5.87024, 8.29843]
 * /gsm_node/pairwise_confidence_merging/enable_pairwise_confidence_merging: True
 * /gsm_node/pairwise_confidence_merging/merging_min_frame_count: 2
 * /gsm_node/pairwise_confidence_merging/merging_min_overlap_ratio: 0.1
 * /gsm_node/publishers/publish_object_bbox: False
 * /gsm_node/publishers/publish_scene_map: True
 * /gsm_node/publishers/publish_scene_mesh: True
 * /gsm_node/segment_point_cloud_topic: /depth_segmentati...
 * /gsm_node/semantic_instance_segmentation/class_tsdk: coco80
 * /gsm_node/semantic_instance_segmentation/enable_semantic_instance_segmentation: False
 * /gsm_node/use_label_propagation: True
 * /gsm_node/voxblox/max_ray_length_m: 3.5
 * /gsm_node/voxblox/min_ray_length_m: 0.1
 * /gsm_node/voxblox/truncation_distance_factor: 3.0
 * /gsm_node/voxblox/voxel_carving_enabled: False
 * /gsm_node/voxblox/voxel_size: 0.02
 * /gsm_node/voxblox/voxels_per_side: 8
 * /gsm_node/world_frame_id: map
 * /mask_rcnn/depth_camera_info_sub_topic: /camera/depth/cam...
 * /mask_rcnn/depth_image_sub_topic: /camera/depth/ima...
 * /mask_rcnn/rgb_camera_info_sub_topic: /camera/color/cam...
 * /mask_rcnn/rgb_image_sub_topic: /camera/color/ima...
 * /mask_rcnn/visualization: True
 * /rosdistro: melodic
 * /rosversion: 1.14.5

NODES
  /
    depth_segmentation_node (depth_segmentation/depth_segmentation_node)
    gsm_node (gsm_node/gsm_node)
    mask_rcnn (mask_rcnn_ros/mask_rcnn_node.py)

auto-starting new master
process[master]: started with pid [26515]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to c1b4e2fa-9f8e-11ea-8399-04d9f52030c2
process[rosout-1]: started with pid [26528]
started core service [/rosout]
process[mask_rcnn-2]: started with pid [26536]
process[depth_segmentation_node-3]: started with pid [26537]
process[gsm_node-4]: started with pid [26538]

Voxblox++ Copyright (C) 2016-2020 ASL, ETH Zurich.

I0526 14:23:27.879663 26537 depth_segmentation_node.cpp:641] Starting depth segmentation ... 
I0526 14:23:28.084986 26537 depth_segmentation.cpp:289] Dynamic Reconfigure Request.
Using TensorFlow backend.
WARNING:tensorflow:From /home/wsamer/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py:20: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /home/wsamer/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:508: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /home/wsamer/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:68: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

WARNING:tensorflow:From /home/wsamer/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3837: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

WARNING:tensorflow:From /home/wsamer/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3661: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

WARNING:tensorflow:From /home/wsamer/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:1944: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

WARNING:tensorflow:From /home/wsamer/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py:320: The name tf.log is deprecated. Please use tf.math.log instead.

WARNING:tensorflow:From /home/wsamer/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py:374: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From /home/wsamer/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py:398: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version.
Instructions for updating:
box_ind is deprecated, use box_indices instead
WARNING:tensorflow:From /home/wsamer/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py:703: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
WARNING:tensorflow:From /home/wsamer/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py:712: The name tf.sets.set_intersection is deprecated. Please use tf.sets.intersection instead.

WARNING:tensorflow:From /home/wsamer/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py:729: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
2020-05-26 14:23:32.119899: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-26 14:23:32.123465: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2020-05-26 14:23:32.177320: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-26 14:23:32.177612: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5561e1862c10 executing computations on platform CUDA. Devices:
2020-05-26 14:23:32.177624: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5
2020-05-26 14:23:32.197407: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3600000000 Hz
2020-05-26 14:23:32.199306: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5561e195dfa0 executing computations on platform Host. Devices:
2020-05-26 14:23:32.199368: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2020-05-26 14:23:32.199850: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-26 14:23:32.201083: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: GeForce RTX 2080 SUPER major: 7 minor: 5 memoryClockRate(GHz): 1.83
pciBusID: 0000:01:00.0
2020-05-26 14:23:32.201862: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2020-05-26 14:23:32.205748: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2020-05-26 14:23:32.208953: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2020-05-26 14:23:32.209949: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2020-05-26 14:23:32.213307: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2020-05-26 14:23:32.214029: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2020-05-26 14:23:32.216661: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-05-26 14:23:32.216768: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-26 14:23:32.217102: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-26 14:23:32.217314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-05-26 14:23:32.217372: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2020-05-26 14:23:32.218486: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-26 14:23:32.218498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 
2020-05-26 14:23:32.218503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N 
2020-05-26 14:23:32.218580: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-26 14:23:32.218840: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-26 14:23:32.219104: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6660 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5)
I0526 14:23:54.427927 26537 depth_segmentation.cpp:170] DepthSegmenter initialized
I0526 14:23:54.428089 26537 depth_segmentation.cpp:32] CameraTracker initialized
2020-05-26 14:23:55.927007: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2020-05-26 14:23:56.000632: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-05-26 14:23:56.002090: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-05-26 14:23:56.008418: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-05-26 14:23:56.024908: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-05-26 14:23:56.025713: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Traceback (most recent call last):
  File "/home/wsamer/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py", line 211, in <module>
    main()
  File "/home/wsamer/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py", line 207, in main
    node.run()
  File "/home/wsamer/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py", line 106, in run
    results = self._model.detect([np_image], verbose=0)
  File "/home/wsamer/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py", line 2373, in detect
    self.keras_model.predict([molded_images, image_metas], verbose=0)
  File "/home/wsamer/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1835, in predict
    verbose=verbose, steps=steps)
  File "/home/wsamer/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1331, in _predict_loop
    batch_outs = f(ins_batch)
  File "/home/wsamer/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2482, in __call__
    **self.session_kwargs)
  File "/home/wsamer/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/wsamer/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/wsamer/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/home/wsamer/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[node conv1/convolution (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3341) ]]
	 [[mrcnn_detection/Reshape_1/_4279]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[node conv1/convolution (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3341) ]]
0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node conv1/convolution:
 zero_padding2d_1/Pad (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:2204)	
 conv1/kernel/read (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:396)

Input Source operations connected to node conv1/convolution:
 zero_padding2d_1/Pad (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:2204)	
 conv1/kernel/read (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:396)

Original stack trace for u'conv1/convolution':
  File "/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py", line 211, in <module>
    main()
  File "/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py", line 206, in main
    node = MaskRCNNNode()
  File "/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py", line 67, in __init__
    config=config)
  File "/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py", line 1768, in __init__
    self.keras_model = self.build(mode=mode, config=config)
  File "/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py", line 1824, in build
    _, C2, C3, C4, C5 = resnet_graph(input_image, config.BACKBONE, stage5=True)
  File "/vox_ws/src/mask_rcnn_ros/src/mask_rcnn_ros/model.py", line 152, in resnet_graph
    x = KL.Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=True)(x)
  File "/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 619, in __call__
    output = self.call(inputs, **kwargs)
  File "/.local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 168, in call
    dilation_rate=self.dilation_rate)
  File "/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 3341, in conv2d
    data_format=tf_data_format)
  File "/.local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 894, in convolution
    name=name)
  File "/.local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 971, in convolution_internal
    name=name)
  File "/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1071, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/.local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
    op_def=op_def)
  File "/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()

[mask_rcnn-2] process has died [pid 26536, exit code 1, cmd /home/wsamer/vox_ws/src/mask_rcnn_ros/scripts/mask_rcnn_node.py ~input:=rgb_image_sub_topic /camera/rgb/image_raw:=/camera/color/image_raw /camera/rgb/camera_info:=/camera/color/camera_info /camera/depth/image_rect_raw:=/camera/depth/image_raw /camera/depth/camera_info:=/camera/depth/camera_info __name:=mask_rcnn __log:=/home/wsamer/.ros/log/c1b4e2fa-9f8e-11ea-8399-04d9f52030c2/mask_rcnn-2.log].
log file: /home/wsamer/.ros/log/c1b4e2fa-9f8e-11ea-8399-04d9f52030c2/mask_rcnn-2*.log

These topics, /depth_segmentation_node/object_segment and /depth_segmentation_node/segmented_scene, are empty, and /mask_rcnn crashed.

Your help is highly appreciated!

@mhaboali

It looks like the mask_rcnn node crashes immediately after launch, therefore not providing the rest of the pipeline with the predicted object masks required for semantic instance-aware mapping.

The reason for the crash is in the following lines of the log:

2020-05-26 14:23:56.000632: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-05-26 14:23:56.002090: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-05-26 14:23:56.008418: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-05-26 14:23:56.024908: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-05-26 14:23:56.025713: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Traceback (most recent call last):

and

tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[node conv1/convolution (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3341) ]]
	 [[mrcnn_detection/Reshape_1/_4279]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[node conv1/convolution (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3341) ]]

At first glance, it seems that there are some version incompatibility issues with your CUDA, cuDNN, tensorflow-gpu install stack. You should look for an answer to this problem in the relevant repositories, e.g. a quick search led me to this: tensorflow/tensorflow#9489
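
A workaround that is often suggested for CUDNN_STATUS_INTERNAL_ERROR on recent GPUs is to let TensorFlow allocate GPU memory on demand instead of grabbing it all at start-up. As a minimal sketch for a TF 1.x / Keras setup like the one mask_rcnn_node.py uses (where exactly to hook this in depends on how the node builds its session):

import tensorflow as tf
import keras.backend as K

# Allocate GPU memory on demand rather than pre-allocating everything;
# this often avoids cuDNN/cuBLAS initialization failures on RTX cards.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))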

In general, the tensorflow requirements for mask_rcnn_ros are no different from those of the original https://github.com/matterport/Mask_RCNN. Once you are able to solve your issues with that framework, perhaps getting help on their issue tracker, you should be able to run mask_rcnn_ros.

Thanks so much for the details. I'll try to figure it out once I get a chance and keep you posted.

I appreciate your time!

Hi @margaritaG

I'm still not able to run it without errors. I did my best to get it working, but I always get the following:

2020-05-26 14:23:56.024908: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-05-26 14:23:56.025713: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Traceback (most recent call last):

and this one:

tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[node conv1/convolution (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3341) ]]
	 [[mrcnn_detection/Reshape_1/_4279]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[node conv1/convolution (defined at /.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:3341) ]]

I have an Intel machine with an NVIDIA RTX 2070 Super, 32 GB of RAM, and a 16-core i9 CPU. I spent all of yesterday installing everything from scratch, but I still get exactly the same errors.

I appreciate your efforts and time, thanks a lot!

@mhaboali
This seems to be a known issue with the latest generations of GPUs.
You can find more information at this issue.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs.