HKUST-Aerial-Robotics / open_quadtree_mapping

This is a monocular dense mapping system corresponding to the IROS 2018 paper "Quadtree-accelerated Real-time Monocular Dense Mapping".


Very slow on TX2 when using vins-fusion-gpu, but no problem when using example.bag

pamzerbhu opened this issue

Thanks for reading. When I use VINS-Fusion-GPU to estimate the pose, it takes very long (about 30 s) for these messages to come out. Also, there is no image on the topic /open_quadtree/depth.
[screenshot: Screenshot_2019-04-03_05-38-04]

1, The image size is too big for the TX2. Please try to resize it according to the paper (see the sketch after this list).
2, The GPU performance on the TX2 can be boosted; please check whether you are in low-power mode.
3, There is a filter that fuses temporal depth maps, and images are buffered first, so it is normal to see nothing at the beginning. When the filter converges, you will get the result. I think the provided bag demonstrates the effect.
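
For point 1, a small standalone republisher along the lines of the sketch below can downscale the stream before it reaches the mapper. This is only an illustration: the topic names are placeholders for your setup, not the repo's actual topics.

```cpp
// Minimal sketch of a ROS node that downscales incoming images.
// Topic names ("/camera/image_raw", "/camera/image_resized") are placeholders.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/imgproc/imgproc.hpp>

ros::Publisher resized_pub;

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    // Convert to OpenCV without changing the encoding.
    cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(msg, msg->encoding);

    // Downscale to 320x240, the size discussed above.
    cv::Mat small;
    cv::resize(cv_ptr->image, small, cv::Size(320, 240));

    // Keep the original header so timestamps stay aligned with the pose.
    cv_bridge::CvImage out(msg->header, msg->encoding, small);
    resized_pub.publish(out.toImageMsg());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "image_resizer");
    ros::NodeHandle nh;
    resized_pub = nh.advertise<sensor_msgs::Image>("/camera/image_resized", 1);
    ros::Subscriber sub = nh.subscribe("/camera/image_raw", 1, imageCallback);
    ros::spin();
    return 0;
}
```

Note that if you resize the image, the camera intrinsics (fx, fy, cx, cy) passed to the mapper must be scaled by the same factor.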

Yes, the bag performance is perfect, and I have set a small image size (320*240) and started high-power mode, as you can see in the picture, but there is still no depth image. Additionally, I tested it indoors; does that have any influence? My launch settings are shown in the next picture:
[screenshot: Screenshot_2019-04-03_06-21-06]
So I want to feed the depth from my MYNT-EYE-D camera into the OpenChisel you modified, but it complains about an unrecognized depth image format. How should I modify it? The next picture shows my OpenChisel launch file. Thanks for your help and reply.

[screenshot: Screenshot_2019-04-03_06-17-21]

1, Well, I can't tell from the picture whether it is in high-performance mode. (Just make sure of that yourself.)
2, The depth should be published as float32. I believe my code publishes messages of this type. If you are using other devices or code, make sure the depth is in this format (see the sketch after this list).
3, Indoor vs. outdoor makes no difference for this method. Are you moving aggressively? Or is the sequence too short? You can check how I moved the camera in the bag or the YouTube video.
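
For point 2, if your sensor publishes depth as mono16/16UC1 (usually in millimetres), a small republisher like the sketch below can convert it to 32FC1 in metres first. The topic names and the millimetre-unit assumption are specific to your device, so check your SDK's documentation.

```cpp
// Minimal sketch: convert a mono16/16UC1 depth image (assumed to be in
// millimetres) to 32FC1 in metres. Topic names are placeholders.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/core.hpp>

ros::Publisher depth_pub;

void depthCallback(const sensor_msgs::ImageConstPtr& msg)
{
    // mono16 and 16UC1 share the same single-channel 16-bit layout,
    // so cv_bridge can hand us the buffer as 16UC1 without converting pixels.
    cv_bridge::CvImageConstPtr cv_ptr =
        cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::TYPE_16UC1);

    // Scale to float metres; 0.001 assumes the sensor reports millimetres.
    cv::Mat depth_m;
    cv_ptr->image.convertTo(depth_m, CV_32FC1, 0.001);

    cv_bridge::CvImage out(msg->header,
                           sensor_msgs::image_encodings::TYPE_32FC1, depth_m);
    depth_pub.publish(out.toImageMsg());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "depth_to_float");
    ros::NodeHandle nh;
    depth_pub = nh.advertise<sensor_msgs::Image>("/depth_float32", 1);
    ros::Subscriber sub = nh.subscribe("/camera/depth_raw", 1, depthCallback);
    ros::spin();
    return 0;
}
```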

My device's SDK publishes the depth image as mono16 and the color image as rgb8. Now I want to send these images directly into the OpenChisel you modified. Should I change the format first? In the Kinect launch example, I found it supports subscribing to the Kinect depth image directly. Can I send you a WeChat message if it's convenient for you?

Kinect? I checked my code and did not find a Kinect launch file. The modified OpenChisel is only used for my depth-map fusion. If you want to use other input, I am afraid you need to modify it yourself. Please see https://github.com/WANG-KX/OpenChisel/blob/f263445204a092f88944688401d1efde78f72bbb/chisel_ros/include/chisel_ros/Conversions.h#L162

My WeChat: tonybear_0728

Changing Conversions.h line 168 to
`if (image->encoding == "16UC1" || image->encoding == "mono16")`
solved the OpenChisel problem. Thanks a lot!
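
For anyone hitting the same error, here is a hedged reconstruction (not the verbatim OpenChisel source) of why that one-line change is enough: mono16 and 16UC1 name the same memory layout, one unsigned 16-bit value per pixel, so the existing 16UC1 decoding path can consume mono16 data unchanged.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative sketch of the shared 16-bit depth decoding path.
// units_to_metres is e.g. 0.001 when the sensor reports millimetres.
std::vector<float> decodeDepth16(const std::string& encoding,
                                 const std::vector<uint8_t>& data,
                                 float units_to_metres)
{
    std::vector<float> depth_m;
    if (encoding == "16UC1" || encoding == "mono16")
    {
        // Both encodings are one uint16_t per pixel, so one cast covers both.
        const uint16_t* raw = reinterpret_cast<const uint16_t*>(data.data());
        const size_t n = data.size() / sizeof(uint16_t);
        depth_m.reserve(n);
        for (size_t i = 0; i < n; ++i)
            depth_m.push_back(raw[i] * units_to_metres);
    }
    return depth_m;
}
```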