HKUST-Aerial-Robotics / open_quadtree_mapping

This is a monocular dense mapping system corresponding to the IROS 2018 paper "Quadtree-accelerated Real-time Monocular Dense Mapping".

Questions about running MH_01_easy.bag

DYYYYYYYYY opened this issue

Hello, I have combined vins_estimator with open_quadtree_mapping, and the data type problem has been solved. However, when I play MH_01_easy.bag, the depth map stays black for a long time and the point cloud is almost empty (only a few points). What is going on?

Screenshot from 2019-05-31 09-57-01
Screenshot from 2019-05-31 09-59-25

Dear,

Thanks for your interest! EuRoC is a challenging dataset for monocular depth estimation, and we have not tested the method on it. One possible reason is that the camera rotation makes the depth filter hard to converge. To validate this hypothesis, you can disable the depth filter by commenting out this line:

fuse_output_depth();

This will cause the method to generate noisy estimations. You can post the result here and we can see whether depth filter convergence is the reason.
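For reference, a minimal sketch of what the edit could look like (only fuse_output_depth() is the project's own call; the surrounding function and names are hypothetical placeholders):

void update_depthmap()
{
    extract_depth();         // placeholder: per-frame depth extraction
    // fuse_output_depth();  // depth-filter fusion disabled for this test
    publish_results();       // placeholder: publish depth map and point cloud
}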

Regards,
Kaixuan

First of all, thank you for your reply. In fact, I want to use a MYNT® EYE Standard camera with vins_estimator and open_quadtree_mapping to reconstruct dense maps. Could a similar situation happen with an external camera? What are the specific requirements for the camera frame rate and the IMU rate?
I look forward to your reply!

Other than that, my platform is a Jetson TX2.

Dear,

If I remember correctly, the MYNT EYE is a stereo camera, right? Why not use a stereo method to estimate the depth map? Could you not find any efficient stereo methods?

If you insist on using a monocular method, I recommend trying learning-based methods (for example, https://github.com/HKUST-Aerial-Robotics/MVDepthNet), as they can generate better results.

For quadtree_mapping, first, you need a good calibration. We use 30 Hz images and a 200 Hz IMU, with pose estimation at 10~15 Hz. The quality of this method depends a lot on the camera movement, because the disparity used to triangulate depth comes from the camera motion.
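To make the motion requirement concrete, here is a back-of-the-envelope sketch (illustration only, not code from the repository; the numbers are hypothetical). The disparity available for triangulation is roughly d = f * b / z, so nearly pure rotation (baseline b close to zero) gives the depth filter nothing to converge on:

#include <cstdio>

// Rough disparity-from-motion estimate: d = f * b / z, with focal length f
// in pixels, translation baseline b in meters, and scene depth z in meters.
double expected_disparity(double focal_px, double baseline_m, double depth_m)
{
    return focal_px * baseline_m / depth_m;
}

int main()
{
    // Hypothetical numbers: ~460 px focal length, 5 cm of translation
    // between frames, scene 3 m away: about 7.7 px of disparity.
    std::printf("expected disparity: %.2f px\n",
                expected_disparity(460.0, 0.05, 3.0));
    // With pure rotation the baseline is near zero, the disparity vanishes,
    // and the depth filter cannot converge; this is consistent with black
    // depth maps on rotation-heavy sequences.
    return 0;
}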

Hope it helps.
Kaixuan

Hello, thank you for your reply. In fact, the MYNT EYE stereo camera comes with its own depth estimation algorithm, but the depth maps it estimates on the TX2 are too bad, while they are quite good on a notebook with a GTX 1060 graphics card. I want to use the TX2 platform on a drone in the future.

Ok...
Depth maps from stereo cameras will be better than those from a monocular camera; otherwise, you could simply drop the right images, right?

Maybe you need to check the image quality from the MYNT EYE; I have never used this camera and do not know its image quality. In our experiments, a monocular camera can generate depth maps sufficient for (not very fast) UAV navigation, and we have done plenty of demos and experiments (for example, https://www.youtube.com/watch?v=O4YJ0aXcP9I&t=192s).

I think the difference is the platform. (In the video, we distribute VINS and mapping on an i7 and a TX2, respectively. There are two computers on the drone, connected by Ethernet. We use a Bluefox camera.) I understand that you only have one TX2, and we are also trying to reduce the number of devices on the drone. That requires algorithm optimization and maybe more research on this topic.

Good luck, and you are welcome to discuss mapping optimization.

Thank you for sharing. Is the content in your video already open source? If not, I am currently working on a vision-based autonomous obstacle avoidance project and would like to learn from and use some of your code. Could you please give me some advice? I would be very grateful.

Well...
For the drone navigation in the video, both the odometry and the mapping parts are open source under HKUST-Aerial-Robotics, and there are many planning methods too. All you need to do, if you want to reproduce the experiment, is the systematic work of integrating the software with the drone hardware.

Can you please let me know which mapping framework and which path planning method are used in this video?
Thank you