MichaelRamamonjisoa / SharpNet

SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation



Do you have code for inserting the Stanford Bunny into the scene?

Kakoedlinnoeslovo opened this issue · comments

I mean the code implementing Figure 1 of https://arxiv.org/pdf/1905.08598.pdf.
Thank you for your reply!

Hello, thank you for your interest in the paper. Yes, I have code somewhere on my computer, but it is rather hard-coded at the moment. Here is the general idea if you want to generate the same figures (some explanations are also in the supplementary material):

  1. Choose your image I1, and compute the estimated depth D1 with SharpNet
  2. Render a depth map D_bunny and an RGB image I_bunny of the Stanford Bunny using this ply model and a renderer of your choice (I used Blender), with an object pose of your choice.
  3. Compute the new RGB image I2 that is a copy of I1 but with
    I2[D_bunny<D1] = I_bunny[D_bunny<D1]
    This replaces pixels of the original image with pixels of the rendered bunny image wherever the bunny occludes the scene, and keeps the original pixels where the scene occludes the bunny.
  4. Look at the result and check whether the bunny appears at the distance you intended; if not, repeat from step 2.
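The depth-based compositing in step 3 can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the author's actual code; the function name `composite_bunny` and the convention that empty bunny pixels carry a very large depth are my assumptions.

```python
import numpy as np

def composite_bunny(i1, d1, i_bunny, d_bunny):
    """Composite a rendered object into a photo using predicted depth.

    i1      : (H, W, 3) original RGB image
    d1      : (H, W)    depth predicted for i1 (e.g. by SharpNet)
    i_bunny : (H, W, 3) rendered RGB image of the bunny
    d_bunny : (H, W)    rendered depth of the bunny; assumed to hold a very
                        large value (e.g. np.inf) where the bunny is absent
    """
    i2 = i1.copy()
    # Where the bunny is closer to the camera than the predicted scene
    # depth, overwrite the scene pixels with the rendered bunny pixels.
    mask = d_bunny < d1
    i2[mask] = i_bunny[mask]
    return i2
```

Since `np.inf` is never smaller than a finite scene depth, pixels outside the bunny silhouette are left untouched, so only the mask comparison does the occlusion reasoning.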

I am sorry if you were looking for a true augmented-reality method; I may do that later for sequences of registered images, but not for a little while.

Hello Michael, thank you for your reply! Do you use contours in step 3? And what do you mean by the notation D1?

Hey, sorry for the late reply. No, I do not use contours in step 3, only the depth prediction. As for D1, it denotes the depth map predicted by SharpNet.