FanglinBao / HADAR

This is an LWIR stereo hyperspectral database for developing HADAR algorithms for thermal navigation. Based on this database, one can develop algorithms for TeX decomposition to generate TeX vision. One can also develop algorithms for object detection, semantic or scene segmentation, optical or scene flow, stereo depth, etc., based on TeX vision instead of traditional RGB or thermal vision.


Ask for some details

keke-2000 opened this issue · comments

Hi, FanglinBao:
Many thanks to your lab for coming up with such groundbreaking results, which gave me some ideas I can't wait to move forward with. After carefully reading your article, I have some questions about details in the supplementary material:

  1. Section SIA of the supplementary material gives the parameter table of the detector. Which factors should the dark-noise term ξ and the σ term include? I think ξ should cover the spontaneous radiation of the detector, the internal radiation of the camera box, and the radiation of the filter wheel, while σ should cover the electronic noise (Johnson-Nyquist noise, flicker noise, etc.). Is that correct?
    [attached image: 1.png]
  2. For the 0-49 channels of emiLib.mat and the heatcube data used in the code, do they correspond to the wavenumber range 720-1250 cm^-1 or to the wavelength range 8-14 μm? These two interpretations are ordered in opposite directions.
  3. To advance my idea, I really need the original data on the optical-diaphragm effect of the filter wheel mentioned in supplementary section SIVB. I would greatly appreciate it if you could provide it.
    [attached image: 2.png]

Hi Keke,
Very glad that you liked our work.

  1. You are right. The dark-noise term ξ includes contributions from the self-radiation of the system and the sensor, and σ accounts for the electronic noise.
  2. All the data in the HADAR database are given in units of wavenumber (the one exception is the test data in the TeX code package); see the sketch after this list for converting between the two conventions.
  3. The optical-diaphragm effect and the corresponding data apply only to the HADAR prototype-1 experiments; they are irrelevant to HADAR prototype 2 and to the synthetic data. If you are sure you still want that data (basically an image, calibrated for our setup), you can email me at: baof_at_purdue_dot_edu
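
To make the unit convention concrete, here is a minimal sketch of the channel-to-wavelength mapping. It assumes 50 evenly spaced wavenumber samples spanning 720-1250 cm^-1; the actual grid in emiLib.mat may differ, so treat this as an illustration only.

```python
import numpy as np

# Assumed channel grid: 50 evenly spaced wavenumber samples over 720-1250 cm^-1.
# The exact spacing used in emiLib.mat may differ; check the database README.
wavenumbers = np.linspace(720.0, 1250.0, 50)   # cm^-1, channel 0 -> 720 cm^-1

# Convert to wavelength in micrometers: lambda [um] = 10^4 / nu [cm^-1].
wavelengths = 1e4 / wavenumbers                # ~13.9 um down to 8.0 um

# Note the reversed ordering: increasing wavenumber means decreasing wavelength,
# so channel 0 is the long-wavelength end when the data are sorted by wavenumber.
print(wavelengths[0], wavelengths[-1])         # ~13.89, 8.0
```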

Hi, Fanglin Bao:
Congratulations to your lab for doing so well in this area. I read your article and code carefully, and they gave me a lot of inspiration for my project, "Texture enhancement of thermal infrared images". However, since there are few public datasets for my research scenario, I have been thinking about using Blender to make synthetic data. Coincidentally, I found that your experimental data also include Blender-based synthetic data. Although you gave a lot of detail about the data, it seems you did not describe how to simulate with Blender. I am really eager to know your method; would it be convenient for you to share it? The details I want to know are:
1. Blender involves attaching materials to objects. For example, if I want to give a block the material properties of aluminum, I would attach the emissivity of aluminum to the block's self-emission properties (this is how I think of it so far). But are its RGB attributes still needed (RGB color, aluminum appearance, roughness, surface reflection value, etc.)?
2. How do I obtain the ground truth for the images I render, such as the vMap, eMap, and xMap used in TeX-Net?
If you are willing to provide additional information, I would appreciate it very much and look forward to your reply.


Hi MeloHX,
1. You'll need to match the Blender parameters to the physical properties. For example, RGB gives you 3 channels in which to render your data of interest at your wavelengths; the RGB color and reflection are where you assign the self-emission and reflectance values according to your physics model. You'll need to tune the surface roughness as you want, etc.
2. For the ground-truth eMap and tMap, you can assign a uniform color to each material and render it. You'll then need to post-process to map the colors back to your material indices (a sketch follows below). We used the ground-truth eMap and tMap to solve for the vMap.
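
As a concrete illustration of that post-processing step, here is a minimal sketch of a nearest-color lookup. The palette below is hypothetical; replace it with the uniform colors you actually assigned in Blender.

```python
import numpy as np

# Hypothetical palette: the uniform RGB color assigned to each material in Blender.
palette = np.array([
    [255,   0,   0],   # material 0: e.g., aluminum
    [  0, 255,   0],   # material 1: e.g., concrete
    [  0,   0, 255],   # material 2: e.g., vegetation
], dtype=np.float64)

def color_to_material_index(render):
    """Map an HxWx3 uint8 render of uniform-colored materials to an HxW index map.

    Uses nearest-color lookup so that small compression or quantization errors
    still resolve to the intended material.
    """
    pixels = render.reshape(-1, 1, 3).astype(np.float64)       # (N, 1, 3)
    dists = np.linalg.norm(pixels - palette[None], axis=-1)    # (N, num_materials)
    return dists.argmin(axis=-1).reshape(render.shape[:2])     # (H, W)
```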

Another problem: after I attach a unique color to each material, the colors on the objects change after rendering, and the same material ends up with different colors due to reflection and diffusion. In this case, how can I recover the material index by looking the color up in reverse?


Please set the ray depth to 0 to disable scattering. You can check the README file in this database for more details; a sketch of the corresponding Blender setting follows.
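
For reference, a minimal sketch of that setting via Blender's Python API, assuming the Cycles renderer (the exact property names may vary slightly across Blender versions):

```python
import bpy  # Blender's Python API; run inside Blender

# Setting all bounce counts to 0 disables indirect light (scattering),
# so each material keeps its assigned flat color in the render.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.max_bounces = 0
scene.cycles.diffuse_bounces = 0
scene.cycles.glossy_bounces = 0
scene.cycles.transmission_bounces = 0
```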

Hi Fanglin Bao:
I would like to express my heartfelt congratulations and admiration for your outstanding research. I have been closely following your progress, and it is truly impressive.
I am writing to seek assistance with a problem I encountered during visualization. Specifically, most of the TeX-GT images in my visualization are empty, appearing completely black. I have also observed the same empty TeX-GT images in the visualization results of the provided pre-trained models, particularly in the data under "supervised_synexp_r50_fold0_new_eval_visualized". I am very curious about the underlying cause of this issue.
I would be immensely grateful if you could provide some insight into this matter.


Thanks!
Can you load the 'empty' images and check the exact data values? We mainly work with the data values in the image files. I believe it is caused by the image format: when you write values like 0.1 to a PNG image, which only holds integers in 0-255, they are truncated to 0 and the image appears black.
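
A minimal sketch of that check, using imageio (any image library would do; the file names here are hypothetical):

```python
import numpy as np
import imageio.v3 as iio

# Inspect the raw values of the 'empty' image instead of trusting the display.
img = iio.imread("tex_gt.png")          # hypothetical file name
print(img.dtype, img.min(), img.max())  # e.g. uint8 0 0 -> genuinely black pixels

# The failure mode: float data in [0, 1] cast straight to 8-bit truncates to 0,
# so the PNG looks empty even though the underlying data were fine.
data = np.full((64, 64), 0.1)
iio.imwrite("black.png", data.astype(np.uint8))            # 0.1 -> 0: all black
iio.imwrite("visible.png", (data * 255).astype(np.uint8))  # rescale first: gray
```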