FanglinBao / HADAR

This is an LWIR stereo-hyperspectral database for developing HADAR algorithms for thermal navigation. Based on this database, one can develop algorithms for TeX decomposition to generate TeX vision. One can also develop algorithms for object detection, semantic or scene segmentation, optical or scene flow, stereo depth, etc., based on TeX vision instead of traditional RGB or thermal vision.


How to run the MATLAB code to get the complete data and run TeX-SGD (semi-global decomposition) successfully to get the results?

jackygsb opened this issue · comments

Hi jackygsb,
The code to generate the semantic library has not been released yet. I'm still improving that algorithm and it will be released along with follow-up papers.
The code and data in the HADAR paper start from the obtained semantic library (matLib2.mat, HueLib2.mat, Idx_EnvObj.mat, S_EnvObj.mat).
You can run TeX-SGD following the instructions in mainTeX.m, and it will return eMap/tMap/vMap.
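For readers working from Python rather than MATLAB, a minimal sketch of reading those outputs is below. It assumes the eMap/tMap/vMap variables (names from the comment above) have been saved into a .mat file; the file name `TeX_results.mat` is a placeholder, not an actual file shipped with the package.

```python
# Minimal sketch: load TeX-SGD outputs from a .mat file in Python.
# Variable names eMap/tMap/vMap follow the comment above; the path is a
# placeholder. Note: scipy.io.loadmat reads MATLAB v5-v7 files; for files
# saved with -v7.3 (HDF5), use h5py instead.
from scipy.io import loadmat

def load_tex_maps(path):
    """Return the (eMap, tMap, vMap) arrays from a TeX-SGD result file."""
    data = loadmat(path)
    return data["eMap"], data["tMap"], data["vMap"]
```

From there the three maps can be inspected or fed into any downstream Python tooling as ordinary NumPy arrays.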

Hi jackygsb, I'll check and update in a few days.

Hi jackygsb, I've uploaded the test data. You can now download the TeX_matlab_code_package_with_test_data.zip file from the same link, unzip it, and run mainTeX.m

test.txt
Hi jackygsb,
I've attached my previous MATLAB code to visualize the TeX vision from TeX-Net outputs, to help you configure the TeX object.
You will need to specify some paths/names/parameters for your specific case, though.
Hope this can help.

HueLib2 and matLib2 in the test folder are only for the test; they correspond to the experimental subset of the full database.
HueLib_full and matLib_full are given in the HADAR database folder, named *Lib_fullDatabase.mat. You can copy, paste, and rename them.
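The copy-and-rename step above can be scripted, for example in Python. The source and destination file names follow this thread (the Hue/Hub spelling varies between comments; use whatever your download actually contains), and the directory layout is an assumption you should adjust to your own setup.

```python
# Minimal sketch: copy the full-database libraries into the test folder
# under the *Lib2.mat names that mainTeX.m expects. File names follow the
# thread above; directory arguments are placeholders for your own paths.
import shutil
from pathlib import Path

def use_full_libraries(db_dir, test_dir):
    """Copy *Lib_fullDatabase.mat into test_dir under the *Lib2.mat names."""
    mapping = {
        "HueLib_fullDatabase.mat": "HueLib2.mat",
        "matLib_fullDatabase.mat": "matLib2.mat",
    }
    for src_name, dst_name in mapping.items():
        shutil.copy(Path(db_dir) / src_name, Path(test_dir) / dst_name)
```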


Can you provide Python visualization code for TeX-Net outputs?
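Pending an official Python port, here is a hedged, stdlib-only sketch of a TeX-vision-style rendering. The channel assignment used here (material index eMap to hue, texture vMap to saturation, temperature tMap to value) is an assumption that mirrors the HSV idea of TeX vision; the authoritative mapping is whatever the released MATLAB visualization code does.

```python
# Hedged sketch of a TeX-vision-style rendering (stdlib only).
# Assumed mapping: eMap (integer material class) -> hue,
# vMap (texture, in [0, 1]) -> saturation, tMap (normalized temperature,
# in [0, 1]) -> value. Verify the channel assignment against the MATLAB code.
import colorsys

def tex_to_rgb(eMap, tMap, vMap, n_materials):
    """Convert per-pixel (e, T, X) maps, given as nested lists of equal
    shape, to an RGB image as nested lists of (r, g, b) tuples in [0, 1]."""
    rgb = []
    for e_row, t_row, v_row in zip(eMap, tMap, vMap):
        row = []
        for e, t, v in zip(e_row, t_row, v_row):
            hue = e / max(n_materials, 1)  # spread material classes over hues
            row.append(colorsys.hsv_to_rgb(hue, v, t))
        rgb.append(row)
    return rgb
```

The resulting nested list can be turned into an image with any plotting library, e.g. `plt.imshow` after wrapping it in a NumPy array.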


Hi Fanglin: If my understanding is correct, even TeX-SGD requires a lot of prior knowledge, such as the material library, the number of object categories, and other information. The overall impression is that you must already know the details of a specific scene to compute its ground truth.

So for real scenes (for example, when we collect a heatcube ourselves), we don't even know how many categories the scene contains. How can we get the TeX results? Could you provide a general program that optimizes over an input heatcube, using the material library you provide, to obtain the ground truth?

Hi jackygsb,

As stated in the paper,
" The material library explains the physics but requires on-site collection/calibration. We have also provided a generalized HADAR theory that does not require an input of material library (see section SV.C of the Supplementary Information)... "
We started with a material library to better explain how we implement the TeX decomposition. But we did have a generalized method called the 'semantic library' for real-world scenes where the environment is unknown. The semantic library, the number of categories, and the environmental radiations can be estimated from the data itself. We are working on a follow-up paper which has more details about this semantic library approach. I'll be able to share it once it's done.


Hi Fanglin: Thanks. It seems that the unsupervised (physics-based) loss cannot work independently? I have tried to train TeX-Net using only the unsupervised loss, but the results look bad, especially the predicted eMap. I am thinking about how to verify the performance of TeX-Net without ground-truth data, and how to provide a reference for subsequent research. In particular, you have not published the detailed method for producing the heatcube and the ground truth, such as how the ground-truth eMap, which seems to depend heavily on the semantic library, was obtained, and whether you can provide the corresponding code. Thanks.

Thanks for your continuing efforts to follow the results.

  1. For the moment, we only have good results for supervised and hybrid learning (hybrid loss). Completely unsupervised learning (physics-based loss) is more difficult, and we are still working on it.
  2. For synthetic scenes, the heatcube and the ground truth T/e maps are generated by path tracing (Blender). This is mentioned in the Methods. There is no code for this. For experimental scenes, the heatcube is collected by the hardware imager. We haven't released the code to generate the semantic library. We are still improving this part and will release it in follow-up papers.

@jackygsb
If you would like to, we can connect on a social app for future discussions and sharing the latest results. I mainly use WeChat, but please let me know if you can suggest an alternative app. You can email me. Thanks!


baof[at]purdue[dot]edu