FanglinBao / HADAR

This is an LWIR stereo-hyperspectral database for developing HADAR algorithms for thermal navigation. Based on this database, one can develop algorithms for TeX decomposition to generate TeX vision. One can also develop algorithms for object detection, semantic or scene segmentation, optical or scene flow, stereo depth estimation, etc., based on TeX vision instead of traditional RGB or thermal vision.

S_EnvObj Generation unclear

Daepilin opened this issue

Hello Dr. Bao,

Sorry in advance for more questions. I have spent the last few weeks trying to get TeX-SGD working, and while your previous clarification about the material mapping definitely helped, I'm still facing issues.

I ported the MATLAB code over to Python for my own experiments and confirmed that the optimization produces the same results (and all the intermediate steps as well). So far so good. (I can also recommend this, as some SciPy optimizers are much quicker for me than MATLAB itself.)

I tested this on the real-world Scene 11 and got great results!

But on the simulated scenes I simply cannot get all materials to be estimated correctly.

In Scene 7, first frame, for example, most materials are correct, but the trees and flowers/grass tufts always come out as water. The grass surfaces, on the other hand, work fine.
[Photo: 20231211_133437]
(Please ignore the lack of texture, etc.; I have not ported some of the visualization modes/details.)

Since Scene 11 works, I think this must be due to the environment object estimation (or maybe the temperature or radiance parameters, but I've tested a lot there with similar results), as you provide a ground-truth environment estimation for that scene (which I used for testing).

Other scenes show similar errors, with a few materials being categorically wrong.

In your follow-up "Why are thermal images blurry?" you write that you use K-means to group environment objects and then average channel-wise across each object. Do I understand that part correctly?

I get very clean clusters in the aforementioned scene (one cluster being the sky and stones, the other being everything else), but I feel like I'm still doing something wrong.
[Photo: 20231211_133310]

I then take each cluster and calculate the mean for each of the 54 channels.
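In Python, what I'm doing is roughly the following (a minimal sketch of my approach; the file path and the use of SciPy's kmeans2 in place of MATLAB's kmeans are placeholders/assumptions, not your reference code):

    import numpy as np
    from scipy.cluster.vq import kmeans2

    # Placeholder: load the hyperspectral cube of shape (H, W, C); path is illustrative.
    HSI = np.load('scene07_frame01_heatcube.npy')

    # Flatten to (H*W, C) so each row is one pixel's spectrum.
    pixels = HSI.reshape(-1, HSI.shape[-1]).astype(np.float64)

    # Cluster pixels into 2 environment objects; kmeans2 returns (centroids, labels).
    _, labels = kmeans2(pixels, 2, minit='++', seed=0)

    # Channel-wise mean spectrum of each cluster -> one row per environment object.
    S_EnvObj = np.stack([pixels[labels == k].mean(axis=0) for k in range(2)])
    print(S_EnvObj.shape)  # (2, C)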

Any idea on why this might not work as expected?

PS: I'm sorry I cannot post proper screenshots; these photos are all I can provide.

Hi Daepilin,

Thanks for sharing your thoughts about a Python version of TeX-SGD. I'm definitely interested and would like to test it if you've released it somewhere.

Since you have good results for Scene 11, I think the error you encountered with the synthetic scenes is not a bug in your code. Environment estimation does matter. K-means clustering is the best solution we have for the moment, but there are many remaining aspects that deserve further exploration.

For objects in crowded scenes, the non-uniformity of an object's environment starts to dominate. In principle, more environmental terms are needed for crowded scenes (e.g., Scene 7 or Scene 1) than for open scenes (like the experimental Scene 11 or the synthetic Scene 9). But more environmental terms immediately require more spectral bands to solve the problem, and the solution is less robust (it gets trapped in local minima). Also note that the synthetic data contain numerical error as well, from the finite sampling of the Monte Carlo simulations.
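Roughly, the per-pixel signal model is

$$S_{\alpha\nu} = e_{\alpha\nu}\,B_{\nu}(T_{\alpha}) + (1-e_{\alpha\nu})\,X_{\alpha\nu},\qquad X_{\alpha\nu} = \sum_{\beta} V_{\alpha\beta}\,S_{\mathrm{EnvObj},\beta,\nu},$$

so each additional environment object $\beta$ adds one more unknown $V_{\alpha\beta}$ per pixel, while the number of spectral bands constraining them stays fixed.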

We think using spatial+spectral information for TeX decomposition (i.e., TeX-Net) is a promising way to go in the future.

Hope this can help!

Definitely interesting input! Thanks for that!

Unfortunately, for Scene 7 I could not quite reproduce correct results, even with 6 environment objects.

But your opinion and my experiments seem to confirm my idea that it's the environmental radiation, so I will continue my investigation there.

Except for the temperature: the MATLAB example uses 10-20 °C for the real scene, but for the simulated data this is obviously not correct. Would it be possible to know that range for those scenes as well?

As for sharing code: I'm not sure I will be able to share everything, but I will do what I can when I can :)

What I can say already is that I have had good experiences with the SciPy SLSQP optimizer (maybe a bit noisier than interior-point, but much, much faster).
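For reference, the call is essentially this (a minimal sketch with a stand-in objective and made-up bounds; the real objective is my port of the per-pixel TeX-SGD residual):

    import numpy as np
    from scipy.optimize import minimize

    # Stand-in objective; in my port this is the per-pixel spectral residual.
    def objective(x):
        target = np.array([0.9, 0.1, 295.0])  # made-up values, for illustration only
        return float(np.sum((x - target) ** 2))

    x0 = np.array([0.5, 0.5, 290.0])                    # initial guess for the unknowns
    bounds = [(0.0, 1.0), (0.0, 1.0), (280.0, 320.0)]   # box constraints per unknown

    res = minimize(objective, x0, method='SLSQP', bounds=bounds)
    print(res.x, res.fun)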

Yes, you can check the tMap in the ground truth folder for the ground truth temperatures. You can set upper and lower bounds accordingly.
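For example, something like this (a rough sketch; the file path and variable name are placeholders, adjust them to the actual ground-truth files):

    import numpy as np
    from scipy.io import loadmat

    # Placeholder path/variable name for the ground-truth temperature map.
    tMap = loadmat('GroundTruth/Scene7/tMap.mat')['tMap']

    # Use the scene's true temperature range (plus a small margin) as the bounds
    # on the temperature unknown in the optimizer.
    T_lo = float(np.min(tMap)) - 5.0
    T_hi = float(np.max(tMap)) + 5.0
    print(T_lo, T_hi)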
The 'SciPy SLSQP optimizer' is the information I wanted to know. That is sufficient. Thanks.

Glad I could provide some useful info. I'm also still experimenting with other optimization algorithms, but the final results seem very similar right now.

Maybe 2 more things on this:

  1. What did you use to estimate the environment in Scene 11, if you can share that info? I've compared the GT you provide with my own K-means results, and the estimates are quite significantly different. Do you provide your own cluster centers, or do you use an automatic approach?

  2. Do you do further processing with the cluster means before using them in later steps? Right now I'm basically just using the following (an example of what I tried for comparing your Scene 11 GT with my code):
    HSIflat = reshape(HSI, [], 49);
    cluster_idx = kmeans(HSIflat, 2);
    Env1 = mean(HSIflat(cluster_idx==1, :), 1);
    Env2 = mean(HSIflat(cluster_idx==2, :), 1);
    S_EnvObj = cat(1, Env1, Env2);

Of course, this does not tell me which of the two (or more) clusters is the sky, but from my understanding that is only relevant for visualization?

Of course, only if you want to/can answer this. I know you wrote in other issues that you plan further releases, so I don't want to pry too much.

Yes, we plan to confirm everything is correct and make the process more robust before we release it.
For Scene 11, the sky is mostly not included in the images, so we estimated the sky radiance from the reflection off the checkerboard marker. Two out of the four squares of the marker are highly reflective and diffusive.

Thank you very much :)