juglab / n2v

This is the implementation of Noise2Void training.

Hallucinations

VolkerH opened this issue

Hi,
first of all thanks for the new, pip-installable version ... I had wanted to give n2v a spin for a while and this new version just made it so easy.

This is not so much an issue with the code base as with the method itself, so I'm not sure whether a GitHub issue is the best place to discuss it. I would be happy to move the discussion to image.sc if you think that would be a better place.

The issue I see is hallucinations. They are clearly visible when running your 3D example notebook (flywing). The following is a result of running the notebook as is, without any additional parameter tuning:

[image: N2V prediction from the 3D flywing example notebook]

I assume that the region on the right is outside of the wing and therefore should not contain any structure. However, n2v hallucinates cell membrane-like structures (blue outlined area, red arrow) and bright spots (blue arrow) in the empty space.

I'm not too surprised, as the method learns to predict a pixel from its surroundings. If much of the training data contains such structures (a honeycomb-like cell-membrane pattern), this is probably to be expected.
What would be a good strategy to reduce the occurrence of such artifacts? Including more patches with just background? That would probably reduce the predictive capability of the model.
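For readers unfamiliar with why the network predicts a pixel from its surroundings: Noise2Void trains with a blind-spot scheme in which the value at a masked pixel is hidden from the network and replaced, e.g. by a random neighbour value. This is a toy NumPy sketch of that masking step (a simplification for illustration, not the n2v implementation; all array names and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy patch standing in for a training crop.
patch = rng.random((16, 16))

# Pick a few interior pixels to mask (interior so neighbours exist).
n_masked = 4
ys = rng.integers(1, 15, n_masked)
xs = rng.integers(1, 15, n_masked)

# Replace each masked pixel's value with that of a random neighbour,
# so the network can never just copy the centre value through.
masked = patch.copy()
for y, x in zip(ys, xs):
    dy, dx = rng.integers(-1, 2, 2)
    masked[y, x] = patch[y + dy, x + dx]
```

During training, the loss is evaluated only at the masked positions, which is why the prediction at any pixel ends up being driven entirely by its neighbourhood statistics.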

I guess it comes down to the question of under which scenarios I can use n2v, and whether I can trust data from subsequent image analysis when n2v has been used as a pre-processing step.

Hi Volker,

happy to hear that trying N2V is now an easy task. We did indeed invest quite some time to make it that simple.

The image.sc forum is an ideal place to discuss such matters, but maybe you decide after reading the rest of this response.

I say that because I think N2V just showed you something you did not look for carefully enough. The hallucination you point out is, in fact, data. I have tried to show these intensities the good old way, using only Fiji (a max-projection of only planes 29 to 35, a tiny bit of Gaussian blur (sigma=1.0), and the Fire LUT).
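The Fiji steps above can be sketched in plain NumPy if you want to reproduce them in a notebook; this uses random stand-in data, not the actual flywing volume, and the plane range and sigma are the values mentioned above (applying the Fire LUT is purely a display step and is omitted):

```python
import numpy as np

# Random stand-in stack with (Z, Y, X) axes.
rng = np.random.default_rng(42)
stack = rng.random((40, 64, 64))

# Max-projection of only planes 29 to 35 (inclusive).
mip = stack[29:36].max(axis=0)

# A tiny bit of Gaussian smoothing (sigma = 1.0), done as two passes
# of a separable 1-D kernel to mimic Fiji's "Gaussian Blur..." filter.
x = np.arange(-3, 4)
kernel = np.exp(-x**2 / (2 * 1.0**2))
kernel /= kernel.sum()
smoothed = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 0, mip)
smoothed = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, smoothed)
```

Restricting the projection to a few planes and smoothing lightly is what makes the faint background intensities visible without being drowned out by noise.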

[image: forVolker – Fiji max-projection of the raw data]

The big halo around the dead pixel in the top right bugs me a bit more, but since this pixel is super bright throughout the entire stack and sits right inside the darkest background area, I would also not call this a hallucination.

What I often do is download the final 3D prediction, open it in Fiji, and make a two-channel image together with the original, noisy input data. That way it is very fast to browse through the stack and quickly switch back and forth between reconstruction and raw data. Usually that convinces me of the sanity of our reconstructions, even in places that seem surprising at first.
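Building such a two-channel comparison stack is a one-liner with NumPy; this sketch uses made-up arrays standing in for the noisy input and the N2V prediction (both with (Z, Y, X) axes):

```python
import numpy as np

# Hypothetical stand-ins for the noisy input and the N2V output.
raw = np.random.default_rng(1).random((35, 128, 128)).astype(np.float32)
pred = raw.copy()  # pretend reconstruction, same shape as the input

# Interleave into a (Z, C, Y, X) composite: channel 0 = raw,
# channel 1 = reconstruction. Saved with e.g.
# tifffile.imwrite("compare.tif", composite, imagej=True),
# Fiji opens this as a two-channel stack, letting you flip between
# raw and reconstruction on every z-plane.
composite = np.stack([raw, pred], axis=1)
```

Viewers such as napari also accept such an array directly (with the channel axis declared), which gives the same quick back-and-forth comparison without leaving Python.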

I hope this answer is helpful for you,
Best,
Florian

PS: in case you want to continue this discussion, please re-open this issue any time!

Hi Florian,

thanks for taking the time to reply.
Indeed, it appears that I was fooled by my own assumptions (that there should be no structures outside the wing), combined with the fact that I was running the notebook on a remote server and therefore only looked at the projections.

I now see some of the structure in the raw data (although I still wonder why it is there; maybe residual material in the medium, or reflections somewhere in the light path?).
The dead pixel is indeed more difficult. However, there is at least one such dead pixel (it looks like a needle in 3D) in the wing that ends in a bright structure, so maybe the network learned something from that occurrence.

Thanks for taking the time to answer and prompting me to have another look. It has restored enough confidence to now apply this to some of my own data.

Cool! Let me know about your experiences! Enjoy! :)