Barlow's redundancy
ArminMne opened this issue
Hello,
Could you please help me understand the redundancy-reduction idea from Barlow's work mentioned in the paper? I am having a bit of trouble grasping it. I found someone else asking about it on Reddit as well.
Here is the Reddit link: https://www.reddit.com/r/MachineLearning/comments/ma10iu/d_barlow_twins_ssl_via_redundancy_reduction/?utm_source=share&utm_medium=ios_app&utm_name=iossmf
Thanks,
Armin
Hi Armin,
Natural images are highly redundant, in the sense that nearby pixels tend to have very similar RGB values. A redundancy-reduced code of images is a code whose components carry non-redundant information, and which can thus be more compact. This is, for example, how JPEG compression works: by taking advantage of the high redundancy of nearby pixels, it creates a compressed representation of the image. Another example of a redundancy-reduced code is found in the retina: https://direct.mit.edu/neco/article/2/3/308/5533/Towards-a-Theory-of-Early-Visual-Processing
In general, redundancy reduction has been a powerful principle for explaining the structure and organization of the visual system in neuroscience. In Barlow Twins, we use this principle to build representations that do not collapse, in the sense that the learned representations remain richly informative about the input image.
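In case it helps to see the principle as code, here is a minimal sketch of the objective (the function name `barlow_twins_loss` and the standalone form below are illustrative, not the exact code in this repo): the cross-correlation matrix between the embeddings of two augmented views is pushed towards the identity, so the diagonal terms enforce invariance to the augmentations while the off-diagonal terms penalize redundancy between embedding components.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    # z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    n, d = z1.shape

    # Standardize each embedding dimension over the batch.
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)

    # Cross-correlation matrix between the two views' embeddings, shape (dim, dim).
    c = (z1.T @ z2) / n

    # Invariance term: diagonal entries should be 1 (the two views agree).
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()

    # Redundancy-reduction term: off-diagonal entries should be 0
    # (embedding components carry non-redundant information).
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()

    return on_diag + lambd * off_diag
```

Driving the off-diagonal terms to zero decorrelates the embedding components, which is exactly the redundancy-reduction idea above and is what prevents all components from collapsing onto the same information.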
Let us know if you have more questions about this neat principle.