brain-research / tensorfuzz

A library for performing coverage guided fuzzing of neural networks

Mapping normalized images to real images

shaobo-he opened this issue · comments

Hi tensorfuzz developers,

Thank you for making this tool public. I have a quick question about the quantization example. It seems that tensorfuzz works on normalized images, where each entry in the matrix is a floating-point value in [-1, 1]. So it appears to me that a mutated normalized image, despite producing a different prediction, might not map back to a valid image in the original MNIST format, where entries are integers.

I noticed that there's a piece of code that double-checks the validity of the mutated image. Is it related to this question?

I may be missing something. Please let me know if this makes sense.

Hi, I'm not totally sure I understand your question, but I will try to answer and you can tell me if it was helpful:

In general the MNIST digits are integers in the range [0,255].
When you train vision models on them, you generally cast those ints to floats and normalize them to lie in [-1, 1].
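To make that mapping concrete, here is a minimal sketch of the cast-and-normalize step and its inverse, assuming NumPy and the common `x / 127.5 - 1` convention (the exact scaling used by any given model may differ):

```python
import numpy as np

def normalize(img_uint8):
    """Map integer pixels in [0, 255] to floats in [-1, 1]."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0

def denormalize(img_float):
    """Map floats in [-1, 1] back to integer pixels in [0, 255]."""
    return np.clip(np.round((img_float + 1.0) * 127.5), 0, 255).astype(np.uint8)

img = np.array([[0, 128, 255]], dtype=np.uint8)
norm = normalize(img)  # roughly [[-1.0, 0.0039, 1.0]]
assert np.array_equal(denormalize(norm), img)  # original pixels round-trip exactly
```

Every original MNIST pixel survives this round trip exactly; the question is only about mutated floats that fall *between* these 256 representable levels.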
If you find a disagreement on a particular input and that input uses more precision than
the original MNIST dataset has, then technically you will not be able to map that input back to any original MNIST digit, but that's OK because:

a) that was never really the goal; we are concerned with the accuracy of the quantized model on unseen inputs, which may come with more precision
b) the precision may go away when you feed the input through a quantized model anyway, depending on the implementation
c) we already checked that no disagreements were found on the test set for our example model.
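As a concrete illustration of points (a) and (b): a mutated pixel value generally does not land exactly on one of the 256 integer-representable levels, so snapping it back to the integer grid changes it slightly. A small sketch, assuming the [-1, 1] normalization convention:

```python
import numpy as np

# A mutated normalized pixel that falls between integer grid points.
mutated = np.float32(0.123456)

# Snap it back to the nearest MNIST-representable value.
as_int = np.clip(np.round((mutated + 1.0) * 127.5), 0, 255)  # -> 143
snapped = as_int / 127.5 - 1.0                               # -> ~0.1216

# The mutated value carries more precision than any original MNIST
# pixel, so the round trip is lossy.
print(abs(mutated - snapped) > 0)  # prints True
```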

The check you found is for a different reason:
The classifier may give outputs that differ between the original and the quantized version simply due to stochasticity in e.g. the TensorFlow matrix-multiply implementation.
Thus, when we find a disagreement, we check that it persists across multiple tries of the same inference.
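That persistence check can be sketched roughly like this (the function and model names below are hypothetical stand-ins, not tensorfuzz's actual API):

```python
def persistent_disagreement(model_a, model_b, x, num_tries=5):
    """Re-run both models several times on the same input; report a
    disagreement only if every try produces differing predictions.
    This filters out one-off differences caused by nondeterminism in
    e.g. the matrix-multiply implementation."""
    return all(model_a(x) != model_b(x) for _ in range(num_tries))

# Hypothetical stand-ins for the original and quantized classifiers.
original = lambda x: 3
quantized = lambda x: 7

print(persistent_disagreement(original, quantized, x=None))  # prints True
```

A disagreement that appears on only some of the repeated runs is discarded as noise rather than reported as a quantization error.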