brain-research / tensorfuzz

A library for performing coverage-guided fuzzing of neural networks

What is a spurious disagreement in the quantization example?

vv-ss opened this issue · comments

Hi,

I am interested in the accuracy loss caused by quantization and was running the quantized_fuzzer.py example. In the script, I see that we first get a "result" when the objective function is not met, i.e. the argmax of logits and the argmax of quantized_logits differ. Then we check whether the disagreement is genuine or spurious. Is this check meant to capture non-determinism in floating-point operations? I also see that the loop runs 10 times for the same input. Is that intentional?
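For context, my reading of the spurious-disagreement check is sketched below. This is my own hypothetical reconstruction, not tensorfuzz's actual code: `is_spurious_disagreement`, `infer_fp32`, and `infer_quantized` are names I made up, and I assume the idea is that if the two models ever agree on a rerun of the same input, the original disagreement is attributed to run-to-run nondeterminism rather than quantization.

```python
import numpy as np

def is_spurious_disagreement(x, infer_fp32, infer_quantized, num_trials=10):
    """Re-evaluate a single disagreeing input several times.

    Hypothetical helper (not tensorfuzz's API): if the full-precision and
    quantized models produce the same argmax on any rerun, the original
    disagreement is assumed to come from nondeterministic floating-point
    execution and is flagged as spurious.
    """
    for _ in range(num_trials):
        logits = infer_fp32(x)
        quantized_logits = infer_quantized(x)
        if np.argmax(logits) == np.argmax(quantized_logits):
            return True   # models agreed at least once -> spurious
    return False          # disagreement reproduced every time -> genuine
```

Is that roughly the intent, i.e. the 10 reruns are there purely to filter out flaky disagreements?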

Thanks!