Different results when giving multiple test images
xge opened this issue
Hello and thank you for this great tool. I was trying to generate FLIP metrics and images for a large set of images and thought I could speed things up by computing multiple comparisons at once. So instead of running flip-cuda.exe -r Reference.png -t TestX.png for every image in my set, I tried flip-cuda.exe -r Reference.png -t Test1.png -t Test2.png [...].
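Until multiple -t flags work as expected, the pairwise runs can be batched with a small script. This is a minimal sketch under the assumption that the image names match those in the thread; echo is used so the commands are only printed here, and would be dropped to actually invoke flip-cuda.exe:

```shell
# Run one pairwise FLIP comparison per test image.
# Image names are illustrative; remove "echo" to execute for real.
for t in Test1.png Test2.png Test3.png; do
  echo flip-cuda.exe -r Reference.png -t "$t"
done
```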
I attached a screenshot of the result:
While the pairwise execution produces the result I expected (hard to see in the thumbnail), the combined execution produces completely unexpected results. Did I misunderstand the purpose of supplying multiple test images, or is it a bug in the tool?
Thanks for your help and the great software!
Thanks for the bug report... we'll look into this tomorrow!
Btw, I'm curious: what are you using FLIP for?
Bugfix pushed!
Tested the fix and can confirm that it works. Thanks!
I'm currently writing a research paper for my dissertation on performance evaluation of rendering techniques. The samples in the screenshot above are different particle renderers.
Ok, cool! Good luck and let us know if there are other problems or if you find any cases where you do not think that FLIP works as expected.
Hi xge,
Thanks for pointing out the bug!
Good luck with the paper! In case you are comparing both performance and quality of renderers, you might have use of "Pareto diagrams". You can find an evaluation using those in our RTG2 chapter, for example.
Thank you so much for your input. That figure (Fig. 19-8) is actually what made me try out FLIP in the first place.
Cool!
I'm looking forward to reading your paper. It sounds like an interesting topic!