inference_benchmark.py script always returns exit code 0
nberezina opened this issue · comments
We have noticed that the inference_benchmark.py script always returns exit code 0.
Is this intended behavior?
This can complicate troubleshooting when integrating the tool into CI.
Our suggestion is to modify the code to return 1 if an exception occurs during inference, and to print a proper traceback.
We can handle the implementation as well.
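A minimal sketch of what we have in mind (the `run_inference` function here is a hypothetical stand-in for the script's actual inference logic, not its real API):

```python
import sys
import traceback


def run_inference():
    # Hypothetical placeholder for the benchmark's real inference step.
    # Raising here simulates an inference failure.
    raise RuntimeError("model failed to load")


def main() -> int:
    try:
        run_inference()
    except Exception:
        # Print a proper traceback so CI logs show what went wrong,
        # and report failure via a non-zero exit code.
        traceback.print_exc()
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

This way CI can gate on the exit code directly instead of parsing the log output.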
A return code of "0" (zero) is supposed to indicate SUCCESS, isn't it?
ERROR_SUCCESS
0 (0x0)
Yes, it is. And it indicates success even if inference actually failed.
@nberezina, I agree that from a CI perspective this will cause problems. If you are able to make the corresponding changes, we have no objections on our side.
@nberezina Do you have specific requirements for distinct, detailed error codes, or just "zero == success (including cases where no object was detected or nothing was classified, as long as no errors occurred)" and "non-zero == something failed"?
@brmarkus A zero/non-zero distinction works fine for us.
@valentina-kustikova I will come up with a PR. Please assign the issue to me for implementation.