Testing of failure conditions is difficult
daverumph opened this issue
I'm trying to write tests for the KPF pipeline, both tests of correct operation and tests that I expect to fail in various ways (for example, syntax errors in our recipe format). I have run into two issues:
For tests that I expect to fail, I expect the pipeline code to raise the RecipeError exception, and the test to confirm that RecipeError was raised by catching it. In the present Framework implementation this doesn't work, because 1) MainLoop catches all exceptions, and 2) it calls os._exit(), so my test code never regains control to validate that the proper exception was raised. (The error log contains that information, but parsing the error log is not well suited to automated testing.)
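For illustration, this is roughly the test shape I'd like to be able to write. RecipeError stands in for the pipeline's actual exception class, and run_recipe is a hypothetical entry point, not the Framework's real API:

```python
# Stand-ins: RecipeError mimics the pipeline's exception; run_recipe is a
# hypothetical entry point used only to illustrate the desired test shape.
class RecipeError(Exception):
    pass

def run_recipe(recipe_text):
    # A real implementation would parse and execute the recipe.
    if "syntax" in recipe_text:
        raise RecipeError("bad recipe syntax")

def test_bad_recipe_raises():
    # Only possible if the framework propagates the exception back to the
    # caller instead of catching it and then calling os._exit().
    try:
        run_recipe("recipe with a syntax error")
    except RecipeError:
        return True
    raise AssertionError("RecipeError was not raised")
```

With os._exit() in the path, the `except RecipeError` clause above is never reached, which is the heart of the problem.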
For tests of proper operation, because main_loop exits the process, those tests must be implemented as shell scripts. That's workable, I suppose, but not my preference for test automation.
I think that in general the Framework needs to shut down by joining its threads and ultimately returning to the code that started it. I'm willing to work toward that end, but I'm raising this issue to start a conversation about this topic and what, if anything, to do about it.
--Dave
I have been working on improving the framework for the last two weeks.
Amongst other things, I am looking into what happens after the action is completed, successfully or otherwise.
At this time, I am thinking of introducing a kind of return code for actions.
Maybe we can talk about this over email.
When the framework reaches the point where _exit() is called, it means there are no more events to process. There could be a problem if threads that the framework didn't start are still running.
Pipeline primitives should not start threads in the background.
If they do start threads, all threads must terminate before leaving the primitive.
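As a sketch of that rule (the primitive and its workload here are hypothetical, purely to show the join-before-return pattern):

```python
import threading

def primitive(frames):
    # Hypothetical primitive that farms its work out to worker threads.
    results = []
    lock = threading.Lock()

    def worker(frame):
        processed = frame * 2  # stand-in for real frame processing
        with lock:
            results.append(processed)

    threads = [threading.Thread(target=worker, args=(f,)) for f in frames]
    for t in threads:
        t.start()
    # Join every thread before leaving the primitive, so no stray
    # threads are still running when the framework shuts down.
    for t in threads:
        t.join()
    return results
```

If a primitive returned before the joins, the framework could reach its shutdown path while those workers were still alive.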
Instead of calling os._exit(), I am adding a hook, self.on_exit(), which you can override in case there are other cleanup tasks to do.
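A minimal sketch of that hook pattern (only on_exit comes from the description above; everything else here is illustrative, not the Framework's actual internals):

```python
class Framework:
    # Illustrative base class showing the hook pattern only.
    def on_exit(self):
        # Default: nothing extra to clean up.
        pass

    def main_loop(self):
        # ... process events until the queue is empty ...
        # then run the hook and return, instead of calling os._exit().
        self.on_exit()

class MyFramework(Framework):
    def __init__(self):
        self.cleaned_up = False

    def on_exit(self):
        # Override to run extra cleanup before control returns to the caller.
        self.cleaned_up = True
```

Because main_loop now returns, the calling test regains control after the hook runs.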
I pushed an update to the develop branch.
Take a look at unit_tests/test_run_example.py.
First, you instantiate the framework object, then ingest some data files.
Then call main_loop(), instead of start().
Then check the output.
You can keep using the same framework object, or get a new one if you want to change pipelines.
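The sequence above has roughly this shape. The stub Framework below only mimics the call sequence; the real object in unit_tests/test_run_example.py takes a real pipeline and configuration, ingests actual data files, and processes real events:

```python
# Stub mimicking only the instantiate / ingest / main_loop / check
# sequence described above; names and arguments are illustrative.
class Framework:
    def __init__(self, pipeline):
        self.pipeline = pipeline
        self.queue = []
        self.outputs = []

    def ingest_data(self, files):
        self.queue.extend(files)

    def main_loop(self):
        # Runs until the event queue is empty, then returns to the
        # caller instead of calling os._exit().
        while self.queue:
            self.outputs.append(self.pipeline(self.queue.pop(0)))

fw = Framework(pipeline=str.upper)   # 1. instantiate the framework object
fw.ingest_data(["a.fits", "b.fits"])  # 2. ingest some data files
fw.main_loop()                        # 3. main_loop() instead of start()
assert fw.outputs == ["A.FITS", "B.FITS"]  # 4. check the output

# The same framework object can be reused for another run.
fw.ingest_data(["c.fits"])
fw.main_loop()
```

The key property for test automation is step 3: main_loop() returning to the caller is what lets the assertions in step 4 run at all.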
I use the shell script to test multi-processing. You may not need that.