outcome has a test dependency on trio, while trio has a regular dependency on outcome
catern opened this issue
This makes it difficult to run outcome's tests while building a set of packages; if one's Python build process runs the tests for each package as it is built, the only option is to disable outcome's tests.
Is it possible to break the circular dependency?
This is a niche requirement. As an example, at least a few of pytest's own dependencies test themselves using pytest.
I'm not familiar with NixOS, but isn't this part of the attrs package disabling testing for this same reason?
Not only is it niche, it's also not a requirement. :) As you point out, the attrs package disables tests because it also has a circular dependency with pytest.
Disabling the tests is always an option, but obviously it is better to run the tests than not to run them. Most (if not all) other package managers that try to run tests will have the same requirement.
For what it's worth, the tests will run whenever any of outcome's dependencies change (admittedly just attrs), and even when the Python interpreter changes. That could alert you to problems and give an extra layer of assurance.
Of course, this is all irrelevant if it's not possible to break the circular dependency. It looks like the usage of trio in the test is minimal. How hard would it be to break?
async_generator had some messy problems like this – it used to use pytest-asyncio to test things, but then pytest-asyncio itself gained a dependency on async_generator, and it messed up our coverage and things. The way we fixed it there was to switch to using a silly little fake async loop: https://github.com/python-trio/async_generator/blob/master/async_generator/_tests/conftest.py
Basically that conftest.py does two things:

- It defines a `mock_sleep` routine (maybe this should have been called `checkpoint`) and a tiny coroutine runner where the only primitive async calls it knows how to handle are calls to `mock_sleep`
- It arranges to use this runner for any tests that are marked as async
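To make the idea concrete, here is a minimal sketch of that kind of fake loop (hypothetical names, not the actual conftest.py code): a `mock_sleep` awaitable plus a tiny runner that only knows how to handle `mock_sleep` yields.

```python
import types

@types.coroutine
def mock_sleep():
    # Yield a sentinel that the runner below recognizes.
    yield "mock_sleep"

def run_async(coro):
    # Drive the coroutine to completion; the only async primitive
    # supported is mock_sleep, so any other yield is an error.
    while True:
        try:
            value = coro.send(None)
        except StopIteration as exc:
            return exc.value
        if value != "mock_sleep":
            raise RuntimeError(f"unexpected await: {value!r}")

async def example():
    await mock_sleep()
    return 42

print(run_async(example()))  # -> 42
```

No real event loop is needed: the runner just steps the coroutine manually, which is enough for tests whose only "async" behavior is hitting a checkpoint.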
I think that might work for outcome as well? outcome's async tests are pretty simple.
Since it's possible that outcome could break the part of trio that's running the async functions under test, I'll switch the tests to use asyncio.
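For illustration, an asyncio-based test can exercise the same capture-a-result-or-exception pattern without touching trio at all. The `capture_async` helper below is a toy stand-in for the idea, not outcome's real API:

```python
import asyncio

async def capture_async(async_fn, *args):
    # Run an async function and capture either its return value or
    # the exception it raised, outcome-style.
    try:
        return ("value", await async_fn(*args))
    except Exception as exc:
        return ("error", exc)

async def ok():
    return 1

async def boom():
    raise ValueError("nope")

print(asyncio.run(capture_async(ok)))  # -> ('value', 1)
kind, exc = asyncio.run(capture_async(boom))
print(kind, type(exc).__name__)        # -> error ValueError
```

Since asyncio ships with the standard library, the test suite gains no third-party dependency, which is exactly what breaks the cycle.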
@carlwgeorge 1.0 released
Thanks @RazerM. If anyone wants to follow along, I've submitted python-outcome to Fedora here: https://bugzilla.redhat.com/show_bug.cgi?id=1628300