rcarriga / vim-ultest

The ultimate testing plugin for (Neo)Vim


No output

smichaud opened this issue · comments

NVIM v0.5.0-dev+1385-g93f15db5d, with Python and pytest.
No output is displayed.

Steps:

  1. Open the test file,
  2. run the test (:UltestNearest),
  3. try to display the output (:UltestOutput).

I expect the output of the test that was run to be displayed in a floating window. (A minimal sketch of such a test follows.)
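For reference, a minimal sketch of the kind of test involved; the file, class, and test names here are made up, not the real ones from my project:

```python
# test_example.py -- hypothetical minimal reproduction: a *passing* test that
# prints something, whose output I then try to view with :UltestOutput.
class TestExample:
    def test_passes_with_output(self):
        print("some diagnostic output")  # captured by pytest; test still passes
        assert True
```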


Log:

20:35:29 | INFO | MainThread | logging.py:create_logger:101 | Logger created
20:35:30 | DEBUG | MainThread | __init__.py:__init__:43 | Handler created
20:35:30 | DEBUG | Thread-1 | __init__.py:_handle_coroutine:58 | Starting job with group update_positions
20:35:30 | INFO | Thread-1 | tracker.py:_async_update:55 | Updating positions in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | file.py:parse_file_structure:25 | Converted pattern {'test': ['\\v^\\s*%(async )?def (test_\\w+)'], 'namespace': ['\\v^\\s*class (\\w+)']} to {'test': [re.compile('^\\s*(?:async )?def (test_\\w+)')], 'namespace': [re.compile('^\\s*class (\\w+)')]}
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test server/api/test/test_list_field_values.py found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test TestListProjectsFieldValues5844803339631356660 found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test test_givenProject_thenReturnAllValues-957566468643009455 found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test TestListProjectResourceFieldValues5844803339631356660 found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test test_givenProjectPipes_thenReturnAllValues2789591621640477456 found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test test_givenUnhandledResourceType_thenRaiseError2789591621640477456 found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test test_givenInvalidField_thenRaiseError2789591621640477456 found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_async_update:82 | New test test_givenHanledResource_thenReturnValues2789591621640477456 found in server/api/test/test_list_field_values.py
20:35:30 | DEBUG | Thread-1 | tracker.py:_remove_old_positions:125 | No tests removed
20:35:30 | DEBUG | Thread-1 | __init__.py:_handle_coroutine:78 | Finished job with group update_positions
20:35:39 | INFO | MainThread | __init__.py:run_nearest:125 | Running nearest test in server/api/test/test_list_field_values.py at line 30
20:35:39 | DEBUG | MainThread | __init__.py:_register_started:281 | Registering test_givenProjectPipes_thenReturnAllValues2789591621640477456 as started
20:35:39 | DEBUG | Thread-1 | __init__.py:_handle_coroutine:58 | Starting job with group test_givenProjectPipes_thenReturnAllValues2789591621640477456
20:35:39 | DEBUG | Thread-1 | processes.py:run:50 | Starting test process test_givenProjectPipes_thenReturnAllValues2789591621640477456 with command ['poetry', 'run', 'pytest', 'server/api/test/test_list_field_values.py::TestListProjectResourceFieldValues::test_givenProjectPipes_thenReturnAllValues'], cwd = None, env = None
20:35:41 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:41 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:41 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:41 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:41 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:50 | DEBUG | Thread-3 | handle.py:forward:85 | Writing data to output file
20:35:51 | DEBUG | Thread-1 | processes.py:run:76 | Process test_givenProjectPipes_thenReturnAllValues2789591621640477456 complete with exit code: 0
20:35:51 | DEBUG | Thread-1 | __init__.py:_register_result:292 | Registering test_givenProjectPipes_thenReturnAllValues2789591621640477456 as exited with result {"id": "test_givenProjectPipes_thenReturnAllValues2789591621640477456", "file": "server/api/test/test_list_field_values.py", "code": 0, "output": "/tmp/ultest0us1r641/server__api__test__test_list_field_values_py/test_givenProjectPipes_thenReturnAllValues2789591621640477456_out"}
20:35:51 | DEBUG | Thread-1 | __init__.py:_handle_coroutine:78 | Finished job with group test_givenProjectPipes_thenReturnAllValues2789591621640477456
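For what it's worth, the result line above shows the process exiting with code 0 and points at a raw output file, which suggests the output itself was written. A quick sanity check outside the plugin (plain Python, path copied verbatim from the log) would confirm that:

```python
# Not part of vim-ultest: just read the output file referenced by the result
# JSON in the log to confirm the test output was actually written to disk.
from pathlib import Path

out = Path(
    "/tmp/ultest0us1r641/server__api__test__test_list_field_values_py/"
    "test_givenProjectPipes_thenReturnAllValues2789591621640477456_out"
)
print(out.read_text())
```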

The plugin is amazing btw!

So the reason is that the test is passing, and the output of a passing test is generally seen as not useful. For instance, pytest by default hides output from tests unless they fail. I made that choice a long time ago, though, and I can definitely see a use case for viewing passing output. I've changed the behaviour in the latest commit: the output for passing tests will now show with :UltestOutput.
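For context, this mirrors stock pytest behaviour; the flags below are standard pytest, nothing plugin-specific. A quick illustration:

```python
# capture_demo.py -- pytest captures stdout per test and, by default, only
# replays the captured output in the report for failing tests.
def test_passing():
    print("hidden with plain `pytest`; shown with `pytest -rP` or `pytest -s`")

def test_failing():
    print("shown in the failure report even with plain `pytest`")
    assert False  # deliberate failure so pytest replays the captured output
```

Plain `pytest capture_demo.py` replays only the failing test's print; `pytest -rP` additionally reports the captured output of passing tests, and `pytest -s` disables capturing altogether.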

The logic behind not displaying the output of succeeding tests makes total sense. I just didn't think about it.

On the other hand, you could sometimes want to see the output of succeeding tests, for instance when using an option that displays more details (pytest -s). You might also want to know why a test succeeds when it should fail. At that point it's a matter of preference, and you can work around it by adding a failing assertion (sketched below).
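Something like this hypothetical sketch, with `compute` standing in for the real code under test:

```python
def compute():  # hypothetical stand-in for the code under test
    return {"field": "value"}

def test_debug_output():
    result = compute()
    print(result)  # the output you actually want to inspect
    assert False, "temporary failure so the captured output gets shown"
```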

Thank you for the adjustment! Keep up the good work.