vim-test / vim-test

Run your tests at the speed of thought

TestNearest doesn't work with Rust tests organized in nested modules

dyllandry opened this issue · comments

This might be by design, but when Rust tests are organized in nested modules, :TestNearest doesn't work.

#[cfg(test)]
mod tests {
    mod my_module {
        #[test]
        fn fail() {
            assert!(false);
        }
    }
}

With the cursor inside fail and running :TestNearest, I get:

    Finished test [unoptimized + debuginfo] target(s) in 0.20s
     Running unittests src/main.rs (target/debug/deps/rust_tic_tac_toe-39273d94b1802694)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s
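
As a guess on my part (I haven't dug into exactly what filter vim-test passes to plain cargo test here): cargo test's filter is a substring match against the full test path, so a filter that leaves out the inner module matches nothing, which would explain the "1 filtered out". For comparison, from inside vim:

" Hypothetical comparison; the output above doesn't show the generated
" command, so the first filter is only my guess at what gets built.
:!cargo test tests::fail
" -> 0 tests run, 1 filtered out ('tests::fail' is not a substring
"    of 'tests::my_module::fail')
:!cargo test tests::my_module::fail
" -> runs (and fails) the test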

EDIT: The workaround below actually doesn't work with the cursor inside the fail function either. It only runs the test if my cursor is inside the my_module_tests module but outside of any actual test in that module. So, for example, it works if you put your cursor here and run :TestNearest:

#[cfg(test)]
mod tests {
    mod my_module_tests {
        // Put cursor on this empty line & it works <---
        #[test]
        fn fail() {
            assert!(false);
        }
    }
}

I found out it works if:

  1. nextest is installed
  2. the module has the word "tests" in it

#[cfg(test)]
mod tests {
    mod my_module_tests {
        #[test]
        fn fail() {
            assert!(false);
        }
    }
}

With the cursor inside fail and running :TestNearest, I get:

    Finished test [unoptimized + debuginfo] target(s) in 0.00s
    Starting 1 tests across 1 binaries
        FAIL [   0.001s] rust-tic-tac-toe::bin/rust-tic-tac-toe tests::my_module_tests::fail

--- STDOUT:              rust-tic-tac-toe::bin/rust-tic-tac-toe tests::my_module_tests::fail ---

running 1 test
test tests::my_module_tests::fail ... FAILED

failures:

failures:
    tests::my_module_tests::fail

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s


--- STDERR:              rust-tic-tac-toe::bin/rust-tic-tac-toe tests::my_module_tests::fail ---
thread 'tests::my_module_tests::fail' panicked at 'assertion failed: false', src/main.rs:122:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

   Canceling due to test failure: 0 tests still running
------------
     Summary [   0.002s] 1 tests run: 0 passed, 1 failed, 0 skipped
        FAIL [   0.001s] rust-tic-tac-toe::bin/rust-tic-tac-toe tests::my_module_tests::fail
error: test run failed

I don't know much Vimscript, so this is as far as I can go, but looking at vim-test it seems this is intended to work. At least with nextest, it's intended to find the nearest #[test] and run that test. It seems the match for the test itself isn't working in this case, but the one searching for "mod tests" is:

let g:test#rust#cargonextest#test_patterns = {
\ 'test': ['\v(#\[%(\w+::|rs)?test)'],
\ 'namespace': ['\vmod (tests?)']
\ }
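
If I read those patterns right (this is just my own check in a scratch vim session, not taken from the plugin's code paths), the namespace pattern only matches modules literally named test or tests, so a nested module like my_module_tests is never captured as a namespace:

" The outer module matches and its name is captured:
:echo matchlist('mod tests {', '\vmod (tests?)')
" -> ['mod tests', 'tests', '', ...]
" The nested module doesn't match at all, so it never makes it into the filter:
:echo matchlist('mod my_module_tests {', '\vmod (tests?)')
" -> []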

I found that the command this plugin runs is shown in the status bar of my nvim window.

I tested this out while I was editing a module named game.rs.

Here's what :TestNearest produces when my cursor is at each of these locations:

#[cfg(test)]
mod tests {                  // cargo nextest run 'game::'
    mod my_module_tests {    // cargo nextest run 'game::'
                             // cargo nextest run 'game::'
        #[test]              // Error detected while processing function test#run[19]..test#base#build_position[1]..test#rust#cargonextest#build_position[16]..<SNR>180_nearest_test: line 19: E684: list index out of range: 0 E116: Invalid arguments for function join
        fn fail() {          // cargo nextest run 'game::tests::fail'
            assert!(false);  // cargo nextest run 'game::tests::fail'
        }                    // cargo nextest run 'game::tests::fail'
    }
}

You can see my_module_tests never shows up in any of the nextest commands vim-test ran.
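
For what it's worth, broadening the namespace pattern so it captures any module name might get nested modules into the filter. I haven't tested this, and I don't know whether this variable is meant to be overridden from a vimrc, so treat it as a sketch rather than a fix:

" Untested idea: capture any module name as a namespace instead of only
" modules literally named test or tests
let g:test#rust#cargonextest#test_patterns = {
\ 'test': ['\v(#\[%(\w+::|rs)?test)'],
\ 'namespace': ['\vmod (\w+)']
\ }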

Thank you for looking into and reporting this. Could you please create a pull request that adds examples to the spec and fixtures? Then hopefully you or a volunteer can fix those failing tests.

I'll take a look and try it out. It's my first time though, so no promises.