exercism / cpp

Exercism exercises in C++.

Home Page: https://exercism.org/tracks/cpp

Allow running `ctest` and `make test`

senarclens opened this issue

While it may be a habit of more seasoned programmers, being able to run make test or ctest to execute the tests could also help beginners. I know that the tests currently run automatically when compiling the source (make), but the absence of a make test target, as well as of a configuration for ctest, is still confusing to those who didn't read the documentation upfront (like me ;).
I'd gladly provide a PR that adds the add_test() calls (https://cmake.org/cmake/help/v3.27/command/add_test.html) required to solve this to the existing exercises, if that's something y'all welcome and accept.
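
For concreteness, here is a minimal sketch of the kind of change I mean (the ${exercise} variable is illustrative and would need to match the test target name in each exercise's CMakeLists.txt):

enable_testing()

# Register the existing test binary with CTest so that both
# `ctest` and the generated `make test` target can run it.
add_test(NAME ${exercise} COMMAND ${exercise})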

Hello. Thanks for opening an issue on Exercism. We are currently in a phase of our journey where we have paused community contributions to allow us to take a breather and redesign our community model. You can learn more in this blog post. As such, all issues and PRs in this repository are being automatically closed.

That doesn't mean we're not interested in your ideas, or that if you're stuck on something we don't want to help. The best place to discuss things is with our community on the Exercism Community Forum. You can use this link to copy this into a new topic there.


Note: If this issue has been pre-approved, please link back to this issue on the forum thread and a maintainer or staff member will reopen it.

The three main benefits of the current setup are: it works on all platforms, the students can build and run the tests with a single command, and if some tests fail they see the error messages from Catch2. Example:

$ make
-- Configuring done
-- Generating done
-- Build files have been written to: /.../exercism/cpp/leap/build-gcc-9.5.0
Consolidate compiler generated dependencies of target leap
[100%] Built target leap

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
leap is a Catch v2.13.8 host application.
Run with -? for options

-------------------------------------------------------------------------------
divisible_by_200_not_divisible_by_400
-------------------------------------------------------------------------------
/.../exercism/cpp/leap/leap_test.cpp:34
...............................................................................

/.../exercism/cpp/leap/leap_test.cpp:36: FAILED:
  REQUIRE( !leap::is_leap_year(1800) )
with expansion:
  false

===============================================================================
test cases: 6 | 5 passed | 1 failed
assertions: 6 | 5 passed | 1 failed

make[2]: *** [CMakeFiles/test_leap.dir/build.make:70: CMakeFiles/test_leap] Error 1
make[1]: *** [CMakeFiles/Makefile2:867: CMakeFiles/test_leap.dir/all] Error 2
make: *** [Makefile:101: all] Error 2
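
For context, the single-command behavior comes from a custom target that runs the test binary as part of every build, roughly like this (a sketch; the track's actual CMakeLists.txt may differ in details):

# Run the test binary on every build. Because COMMAND names an
# executable target, CMake builds ${exercise} first and then runs it;
# a non-zero exit status (failing tests) fails the build, as shown above.
add_custom_target(test_${exercise} ALL
    COMMAND ${exercise})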

I'm not an expert on CTest so please correct me if the following is wrong:

  • The simplest option is to replace the add_custom_target() at the end of the CMakeLists.txt with this:
    add_test(NAME ${exercise} COMMAND ${exercise})
    But then the students would have to call make and make test separately (because make test only runs the tests, it does not rebuild the program), and when some tests fail they can only see that they failed, not the names of the tests nor the error messages from Catch2:

    $ make
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /.../exercism/cpp/leap/build-gcc-9.5.0
    Consolidate compiler generated dependencies of target leap
    [100%] Built target leap
    
    $ make test
    Running tests...
    Test project /.../exercism/cpp/leap/build-gcc-9.5.0
        Start 1: leap
    1/1 Test #1: leap .............................***Failed    0.00 sec
    
    0% tests passed, 1 tests failed out of 1
    
    Total Test time (real) =   0.01 sec
    
    The following tests FAILED:
              1 - leap (Failed)
    Errors while running CTest
    Output from these tests are in: /.../exercism/cpp/leap/build-gcc-9.5.0/Testing/Temporary/LastTest.log
    Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
    make: *** [Makefile:71: test] Error 8
    

    To see the actual error messages, the students would have to read LastTest.log or run ctest manually with the --output-on-failure option.

  • There is a CMake/CTest/Catch2 integration that lets each test case be discovered and run as a separate CTest test (catch_discover_tests(${exercise}); see the configuration sketch after this list):

    $ make test
    Running tests...
    Test project /.../exercism/cpp/leap/build-gcc-9.5.0
        Start 1: not_divisible_by_4
    1/6 Test #1: not_divisible_by_4 ......................   Passed    0.00 sec
        Start 2: divisible_by_2_not_divisible_by_4
    2/6 Test #2: divisible_by_2_not_divisible_by_4 .......   Passed    0.00 sec
        Start 3: divisible_by_4_not_divisible_by_100
    3/6 Test #3: divisible_by_4_not_divisible_by_100 .....   Passed    0.00 sec
        Start 4: divisible_by_100_not_divisible_by_400
    4/6 Test #4: divisible_by_100_not_divisible_by_400 ...   Passed    0.00 sec
        Start 5: divisible_by_400
    5/6 Test #5: divisible_by_400 ........................   Passed    0.00 sec
        Start 6: divisible_by_200_not_divisible_by_400
    6/6 Test #6: divisible_by_200_not_divisible_by_400 ...***Failed    0.00 sec
    
    83% tests passed, 1 tests failed out of 6
    
    Total Test time (real) =   0.04 sec
    
    The following tests FAILED:
              6 - divisible_by_200_not_divisible_by_400 (Failed)
    Errors while running CTest
    Output from these tests are in: /.../exercism/cpp/leap/build-gcc-9.5.0/Testing/Temporary/LastTest.log
    Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
    make: *** [Makefile:71: test] Error 8
    

    But that still requires the students to execute make and make test separately, and it only shows the name of the failed test, not the error message from Catch2 (which IMHO is more helpful).
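
For reference, that integration looks roughly like this (a sketch; it assumes Catch2 is located via find_package(), which makes the Catch CMake module and catch_discover_tests() available):

find_package(Catch2 REQUIRED)
include(CTest)
include(Catch)  # ships with Catch2

# Registers one CTest test per Catch2 TEST_CASE in the binary.
catch_discover_tests(${exercise})

Even then, students would still need ctest --output-on-failure to surface the actual Catch2 messages.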


tl;dr I don't know whether using CTest would keep the build system as easy to use, and I'm not sure what value it would add for the students (beginners or more advanced), for the maintainers, or for the automated test runner. But I'm open to being convinced otherwise.