trivago / cluecumber

Clear and concise reporting for the Cucumber BDD JSON format.

Home Page: https://www.softwaretester.blog


Scenario is counted as failed in case of afterScenario hook failure

rc2201 opened this issue · comments

Describe the bug
When an afterScenario hook fails for a test, the scenario is counted as failed in the report regardless of the actual test result/status.

To Reproduce
Steps to reproduce the behavior:

  1. Add an afterScenario hook to the test. Introduce an error so that the hook fails.
  2. Execute the test.
  3. Review the cluecumber report.

Observed behavior
The test scenario is counted as failed irrespective of the actual test result.

Expected behavior
The test should be counted as skipped/failed/passed based on the test result itself. Hook results should not have any impact on it.

Additional context

  1. I am using cluecumber-core version 3.5.1

I can add more details and even provide a minimal repo to reproduce the scenario if needed.

This is the correct behavior according to the Cucumber OSS community. This was also clarified here:
#68
Hook results count towards the overall status of the test case.
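For illustration, this aggregation can be sketched over the Cucumber JSON format, which records hook results in `before`/`after` arrays alongside `steps`, each carrying a `result.status`. This is a minimal Python sketch of the described behavior, not Cluecumber's actual implementation:

```python
# Sketch: fold step AND hook results into one scenario status, as described
# in this thread. Field names follow the Cucumber JSON report format
# ("before"/"after" hook arrays, "steps", each with result.status).
def scenario_status(element):
    """Return 'failed' if any step or hook failed, else 'skipped' if any
    result was skipped, else 'passed'."""
    results = [hook["result"]["status"] for hook in element.get("before", [])]
    results += [step["result"]["status"] for step in element.get("steps", [])]
    results += [hook["result"]["status"] for hook in element.get("after", [])]
    if "failed" in results:
        return "failed"
    if "skipped" in results:
        return "skipped"
    return "passed"

# All steps passed, but the after hook failed: the scenario counts as failed.
element = {
    "steps": [{"result": {"status": "passed"}}],
    "after": [{"result": {"status": "failed"}}],
}
print(scenario_status(element))  # -> failed
```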

Thanks @bischoffdev. I missed mentioning that ours is an API test suite written in Karate DSL.

And from my observations, the behavior seems to be different in Karate: a failure of an afterScenario hook does not mark the test as failed in Karate.

This seems to be the reason for the discrepancies I am seeing between the Karate and Cluecumber reports.

Do you have any thoughts on how this can be handled?

Is this documented somewhere for Karate? If so, then I think there should be a dedicated configuration parameter like ignoreHooksFailure. What do you think?
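A switch like the proposed `ignoreHooksFailure` parameter could behave roughly as follows. This is a hypothetical Python sketch of the idea only; the flag name mirrors the suggestion above and is not an existing Cluecumber option:

```python
# Hypothetical sketch of the proposed ignoreHooksFailure behavior:
# when enabled, only step results count toward the scenario status and
# hook results in "before"/"after" (Cucumber JSON format) are disregarded.
def aggregate_status(element, ignore_hook_failures=False):
    step_results = [s["result"]["status"] for s in element.get("steps", [])]
    hook_results = [h["result"]["status"]
                    for key in ("before", "after")
                    for h in element.get(key, [])]
    results = step_results if ignore_hook_failures else step_results + hook_results
    if "failed" in results:
        return "failed"
    if "skipped" in results:
        return "skipped"
    return "passed"

element = {
    "steps": [{"result": {"status": "passed"}}],
    "after": [{"result": {"status": "failed"}}],
}
print(aggregate_status(element))                             # -> failed
print(aggregate_status(element, ignore_hook_failures=True))  # -> passed
```

With the flag off, the current (Cucumber-conformant) behavior is kept; with it on, the report would match what Karate users observe.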

@bischoffdev I did not find any reference in the Karate documentation, but this is the behavior we are currently observing. To give more context, we are using this afterScenario hook from our config to publish links to Splunk and other logging tools. This after hook fails when the scenario is aborted due to no response being found.

I really like your idea of a boolean flag so that this can be controlled by users based on their use case. It certainly gives testing teams flexibility.

Karate is not Cucumber, though. Due to a shared historical origin they happen to share a report format, but they are very different otherwise.

So I would suggest using the reports Karate provides. They are likely to be more accurate and will always match Karate's semantics.

@mpkorstanje Agreed on Karate not being Cucumber. And we are using the Karate report itself, but it does not provide a separate count for skipped scenarios. Because of that, we started exploring this project to get a more granular report with a separate count for skips.

The reason we want the skip count is our extensive use of the karate.abort() function to handle different environments, test data, and dynamic response limitations.
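The kind of granular tally described here can be sketched over a Cucumber-style JSON report (a list of feature objects, each with an `elements` array). This is an illustrative Python sketch, not code from either project:

```python
# Sketch: count scenarios per status (passed/failed/skipped) across a
# Cucumber-style JSON report, giving the separate skip count discussed above.
from collections import Counter

def tally(report):
    """report: list of feature dicts, each with an 'elements' array of
    scenarios whose steps carry result.status values."""
    counts = Counter()
    for feature in report:
        for element in feature.get("elements", []):
            statuses = [s["result"]["status"] for s in element.get("steps", [])]
            if "failed" in statuses:
                counts["failed"] += 1
            elif "skipped" in statuses:
                counts["skipped"] += 1
            else:
                counts["passed"] += 1
    return counts

report = [{"elements": [
    {"steps": [{"result": {"status": "passed"}}]},
    {"steps": [{"result": {"status": "skipped"}}]},  # e.g. via karate.abort()
    {"steps": [{"result": {"status": "failed"}}]},
]}]
print(tally(report))  # -> Counter({'failed': 1, 'skipped': 1, 'passed': 1})
```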

Then in the first instance I think it would be prudent to either request that feature from Karate or contribute it to the Karate project.

@mpkorstanje I fully agree that Karate is not Cucumber. And the first point of reference is certainly the original Cucumber JSON format. Karate's output format is so close to the Cucumber JSON format that it is not too much effort to support it as well.

In this special case, though, I would opt for not supporting it and instead requesting a change on the Karate side.

Karatelabs itself recommends not using karate.abort() in its examples, so this is an absolute edge case:

[screenshot of the karate.abort() example from the Karate demo suite]

(source: https://github.com/karatelabs/karate/blob/master/karate-demo/src/test/java/demo/abort/abort.feature)