cncf / k8s-conformance

🧪 CNCF K8s Conformance Working Group

Home Page: https://cncf.io/ck


Update verify-conformance-tests to have better XML parsing

bernokl opened this issue

This comes from:
#1418 (comment)
That was a failed conformance submission, caused by JUnit formatting that the original code did not plan for.

We want to update the following to accept both <testsuite> and <testsuites> as the root element:
Update https://github.com/cncf-infra/prow-config/blob/9d3c12d48ea6dd8f42dfd864e047dffac1ea59b2/prow/external-plugins/verify-conformance-tests/plugin/plugin.go#L365

The JUnit file we failed on is here:

<testsuites>
	<testsuite name="Kubernetes e2e suite" tests="311" skipped="0" failures="0" time="6847.197805231001">
		<testcase name="[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]" classname="Kubernetes e2e suite" time="15.435896438"></testcase>

This is a valid JUnit structure (and is created by some build-processing tooling). It looks like the PR check is only checking what sonobuoy generates, which is a bit too strict. It's probably enough for the unmarshal struct to include both "testsuites" and "testsuite" and scan both. In the meantime I'll manually update the JUnit file.

https://llg.cubic.org/docs/junit/
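
One way to implement this (a minimal sketch with hypothetical type and function names; the actual structs in plugin.go differ): try the <testsuites> wrapper first, then fall back to a bare <testsuite> root.

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Hypothetical JUnit shapes; only the fields the check cares about are shown.
type TestCase struct {
	Name      string `xml:"name,attr"`
	ClassName string `xml:"classname,attr"`
	Time      string `xml:"time,attr"`
}

type TestSuite struct {
	Name      string     `xml:"name,attr"`
	Tests     int        `xml:"tests,attr"`
	Failures  int        `xml:"failures,attr"`
	TestCases []TestCase `xml:"testcase"`
}

// TestSuites is the optional <testsuites> wrapper element.
type TestSuites struct {
	Suites []TestSuite `xml:"testsuite"`
}

// parseJUnit accepts either a <testsuites>-rooted document (wrapping one or
// more <testsuite> elements) or a bare <testsuite> root.
func parseJUnit(data []byte) ([]TestSuite, error) {
	var wrapper TestSuites
	if err := xml.Unmarshal(data, &wrapper); err == nil && len(wrapper.Suites) > 0 {
		return wrapper.Suites, nil
	}
	var single TestSuite
	if err := xml.Unmarshal(data, &single); err != nil {
		return nil, fmt.Errorf("junit parse failed: %w", err)
	}
	return []TestSuite{single}, nil
}

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	suites, err := parseJUnit(data)
	if err != nil {
		panic(err)
	}
	for _, s := range suites {
		fmt.Printf("%s: %d tests, %d failures\n", s.Name, s.Tests, s.Failures)
	}
}

Fed a completed version of the snippet above, this would report the "Kubernetes e2e suite" with 311 tests and 0 failures, whether or not the <testsuites> wrapper is present.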

"what sonobuoy generates" for the e2e junit is actually "what the Kubernetes e2e binary generates".

It seems wrong to submit results that are not from the Kubernetes e2e binary.

/cc @smarterclayton what did you use to generate your results? Is there a use case here?

@smarterclayton would you like to keep this open?

/close
Seems to be only an edge-case scenario.

classify as flake

It seems wrong to submit results that are not from the Kubernetes e2e binary.

This is a summary of results generated by multiple executions of the kube e2e binary. To do that, you have to merge the files from those executions somehow. The tool I used was a simple merge tool, but you can absolutely run the kube e2e suite in multiple chunks and then have to merge the data afterwards.

I don't think conformance should be based only on "the kind of JUnit output created by a tool wrapped by another tool", but on JUnit itself (which has a spec, and makes no assumption that the top level is a single suite). That being said, it's fairly easy for me to work around this for now. I just wanted to register that (in my official Kube conformance hat) I'm extremely leery of assumptions that overly bind conformant distributions from automating their pipelines, and this was on the border for me.
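
For context, the kind of merge step described here could look roughly like the following (a sketch only, not the actual tool used; it assumes each per-chunk file has a bare <testsuite> root, which is what the check currently expects from a single run):

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Simplified JUnit shapes; real files carry more attributes (time, skipped,
// properties, ...) that a production merge tool would need to preserve.
type testCase struct {
	Name      string `xml:"name,attr"`
	ClassName string `xml:"classname,attr"`
	Time      string `xml:"time,attr"`
}

type testSuite struct {
	XMLName   xml.Name   `xml:"testsuite"`
	Name      string     `xml:"name,attr"`
	Tests     int        `xml:"tests,attr"`
	Failures  int        `xml:"failures,attr"`
	TestCases []testCase `xml:"testcase"`
}

type testSuites struct {
	XMLName xml.Name    `xml:"testsuites"`
	Suites  []testSuite `xml:"testsuite"`
}

// Usage: merge-junit chunk1.xml chunk2.xml ... > junit_01.xml
// Each input file is assumed to have a bare <testsuite> root.
func main() {
	var merged testSuites
	for _, path := range os.Args[1:] {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var s testSuite
		if err := xml.Unmarshal(data, &s); err != nil {
			panic(fmt.Errorf("%s: %w", path, err))
		}
		merged.Suites = append(merged.Suites, s)
	}
	out, err := xml.MarshalIndent(merged, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(xml.Header + string(out))
}

The output is exactly the <testsuites>-rooted shape that tripped the check above.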