cncf / k8s-conformance

🧪CNCF K8s Conformance Working Group

Home Page: https://cncf.io/ck

Tracking issue - Identify top Pod APIs to prioritize and increase Conformance coverage in v1.12

AishSundar opened this issue · comments

As per guidance from Sig-Arch, we will focus our conformance coverage efforts first on areas where functionality can be swapped out by providers, specifically around Pod functionality (Node). Toward this, for v1.12 we are working with Sig-Node to identify API endpoints to prioritize in the first round of conformance test authoring, and to define meaningful test scenarios / user journeys to automate.

This issue exists as a tracking reference for the community on issues/PRs related to the Node e2e tests needed for the conformance effort(s).

/cc @wangzhen127 @dchen1107 @timothysc @mithrav
/sig node
@kubernetes/sig-node-bugs

/cc @jagosan
@kubernetes/sig-node-bugs

There are 2 questions:

  1. Which pod APIs are important to start with?
  2. What pod APIs are already covered in conformance tests and other e2e tests?

First of all, not every pod API is implemented in client-go/kubernetes, which the e2e tests use. The ones that are implemented represent the most commonly used APIs, so I think they are a good starting point for us to focus on. The pod APIs implemented in client-go/kubernetes are:

| Function | Method | URL |
| --- | --- | --- |
| Create | POST | /api/v1/namespaces/{namespace}/pods |
| Update | PUT | /api/v1/namespaces/{namespace}/pods/{name} |
| UpdateStatus | PUT | /api/v1/namespaces/{namespace}/pods/{name}/status |
| Delete | DELETE | /api/v1/namespaces/{namespace}/pods/{name} |
| DeleteCollection | DELETE | /api/v1/namespaces/{namespace}/pods |
| Get | GET | /api/v1/namespaces/{namespace}/pods/{name} |
| List | GET | /api/v1/namespaces/{namespace}/pods |
| Watch | GET | /api/v1/namespaces/{namespace}/pods |
| Patch | PATCH | /api/v1/namespaces/{namespace}/pods/{subresources} |
| Bind | POST | /api/v1/namespaces/{namespace}/pods/{name}/binding |
| Evict | POST | /api/v1/namespaces/{namespace}/pods/{name}/eviction |
| GetLogs | GET | /api/v1/namespaces/{namespace}/pods/{name}/log |
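For coverage tooling, the table above can be captured as plain data. Here is a small illustrative sketch (the `POD_ENDPOINTS` name and `endpoint` helper are my own, not from any conformance tool); the Patch row is shown against `{name}` for simplicity, since client-go's Patch targets the pod name plus optional subresources:

```python
# Verb -> (HTTP method, URL template), transcribed from the table above.
POD_ENDPOINTS = {
    "Create":           ("POST",   "/api/v1/namespaces/{namespace}/pods"),
    "Update":           ("PUT",    "/api/v1/namespaces/{namespace}/pods/{name}"),
    "UpdateStatus":     ("PUT",    "/api/v1/namespaces/{namespace}/pods/{name}/status"),
    "Delete":           ("DELETE", "/api/v1/namespaces/{namespace}/pods/{name}"),
    "DeleteCollection": ("DELETE", "/api/v1/namespaces/{namespace}/pods"),
    "Get":              ("GET",    "/api/v1/namespaces/{namespace}/pods/{name}"),
    "List":             ("GET",    "/api/v1/namespaces/{namespace}/pods"),
    "Watch":            ("GET",    "/api/v1/namespaces/{namespace}/pods"),
    "Patch":            ("PATCH",  "/api/v1/namespaces/{namespace}/pods/{name}"),
    "Bind":             ("POST",   "/api/v1/namespaces/{namespace}/pods/{name}/binding"),
    "Evict":            ("POST",   "/api/v1/namespaces/{namespace}/pods/{name}/eviction"),
    "GetLogs":          ("GET",    "/api/v1/namespaces/{namespace}/pods/{name}/log"),
}

def endpoint(function, **params):
    """Return (HTTP method, concrete path) for a pod API function."""
    method, template = POD_ENDPOINTS[function]
    return method, template.format(**params)

# e.g. endpoint("GetLogs", namespace="default", name="web-0")
# -> ("GET", "/api/v1/namespaces/default/pods/web-0/log")
```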

It is a little hard to check manually which pod APIs are already covered, because many tests manipulate pods in some way. I suggest using a tool. For example, the following thread mentions some sort of coverage analysis for conformance tests; it would be nice to use that tool to check the coverage across all e2e tests.

Thanks for the initial analysis, @wangzhen127, and for narrowing things down to a few APIs to focus on. @timothysc @dchen1107, as an FYI, please let us know if these APIs look good to prioritize for Pod testing.

As a next step, let me run both conformance and non-conformance e2e tests and gather API coverage via both the apicoverage tool and APISnoop to identify:
(i) which of the APIs above are already covered in Conformance and need no additional effort
(ii) which APIs are covered in existing e2e tests (and the corresponding tests) but not in Conformance, and hence need promotion
(iii) which APIs are uncovered, so we can define new e2e tests in v1.12
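Once the two covered sets are extracted from the tool output, the three buckets fall out of simple set arithmetic. A minimal sketch (the sets below are made-up placeholders, not real tool results):

```python
def classify(prioritized, conformance_covered, e2e_covered):
    """Split prioritized API functions into the three buckets above."""
    already_done = prioritized & conformance_covered          # (i)  no extra effort
    needs_promotion = (prioritized & e2e_covered) - conformance_covered  # (ii) promote
    uncovered = prioritized - conformance_covered - e2e_covered          # (iii) write new tests
    return already_done, needs_promotion, uncovered

# Placeholder data for illustration only.
prioritized = {"Create", "Patch", "Evict", "Bind"}
conformance = {"Create"}
e2e = {"Create", "Patch"}
done, promote, missing = classify(prioritized, conformance, e2e)
# done == {"Create"}; promote == {"Patch"}; missing == {"Evict", "Bind"}
```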

@AishSundar, the above covers REST API operations. Is there a plan to analyze test coverage of the pod spec as well?

@wangzhen127, the current coverage tools only measure REST API coverage, and it's quite possible to have an API covered without full coverage of all the Pod spec fields it touches.

I look at it as a 2-step process. Step 1 is to identify the APIs covered in existing e2e and Conformance tests. In step 2 we dig a level deeper to (a) map each covered API to the existing e2e tests that exercise it, and (b) from those tests, identify gaps in Pod spec coverage to fill in. For APIs that have no coverage at all in e2e tests, we will have to ensure we comprehensively cover the Pod spec fields when identifying the new e2e tests to add.
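One way to make step 2(b) concrete is to flatten each test's pod manifest (or patch body) into dotted field paths and diff that set against the spec fields we want covered. A rough sketch with invented inputs, not actual APISnoop logic:

```python
def field_paths(obj, prefix=""):
    """Flatten a nested dict into dotted field paths, e.g. 'spec.nodeSelector'."""
    paths = set()
    for key, value in obj.items():
        path = f"{prefix}{key}"
        paths.add(path)
        if isinstance(value, dict):
            paths |= field_paths(value, path + ".")
    return paths

# Hypothetical: fields one test exercises vs. fields we want covered.
exercised = field_paths({"spec": {"containers": {}, "nodeSelector": {"disk": "ssd"}}})
wanted = {"spec.containers", "spec.nodeSelector", "spec.tolerations"}
gaps = wanted - exercised
# gaps == {"spec.tolerations"}
```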

I will work with @hh to see if we can extend APISnoop to help with this analysis

Update on the progress we have made so far on analyzing coverage of the POD APIs prioritized here

  • We have been working with APISnoop to correlate e2e tests with the APIs they call, in an effort to identify which of these prioritized APIs are already covered in the existing Conformance suite and which only in e2e tests (not in conformance). e2e-audit-correlate
  • We have initial results from this now
  • For each of the tests identified, we are working closely with Globant vendors to
    • understand exactly how the API-and-verb combination is exercised by the test
    • assess the actual POD API spec coverage from each of these tests
    • identify which of the e2e tests (not in conformance) are worth promoting to Conformance
  • Some initial findings are here

As next steps:

  • we plan to finish this analysis for one API:verb combination, i.e., PATCH /api/v1/namespaces/{namespace}/pods/{name}
  • take the results to Sig Node to help identify which cases to add or promote in 1.12
  • assess whether this approach is the right one and/or scalable to all prioritized POD APIs

As a followup on the action item from above, here's a draft test plan for "PATCH /api/v1/namespaces/{namespace}/pods/{name}"

https://docs.google.com/spreadsheets/d/1q1JXwsZArA8kPfkD5K72NyYm233zBMC2sMFqQXWD19o/edit#gid=1547980847

@wangzhen127 @yujuhong will be reviewing the scenarios to identify ones that are worth adding and promoting to conformance

I took a first pass and have some high level comments.

  1. Patch supports 3 operation types: add, remove, and update (keep the key, change the value). Most test scenarios cover add/remove but not update, which should be included.

  2. There are 3 patch types: JSON patch, JSON merge patch, and strategic merge patch. These are not mentioned in the test scenarios; I think they should all be tested. Strategic merge patch can be skipped where it is not available for a field.

  3. Please organize the "field under test" entries into 2 groups: metadata and spec. For metadata, only pod-related fields are needed. Within each group, order the fields alphabetically. Currently the list is unordered, which makes it easy to miss fields.
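To illustrate point 2, here is the same change (update one label, remove another) expressed in each of the three patch types, plus a minimal application of the merge-patch semantics per RFC 7386. This is an illustrative sketch, not the apiserver's implementation, and the field values are made up:

```python
import copy

# RFC 6902 JSON patch, sent as application/json-patch+json.
JSON_PATCH = [
    {"op": "replace", "path": "/metadata/labels/app", "value": "nginx-v2"},
    {"op": "remove", "path": "/metadata/labels/tier"},
]
# RFC 7386 JSON merge patch, sent as application/merge-patch+json;
# a null value deletes the key.
MERGE_PATCH = {"metadata": {"labels": {"app": "nginx-v2", "tier": None}}}
# A strategic merge patch looks like the merge patch here, but additionally
# honors per-field patchStrategy/patchMergeKey markers on list fields.

def json_merge_patch(target, patch):
    """Minimal RFC 7386 merge-patch application (dict objects only)."""
    if not isinstance(patch, dict):
        return copy.deepcopy(patch)
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null deletes the key
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

pod = {"metadata": {"name": "pod1", "labels": {"app": "nginx", "tier": "web"}}}
patched = json_merge_patch(pod, MERGE_PATCH)
# patched["metadata"]["labels"] == {"app": "nginx-v2"}
```

A test scenario that only sends merge patches would never exercise the `remove` op of JSON patch or the list-merge behavior of strategic merge patch, which is why all three types need coverage.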

Worked on all comments, Zhen.
Sharing with you a tab with all the "metadata" scenarios, organized and expanded.
Please find it in this tab.

I will do the same for the "spec" section and let you know once it is ready.
Thanks a lot!

Thanks! Overall looks ok to me. I added a few comments.

Also note that the initializer feature is going away; the suggested replacement is webhooks. So I am not sure whether we need to add tests for initializers. This is something for the testing team to decide, I think.

I have one suggestion on the test plan formatting. It would be nice to have the list of test scenarios grouped together, with links out to the detailed testing steps. Currently it is one giant spreadsheet that is hard to review. For example, something like the following:

pod1, Annotations, self, add an annotation to the pod, [link to detailed testing steps]
pod1, Annotations, self, remove an annotation from the pod, [link to detailed testing steps]
...

And for the detailed testing steps: since they are identical for all 3 patch types, you can keep just one copy and indicate that it applies to all 3 patch types.

Let me know if this is easier to read. Also please suggest what would be the next steps. Thanks a lot!

@AishSundar @wangzhen127

Most fields of pods cannot be updated.

In any case, the apiserver operations are unlikely to behave significantly differently for pods than for other resources: all flavors of update share most of their implementation, and the apiserver implementation is independent of the scheduler, kubelet, CRI, and CNI implementations. Those latter components are known to have multiple implementations.

Therefore, what I care most about is Pod features, functionality, and behaviors.

/close
This is now stale, we have <120 APIs to go!