[RFC] Syncing Kubernetes Client versions with upstream Kubernetes versions
palnabarun opened this issue · comments
Current Scenario
The Kubernetes Python client follows a versioning schema x.y.z{a|b}N, where x is the Kubernetes minor version minus 4. For example, Python client 12.0.0 is based on Kubernetes 1.16.
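For illustration, the historic mapping can be sketched as a small Python helper (a hypothetical function for this issue, not part of the client library):

```python
def legacy_client_major(k8s_minor: int) -> int:
    """Historic scheme: the client's major version was the
    Kubernetes minor version minus 4."""
    return k8s_minor - 4

# Kubernetes 1.16 -> Python client 12.y.z
print(legacy_client_major(16))  # -> 12
```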
Reasons
To make the numbering coherent and reduce confusion.
What other clients do
- Go: client-go used to follow a versioning scheme like ours, but eventually moved to semver-compatible versions v0.y.z, where y and z come from the Kubernetes version 1.y.z. Ref1 Ref2
- Java: a similar schema, except that the Java client's major version is the Kubernetes minor version minus 8. For example, Java client 9.0.0 is based on Kubernetes 1.17.
Proposed Versioning Schemes
We can have a versioning scheme similar to client-go: Kubernetes 1.x.y would correspond to Python client 0.x.y.
Or, the versions can be 1.x.y, exactly equal to the Kubernetes versions. This would prevent us from doing our own patch releases, since changes are made to the client code too. Also, client-go adopted its current convention because of certain limitations with Go modules, which we don't have. Moving version numbers backwards is also detrimental, since pip install kubernetes without a version number could resolve to a previously published, higher-numbered release instead of the latest one.
Another option is to version client releases as x.y.p, where x is the Kubernetes minor release number, y is the Kubernetes patch release number, and p is a Python-client-specific patch number. To achieve this, client releases henceforth will jump a few version numbers, which needs to be documented in the README and CHANGELOG, along with proper communication to k-dev when the release happens.
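As a minimal sketch of the proposed x.y.p mapping (the function name and signature are assumptions for illustration, not part of the client):

```python
def client_version(k8s_minor: int, k8s_patch: int, client_patch: int = 0) -> str:
    """Proposed scheme: client version x.y.p, where x is the Kubernetes
    minor release, y is the Kubernetes patch release, and p is a
    Python-client-specific patch number."""
    return f"{k8s_minor}.{k8s_patch}.{client_patch}"

# A client based on Kubernetes 1.17.0, first client release:
print(client_version(17, 0, 0))  # -> 17.0.0
```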
Resolution
Based on the discussion in the bi-weekly meeting, it looks like the last option (the x.y.p scheme) is preferable.
Action Items
- Create release-17.0 branch for the client based on Kubernetes 1.17.p @roycaihw / @yliaog
- Create release-18.0 branch for the client based on Kubernetes 1.18.p @roycaihw / @yliaog
- Create release-19.0 branch for the client based on Kubernetes 1.19.p @roycaihw / @yliaog
- Write in the Python Client 17.0.0 release notes about this change in versioning scheme @palnabarun
- (not needed, as the 17.y.z release had a well-known doc) Write in the Python Client 18.0.0 release notes about this change in versioning scheme @palnabarun
- (not needed, as the 17.y.z release had a well-known doc) Write in the Python Client 19.0.0 release notes about this change in versioning scheme @palnabarun
- Close this issue after all of the above is done. @palnabarun
Edits
- Updated the issue after the discussion on the same on 14th September.
- Updated the action items to include the creation of branches
/assign
/kind design
/remove-kind documentation
Using 0.x.y corresponding to Kubernetes 1.x.y has the following downsides:
- The Python client version goes backwards (from 11.0.0 to 0.x.y). People running pip install --upgrade may not be able to pick up the latest versions.
- Sometimes we do patch releases in this client for bug fixes (e.g. 8.0.1, 8.0.2). We don't have digits left to number our own patches.
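The first downside can be demonstrated with plain tuple comparison (a stdlib-only sketch that approximates PEP 440 ordering for simple x.y.z releases):

```python
def parse(version: str) -> tuple:
    """Split a simple x.y.z version string into an integer tuple,
    so versions compare numerically the way pip orders releases."""
    return tuple(int(part) for part in version.split("."))

# A hypothetical 0.18.0 release sorts below the already-published 11.0.0,
# so `pip install --upgrade kubernetes` would never pick it up.
assert parse("0.18.0") < parse("11.0.0")
```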
(For the record:) The proposal was accepted in the bi-weekly client-python meeting, and has been implemented for client v17: https://github.com/kubernetes-client/python#homogenizing-the-kubernetes-python-client-versions. Please feel free to close this issue once the action items are finished.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Issue is not resolved. Python Client 18.0.0 and Python Client 19.0.0 are not released yet. Hence their release notes cannot contain information about the change in versioning scheme.
/remove-lifecycle stale
This is in progress.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-kind design
/kind feature
kind/design is migrated to kind/feature; see kubernetes/community#6144 for more details
/remove-lifecycle rotten
/lifecycle frozen
We are now releasing Kubernetes Python clients with the versioning scheme accepted above. No further action is required for this issue.
/remove-lifecycle frozen
/close
@palnabarun: Closing this issue.
In response to this:
/remove-lifecycle frozen
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.