kubernetes / steering

The Kubernetes Steering Committee

Clarify CNCF maintainers list + Service Desk policy

justaugustus opened this issue · comments

Problem Statement

Follow-up/extension of #196.

Part of Steering Committee onboarding (example: #219) includes updating the CNCF maintainers lists and Service Desk access.

I think there is some confusion (at the very least on my part 🙃) about the correct grouping of users and what they should be authorized to use/do.

Proposed Solution

See "Open Questions".

Open Questions

These should be clearly linked out in documentation.

  • What is a "CNCF Maintainer" entitled to do?
  • What is a CNCF Service Desk user entitled to do?
  • For Kubernetes, are these the same set of users?
  • Who is empowered to vote in CNCF matters on behalf of the Kubernetes project?
  • Who is empowered to use the CNCF Service Desk on behalf of the Kubernetes project?

My initial take...

  • CNCF Service Desk usage is separate from any CNCF "Maintainer" definition, though:
    • There is a set of contributors that occupy both roles (namely @kubernetes/steering-committee)

Next Steps

  • Document answers to the open questions above, based on forthcoming conversations with Steering, @amye, and @idvoretskyi

Other Considerations, Notes, or References

  • Fixes #
  • xref #

/assign

This is already written down, so no need to redo all of this.
(1) Service Desk for Kubernetes is already defined here:
https://github.com/kubernetes/steering/blob/main/service-desk.md

(2) The members of the steering committee are the authorized voting members.

We can likely close this with that piece of documentation above added to #219.

This is already written down, so no need to redo all of this.

Still a bit more that we'll want to do πŸ™ƒ

Based on the convo with a few Steering members, I'm hearing:

  • Audit the list of existing Service Desk users
  • Verify email addresses from leads of:
    • SIG ContribEx
    • SIG Release
    • SIG K8s Infra
  • Prune emails from folks that are not in:
    • Steering
    • SIG ContribEx
    • SIG Release
    • SIG K8s Infra
  • Update CNCF maintainers CSV to represent: cncf/foundation#229
    • Voting maintainers:
      • Steering
    • Non-voting maintainers:
      • SIG ContribEx
      • SIG Release
      • SIG K8s Infra
  • Update cncf-kubernetes-maintainers to distribute notifications to:
    • steering@
    • leads@
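
The audit-and-prune steps above boil down to a set-difference check: keep a Service Desk user only if their email belongs to a member of Steering, SIG ContribEx, SIG Release, or SIG K8s Infra. A minimal sketch, assuming illustrative placeholder rosters (the addresses and the `prune_service_desk_users` helper are hypothetical, not real CNCF data):

```python
# Hypothetical sketch of the Service Desk audit: partition current users
# into those covered by an allowed group and those to prune.
# Group rosters below are illustrative placeholders only.

ALLOWED_GROUPS = {
    "steering": {"alice@example.com", "bob@example.com"},
    "sig-contribex": {"carol@example.com"},
    "sig-release": {"bob@example.com", "dave@example.com"},
    "sig-k8s-infra": {"erin@example.com"},
}

def prune_service_desk_users(current_users):
    """Return (keep, prune): users covered by an allowed group, and the rest."""
    allowed = set().union(*ALLOWED_GROUPS.values())
    keep = sorted(u for u in current_users if u in allowed)
    prune = sorted(u for u in current_users if u not in allowed)
    return keep, prune

keep, prune = prune_service_desk_users(
    ["alice@example.com", "mallory@example.com", "dave@example.com"]
)
print(keep)   # → ['alice@example.com', 'dave@example.com']
print(prune)  # → ['mallory@example.com']
```

The same keep-list, restricted to Steering for the "voting" column, would feed the CNCF maintainers CSV update in cncf/foundation#229.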

@mrbobbytables @parispittman @dims -- Can you check my work? ☝🏾

Ahhh, one more:

  • Enable Steering to receive notifications for all Service Desk tickets

lgtm πŸ‘

To clarify for passersby, we're essentially delineating between:

  • who can vote (Steering)
  • who can access CNCF Service Desk (Steering, SIG ContribEx, SIG Release, SIG K8s Infra)
  • who gets CNCF Maintainer notifications (Steering, leads@)

...which are distinct separations of concern, but groups with significant overlap.

Update the CNCF maintainers list to reflect this discussion in cncf/foundation#229.

@justaugustus

Update cncf-kubernetes-maintainers to distribute notifications to:
steering@
leads@

We're able to add only real email addresses there (not aliases), and our policy is that we add only folks listed in the CNCF maintainers list (maintainers.cncf.io). You are welcome to forward the necessary emails to the leads@ mailing list though :)

Enable Steering to receive notifications for all Service Desk tickets

This is not possible with our JIRA setup, unfortunately. However, all folks under the Kubernetes team should have access to the Kubernetes tickets after logging in to the JIRA UI.

Last item from #219:

Add new members to the cncf-kubernetes-maintainers mailing list and remove emeritus members

I'll follow up on the list of current users that Ihor sent over.

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
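
The rules above amount to a simple state machine keyed on days of inactivity: stale at 90d, rotten 30d after that (120d total), closed another 30d later (150d total). A minimal sketch (the `lifecycle_state` function is hypothetical; thresholds are taken from the rules):

```python
def lifecycle_state(days_inactive):
    """Map days of inactivity to the triage bot's lifecycle stage.

    Thresholds from the rules: stale at 90d, rotten at 90d + 30d,
    closed at 90d + 30d + 30d.
    """
    if days_inactive >= 150:
        return "closed"
    if days_inactive >= 120:
        return "lifecycle/rotten"
    if days_inactive >= 90:
        return "lifecycle/stale"
    return "active"

print(lifecycle_state(100))  # → lifecycle/stale
```

Commands like /remove-lifecycle stale reset the clock back toward the "active" state.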

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.