error synchronizing: clusterrole.rbac.authorization.k8s.io "system:proxy-reader" not found
chuenlye opened this issue
Description
The master-api logs are flooded with the following error:
E0303 06:30:15.888846 1 cache.go:332] error synchronizing: clusterrole.rbac.authorization.k8s.io "system:proxy-reader" not found
Over three weeks, 4,506,202 occurrences of this error were logged.
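For scale, a quick way to count these records is to grep the pod logs (a sketch; `master-api-xxx` is a placeholder for the actual pod name):

```sh
# Count occurrences of the error in the master-api pod logs.
# "master-api-xxx" is a placeholder; substitute the real pod name.
oc -n kube-system logs master-api-xxx \
  | grep -c 'clusterrole.rbac.authorization.k8s.io "system:proxy-reader" not found'
```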
Version
ansible --version
ansible 2.6.3
Steps To Reproduce
oc -n kube-system logs master-api-xxx
Expected Results
This error should not appear in the master-api logs.
Observed Results
E0303 06:30:15.888846 1 cache.go:332] error synchronizing: clusterrole.rbac.authorization.k8s.io "system:proxy-reader" not found
......
Additional Information
OKD 3.11 is used in our environment.
kube-proxy-and-dns is running fine, and no error of this kind appears in its logs; it only shows up in master-api.
From this policy file:
https://github.com/openshift/openshift-ansible/blob/release-3.11/roles/kube_proxy_and_dns/files/kube-proxy-and-dns-policy.yaml
I noticed that both the "cluster-reader" and "system:proxy-reader" roles are granted to the same service account, so I suspect "system:proxy-reader" is unnecessary for the kube-proxy-and-dns pod, because "cluster-reader" should carry broader privileges than proxy-reader.
After deleting the cluster role binding with oc delete ClusterRoleBinding proxy-reader, no more errors of this kind appear in the master-api logs, and the kube-proxy-and-dns pod still runs fine.
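For reference, the workaround and a quick verification might look like this (a sketch; `master-api-xxx` is a placeholder, and the kube-proxy-and-dns namespace may differ per install):

```sh
# Drop the binding that references the missing system:proxy-reader role.
oc delete clusterrolebinding proxy-reader

# Confirm no new occurrences of the error in recent master-api logs.
oc -n kube-system logs --since=10m master-api-xxx | grep -c 'system:proxy-reader'

# Confirm the kube-proxy-and-dns pods are still healthy.
oc get pods --all-namespaces | grep kube-proxy-and-dns
```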
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.