kubernetes-retired / multi-tenancy

A working place for multi-tenancy related proposals and prototypes.

HNC: subnamespace creation failures should be reported in the anchors (at least)

JimBugwadia opened this issue · comments

I configured a Kyverno policy to block namespaces that do not follow a naming convention:

--- 
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-quota
spec:
  validationFailureAction: enforce
  background: false
  rules:
  - name: enforce-names
    match:
      resources: 
        kinds:
        - Namespace
    validate:
      message: >-
        The namespace name must end with -sm (small), -md (medium), or -lg (large)
      pattern:
        metadata:
          name: "*-sm | *-md | *-lg"
          

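To make the policy's behavior concrete: the pattern is an alternation of wildcards, and a namespace name passes if any alternative matches. A rough Go sketch of these matching semantics (illustrative only — `matchesNamePolicy` is a hypothetical helper approximating Kyverno's wildcard matching, not Kyverno code):

```go
package main

import (
	"fmt"
	"path"
)

// matchesNamePolicy approximates the Kyverno pattern "*-sm | *-md | *-lg":
// the name is accepted if any glob alternative matches.
func matchesNamePolicy(name string) bool {
	for _, pat := range []string{"*-sm", "*-md", "*-lg"} {
		if ok, _ := path.Match(pat, name); ok {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(matchesNamePolicy("test-sm")) // true: ends with -sm
	fmt.Println(matchesNamePolicy("s1"))      // false: blocked by the policy
}
```

So in the repro below, `test-sm` is allowed but a subnamespace named `s1` is rejected by the webhook.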
I then created a top-level namespace that complied with the naming convention:

~ kubectl create ns test-sm
namespace/test-sm created

I then created a subns that does not comply with the naming convention:

➜  ~ kubectl hns create s1 -n test-sm
Successfully created "s1" subnamespace anchor in "test-sm" namespace
➜  ~ kubectl hns tree test-sm
test-sm
➜  ~ kubectl get ns s1
Error from server (NotFound): namespaces "s1" not found
➜  ~

The subns does not get created, and no error is reported.

➜  ~ kubectl get subns -A
NAMESPACE   NAME   AGE
test-sm     s1     91s
➜  ~ kubectl get subns -A -o yaml
apiVersion: v1
items:
- apiVersion: hnc.x-k8s.io/v1alpha2
  kind: SubnamespaceAnchor
  metadata:
    creationTimestamp: "2021-01-13T00:33:26Z"
    generation: 1
    managedFields:
    - apiVersion: hnc.x-k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:status: {}
      manager: kubectl-hns
      operation: Update
      time: "2021-01-13T00:33:26Z"
    name: s1
    namespace: test-sm
    resourceVersion: "77595"
    selfLink: /apis/hnc.x-k8s.io/v1alpha2/namespaces/test-sm/subnamespaceanchors/s1
    uid: 1b610d22-b760-49cb-ae8f-caa5d8b44c39
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
➜  ~
➜  ~ kubectl get hierarchyconfigurations -A
NAMESPACE   NAME        AGE
test-sm     hierarchy   4m27s
➜  ~ kubectl get hierarchyconfigurations -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
  
➜  ~ kubectl get hncconfigurations -o yaml                                          
apiVersion: v1                                                                      
items:                                                                              
- apiVersion: hnc.x-k8s.io/v1alpha2                                                 
  kind: HNCConfiguration                                                            
  metadata:                                                                         
    creationTimestamp: "2021-01-12T18:34:45Z"                                       
    generation: 17                                                                  
    managedFields:                                                                  
    - apiVersion: hnc.x-k8s.io/v1alpha2                                             
      fieldsType: FieldsV1                                                          
      fieldsV1:                                                                     
        f:spec: {}                                                                  
        f:status:                                                                   
          .: {}                                                                     
          f:resources: {}                                                           
      manager: manager                                                              
      operation: Update                                                             
      time: "2021-01-13T00:25:28Z"                                                  
    name: config                                                                    
    resourceVersion: "76702"                                                        
    selfLink: /apis/hnc.x-k8s.io/v1alpha2/hncconfigurations/config                  
    uid: 37342067-ecfc-45a2-9757-dd3ff1ebe963                                       
  spec: {}                                                                          
  status:                                                                           
    resources:                                                                      
    - group: rbac.authorization.k8s.io                                              
      mode: Propagate                                                               
      numPropagatedObjects: 4                                                       
      numSourceObjects: 0                                                           
      resource: rolebindings                                                        
      version: v1                                                                   
    - group: rbac.authorization.k8s.io                                              
      mode: Propagate                                                               
      numPropagatedObjects: 0                                                       
      numSourceObjects: 0                                                           
      resource: roles                                                               
      version: v1                                                                   
kind: List                                                                          
metadata:                                                                           
  resourceVersion: ""                                                               
  selfLink: ""                                                                      
  

/good-first-issue

We need to propagate this error at least to the SubnamespaceAnchor. We should probably also record it as an Event (especially if that Event gets auto-appended to the `kubectl describe subns` output) and possibly on the HNCConfiguration object too (though that last one is less critical).

@adrianludwin:
This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/good-first-issue

We need to propagate this error at least to the SubnamespaceAnchor. We should probably also record it as an Event (especially if that Event gets auto-appended to the `kubectl describe subns` output) and possibly on the HNCConfiguration object too (though that last one is less critical).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Hello, may I take this one? I think the root cause is an inconsistent state between the HNC CRDs and the Namespace.
Hoping for further guidance. 😃

https://github.com/kubernetes-sigs/multi-tenancy/blob/9327e368b19b27161f3a771893b4d95a87cb698d/incubator/hnc/internal/reconcilers/anchor.go#L100-L104

Thanks for helping, @rudeigerc! Before fixing the issue, let's first look at what's actually going on. Per @JimBugwadia's subnamespace anchor output:

➜  ~ kubectl get subns -A -o yaml
apiVersion: v1
items:
- apiVersion: hnc.x-k8s.io/v1alpha2
  kind: SubnamespaceAnchor
  metadata:
    creationTimestamp: "2021-01-13T00:33:26Z"
    generation: 1
    managedFields:
    - apiVersion: hnc.x-k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:status: {}
      manager: kubectl-hns
      operation: Update
      time: "2021-01-13T00:33:26Z"
    name: s1
    namespace: test-sm
    resourceVersion: "77595"
    selfLink: /apis/hnc.x-k8s.io/v1alpha2/namespaces/test-sm/subnamespaceanchors/s1
    uid: 1b610d22-b760-49cb-ae8f-caa5d8b44c39
  status: {}       ################# Not sure why status is empty here ############
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

It seems that the status is not showing up correctly. It should show

  status:
    status: Missing

meaning the subnamespace anchor is there but the subnamespace itself is "missing".

I agree that we should make it more visible, e.g. in events that users can get from kubectl describe or show it in the kubectl hns tree.

@JimBugwadia can you please confirm that the subns `status` field is empty for you? In addition, I was trying to reproduce the error but couldn't find a good way. How can I use `kyverno.io/v1/ClusterPolicy`?

@yiqigao217 you could probably hack HNC to try to create a namespace with an illegal name or something instead of installing Kyverno; we just need to generate an error from K8s.

I did try kube-system, but that only gave me status: Conflict. I can't find a way to get status: Missing: if I just bypass the webhook and delete the subnamespace, the HNC controller immediately recreates the namespace, so I never see status: Missing.

Yeah you have to hack HNC so that K8s returns an error when you try to create a namespace that doesn't already exist. E.g. instead of creating "foo" hack HNC so that it tries to create "!foo", which will always fail.
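Why "!foo" always fails: Kubernetes validates namespace names as RFC 1123 DNS labels, so the API server rejects the create regardless of any webhooks, which reliably exercises the error path. A quick Go check of that rule (the regexp mirrors the DNS-1123 label validation; sketch only):

```go
package main

import (
	"fmt"
	"regexp"
)

// Kubernetes namespace names must be valid RFC 1123 DNS labels:
// lowercase alphanumerics and '-', starting and ending with an alphanumeric.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func main() {
	fmt.Println(dns1123Label.MatchString("foo"))  // true: a valid name
	fmt.Println(dns1123Label.MatchString("!foo")) // false: the API server rejects the create
}
```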

Got it, thanks!