jfrog / kubenab

Kubernetes Admission Webhook to enforce pulling of Docker images from the private registry.


kubenab should not patch mirror pod images

mzyfree opened this issue · comments

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
k8s v1.15.2
3 master + 1 worker

What happened:
kubectl failed to get kube-apiserver/kube-scheduler/kube-cm pod information

What you expected to happen:
The kubectl get po -n kube-system command should show the kube-apiserver/kube-scheduler/kube-cm pod information.

How to reproduce it (as minimally and precisely as possible):
kubeadm init

Anything else we need to know:
k8s checks that a mirror pod does not get the image repo added, because it is only a mirror of a static pod. After tugger adds the repo to the image tag, k8s fails to create the mirror pods for the static kube-apiserver/kube-scheduler/kube-cm pods, so kubectl get po -n kube-system does not show any of their pod information.
So I think tugger should not add the repo to mirror pods, because that goes against the k8s code logic.
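To illustrate the mutation being described (a hypothetical sketch, not tugger's or kubenab's actual code; the function name and registry are made up): an admission webhook that prepends a private registry to every image would rewrite a static pod's image like this:

```go
package main

import (
	"fmt"
	"strings"
)

// prefixRegistry sketches the kind of mutation such a webhook applies:
// if the image does not already reference the private registry,
// prepend it. Illustrative only, not the real implementation.
func prefixRegistry(image, registry string) string {
	if strings.HasPrefix(image, registry+"/") {
		return image
	}
	return registry + "/" + image
}

func main() {
	// The static pod manifest on disk references the upstream image...
	original := "k8s.gcr.io/kube-apiserver:v1.15.2"
	// ...but the mirror pod the kubelet submits gets rewritten:
	fmt.Println(prefixRegistry(original, "registry.example.local"))
	// prints: registry.example.local/k8s.gcr.io/kube-apiserver:v1.15.2
}
```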

@mzyfree can you please provide examples of your problem (e.g. command outputs)?

And what is tugger?

Do I understand you correctly:
You have deployed and configured kubenab so that it also checks the image field of pods in the kube-system namespace. And now K8s has deployed more K8s-specific pods (why? did you apply an update?). And now when you run kubectl -n kube-system get po you do not get a response.

But what does kubenab have to do with your pod-deployment problem?


Please be more precise about your problem.

Tugger is here:
https://github.com/jainishshah17/tugger
I have tested it, and the same issue also happens with kubenab.
Before I used kubenab, the kubectl get po -n kube-system output was:

    kube-apiserver-a1                         1/1       Running   0          1d
    kube-apiserver-a2                         1/1       Running   0          1d
    kube-apiserver-a3                         1/1       Running   0          1d
    kube-controller-manager-a1                1/1       Running   0          1d
    kube-controller-manager-a2                1/1       Running   0          1d
    kube-controller-manager-a3                1/1       Running   0          1d
    kube-multus-ds-amd64-4m44v                1/1       Running   0          1d
    kube-multus-ds-amd64-7788s                1/1       Running   0          1d
    kube-multus-ds-amd64-bdvvh                1/1       Running   0          1d
    kube-proxy-546pb                          1/1       Running   0          1d
    kube-proxy-gf8vc                          1/1       Running   0          1d
    kube-proxy-ntrzc                          1/1       Running   0          1d
    kube-scheduler-a1                         1/1       Running   0          1d
    kube-scheduler-a2                         1/1       Running   0          1d
    kube-scheduler-a3                         1/1       Running   0          1d

And the kubectl get no output:

    NAME      STATUS    ROLES     AGE       VERSION
    a1        Ready     master    1d        v1.11.5
    a2        Ready     master    1d        v1.11.5
    a3        Ready     master    1d        v1.11.5

After I deployed kubenab and used kubeadm init to redeploy kube-apiserver/kube-cm/kube-scheduler on one master node (for example the "a3" node) in my k8s cluster, the output of kubectl get po -n kube-system becomes:

    kube-apiserver-a1                         1/1       Running   0          1d
    kube-apiserver-a2                         1/1       Running   0          1d
    kube-controller-manager-a1                1/1       Running   0          1d
    kube-controller-manager-a2                1/1       Running   0          1d
    kube-multus-ds-amd64-4m44v                1/1       Running   0          1d
    kube-multus-ds-amd64-7788s                1/1       Running   0          1d
    kube-multus-ds-amd64-bdvvh                1/1       Running   0          1d
    kube-proxy-546pb                          1/1       Running   0          1d
    kube-proxy-gf8vc                          1/1       Running   0          1d
    kube-proxy-ntrzc                          1/1       Running   0          1d
    kube-scheduler-a1                         1/1       Running   0          1d
    kube-scheduler-a2                         1/1       Running   0          1d
    kubernetes-dashboard-767dc7d4d-bvn95      1/1       Running   0          1d

The mirror pods for the static kube-apiserver/kube-cm/kube-scheduler pods on node a3 fail to be created, so kubectl get po does not display those pods for node a3. I haven't recorded the logs, forgive me.
But the relevant k8s code is here:

    hasSecrets := false
    podutil.VisitPodSecretNames(pod, func(name string) bool {
        hasSecrets = true
        return false
    })
    if hasSecrets {
        return admission.NewForbidden(a, fmt.Errorf("a mirror pod may not reference secrets"))
    }

kubenab adds an imagePullSecret to the pod spec, which triggers this admission forbidden error.

    // Add image pull secret patch
    patches = append(patches, patch{
        Op:   "add",
        Path: "/spec/imagePullSecrets",
        Value: []v1.LocalObjectReference{
            {
                Name: registrySecretName,
            },
        },
    })
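Putting the two snippets together: the webhook's patch makes the mirror pod reference a secret, and the admission check then rejects it. A minimal, self-contained sketch of why the check fires, using stand-in types instead of the real k8s.io/api and podutil packages:

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes API types, just to illustrate
// the check; the real types live in k8s.io/api/core/v1.
type LocalObjectReference struct{ Name string }

type PodSpec struct{ ImagePullSecrets []LocalObjectReference }

// visitPodSecretNames sketches what podutil.VisitPodSecretNames does
// for imagePullSecrets: call the visitor for every referenced secret,
// stopping early when the visitor returns false.
func visitPodSecretNames(spec PodSpec, visitor func(string) bool) {
	for _, ref := range spec.ImagePullSecrets {
		if !visitor(ref.Name) {
			return
		}
	}
}

func main() {
	// A mirror pod whose spec was patched with an imagePullSecret:
	spec := PodSpec{ImagePullSecrets: []LocalObjectReference{{Name: "regsecret"}}}

	hasSecrets := false
	visitPodSecretNames(spec, func(name string) bool {
		hasSecrets = true
		return false
	})
	if hasSecrets {
		// This is the branch the mirror pod admission code takes.
		fmt.Println("forbidden: a mirror pod may not reference secrets")
	}
}
```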

So I think kubenab should not change the image of a mirror pod, because a mirror pod always uses the local image.

@mzyfree Can you please use Markdown formatting? It would make your text easier to read.

And kubenab does not change the pod image tag.

@l0nax I have formatted the code with Markdown.
Yes, kubenab does not change the pod image tag; it adds the private repo address to the pod image.
But the problem still exists: after kubeadm init, the mirror pods for the static kube-apiserver/kube-cm/kube-scheduler pods fail to be created.

Ok now I understand.
I will work on your Issue.

@rimusz Can you please assign me to this Issue and add the label bug?

@l0nax
OK. Maybe you can use the Kubernetes mirror-pod check below:

    // MirrorPodAnnotationKey represents the annotation key set by kubelets when creating mirror pods
    MirrorPodAnnotationKey string = "kubernetes.io/config.mirror"

    if _, isMirror := annotations[core.MirrorPodAnnotationKey]; isMirror {
        ...
    }
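The proposed guard could be sketched like this (illustrative only; the constant value matches the core API's mirror pod annotation, but the guard itself is a proposal, not existing kubenab code):

```go
package main

import "fmt"

// MirrorPodAnnotationKey is the annotation the kubelet sets on mirror
// pods (kubernetes.io/config.mirror in the Kubernetes core API).
const MirrorPodAnnotationKey = "kubernetes.io/config.mirror"

// isMirrorPod sketches the proposed check: a mutating webhook could
// skip patching any pod carrying the mirror annotation.
func isMirrorPod(annotations map[string]string) bool {
	_, ok := annotations[MirrorPodAnnotationKey]
	return ok
}

func main() {
	mirror := map[string]string{MirrorPodAnnotationKey: "hash-of-static-pod-config"}
	normal := map[string]string{"app": "dashboard"}

	// The webhook would leave the first pod unpatched and patch the second.
	fmt.Println(isMirrorPod(mirror), isMirrorPod(normal))
	// prints: true false
}
```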

@mzyfree sorry for the late answer, I was busy. There is a problem with your simple approach: e.g. if you deploy your K8s cluster via kubeadm, there will be different labels/annotations than if you use kops to deploy your cluster. So it would be difficult to implement this for every deployment type.

@l0nax The problem is:
If I have deployed kubenab and I want to deploy a static pod via the kubelet, then the mirror pod for that static pod will fail to be created in kube-apiserver. The kubeadm-deployed cluster is just one specific example.
In short, kubenab works well for normal pods but breaks static pods.