enix / kube-image-keeper

kuik is a container image caching system for Kubernetes


Pull secret config in manually created repositories get overwritten

dudell-bud opened this issue · comments

I mentioned this in another ticket, then realised it may not be expected behaviour, so I didn't want to go off topic there.

I crafted a bunch of Repository entries for the images I wanted to cache, and I use a label to determine whether a workload has cached images or not. As soon as I apply the label, kuik picks it up and creates a corresponding CachedImage, and at the same time updates the Repository I created, editing pullSecretsNamespace and nulling pullSecretNames.

I don't think this should happen - if there are secrets in the repository I would like them to remain the same.
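For context, one of my hand-crafted Repository entries looked roughly like this (a sketch; the kuik.enix.io/v1alpha1 apiVersion and the names are illustrative, but the spec fields are the ones being overwritten):

    apiVersion: kuik.enix.io/v1alpha1   # assumed group/version
    kind: Repository
    metadata:
      name: my-registry                 # hypothetical name
    spec:
      name: europe-docker.pkg.dev/my-project/my-repo   # hypothetical registry path
      pullSecretsNamespace: kuik-system
      pullSecretNames:
        - my-pull-secret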

To try to get around it, I create the CachedImages and the Repositories together, before I enable the label. This works initially and caches all my images, but after 10 minutes or so all the Repositories update to blank out pullSecretNames and change pullSecretsNamespace. To be able to use kuik I'm having to run a cron that patches these back up.
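The cron is nothing clever - it just keeps re-applying a merge patch along these lines with `kubectl patch repository <name> --type merge --patch-file patch.yaml` (names illustrative):

    spec:
      pullSecretsNamespace: kuik-system
      pullSecretNames:
        - my-pull-secret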

I'm not sure what the intended behaviour here is but I would love the ability to override what kuik picks up as the right settings for pull secrets - I'm happy enough to do this in the repository but I need to be able to tell kuik not to change them back.

Hello,

Indeed, this is a bug - thanks for reporting. I will start working on a fix soon.

This is the intended behavior, but I understand it can be a bit surprising at first glance. I will let @Nicolasgouze elaborate further.

I thought it may be - it's one of those where I can understand why you might want to update the secrets in the Repository as the deployment changes - but at the same time I would love the ability to just give everything a default secret in the kuik namespace and use that to pull everything. It's a good way around the GKE issues.

I'm struggling to find any way around it other than the cron to override the controller's updates - I don't have the ability to change deployments to reference secrets / a service account with secrets.

To achieve what you want to do, you could use a service account in your deployments and attach a pull secret to this service account; this is supported by kuik and will populate the Repository pull secrets with the ones from the service account. Also, we will certainly implement some auto-authentication against GKE as we did for ECR (see #113).
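Concretely, that is the standard Kubernetes pattern below (all names are illustrative):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: puller                 # hypothetical
      namespace: my-app            # hypothetical
    imagePullSecrets:
      - name: my-pull-secret       # secret must exist in the same namespace
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
      namespace: my-app
    spec:
      replicas: 1
      selector:
        matchLabels: { app: my-app }
      template:
        metadata:
          labels: { app: my-app }
        spec:
          serviceAccountName: puller   # kuik picks up the SA's pull secrets
          containers:
            - name: app
              image: europe-docker.pkg.dev/my-project/my-repo/app:v1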

I don't have the ability to change deployments to reference secrets / a service account with secrets.

Yeah, unfortunately I don't have the ability to add the service account to the deployments - and this would require the secret to be put into every namespace rather than deployed just in the kuik-system namespace, which becomes a bit more complicated with our automation.

Happy to close it out - however the only way I've found to code around this is by having something continually re-patching the secrets. The frustrating thing is that the edit the controller is making goes from:

    pullSecretsNamespace: kuik-system
    pullSecretNames: ['my-pull-secret']

and patching it to:

    pullSecretsNamespace: default
    pullSecretNames: []

You'd almost want it such that if there are no secrets defined in the deployment's SA, then just leave it - don't set those fields. That way, if the SA on the deployment references secrets, they update to those; and if it doesn't, they remain what they were when the Repository was created (either blank in the general case, or in my case my specific set of creds).

Something like:

    // Sketch inside kuik's reconciler: only copy pull-secret settings over when
    // the incoming (service-account-derived) spec actually has secrets;
    // otherwise leave whatever was set on the Repository alone.
    operation, err := controllerutil.CreateOrPatch(ctx, r.Client, repo, func() error {
        repo.Spec.Name = repository.Spec.Name
        if len(repository.Spec.PullSecretNames) > 0 {
            repo.Spec.PullSecretNames = repository.Spec.PullSecretNames
            repo.Spec.PullSecretsNamespace = repository.Spec.PullSecretsNamespace
        }
        return nil
    })
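If I read controllerutil.CreateOrPatch right, it only issues a patch for fields the mutate function actually changes, so skipping the assignments when the SA has no secrets should leave manually set values in place instead of resetting them.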

That way:

  • Default creds like I need can be supported, with creds in one namespace and no need to change running config
  • If the pod references a SA with different creds, those will be updated / take priority

Hi @dudell-bud, for the record, we have decided NOT to change the current kuik behaviour.
The main rationales behind this decision are the following. We want:

  • To keep it simple
  • To stick to the default Kubernetes way of doing things
  • The k8s cluster to keep working, as much as possible, in case kuik faces an issue (with the internal registry, for example)


Hey @Nicolasgouze - just some clarification on this, because I think some context went missing somewhere.

  • Point 1 I'm not sure about - in other issues, the Repository CRDs were introduced to help simplify the configuration of pull secrets. Not being able to manage those fields any more than when they were on the raw CachedImage doesn't really do that, and the only available workaround, adding a SA to all deployments, does not seem simpler. It also breaks GitOps a bit if you want to pre-heat - Argo CD and kuik fight over those fields (see the ignoreDifferences sketch at the end of this comment).

We will, in the next weeks, release a new version that will introduce a Repository CRD, pull secrets will then be handled at the Repository level. I hope it will make it easier to manually provide custom pull secrets.

  • Sticking to the k8s way of doing things - again, I'm not sure what that means in this case?
  • And on the final point, my suggested change would not impact the ability of the k8s cluster to continue working. If anything, my suggestion actually prevents overly exposing secrets (why make the pull secret available outside the kuik-system namespace if you don't have to?)

If y'all don't think the change makes sense or it breaks a convention, I understand - especially since the solution @paullaffitte suggested, similar to the ECR login, would work for me. I just want to make sure it was clear what I was proposing, because I couldn't work out from your message what you thought I was suggesting.
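For reference, the workaround we're eyeing on the GitOps side is Argo CD's ignoreDifferences, so Argo CD stops reverting the controller's edits (a sketch; the kuik.enix.io group is my assumption, and the Application is abbreviated):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app                  # hypothetical
      namespace: argocd
    spec:
      # source/destination omitted for brevity
      ignoreDifferences:
        - group: kuik.enix.io       # assumed CRD group for Repository
          kind: Repository
          jsonPointers:
            - /spec/pullSecretNames
            - /spec/pullSecretsNamespace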

You'd almost want it such that if there are no secrets defined in the deployment's SA, then just leave it - don't set those fields. That way, if the SA on the deployment references secrets, they update to those; and if it doesn't, they remain what they were when the Repository was created (either blank in the general case, or in my case my specific set of creds).

That is a good idea, and I thought of it first when I said I would write a fix. However, this goes against the philosophy of the project, which is, as @Nicolasgouze said, "to stick to the default Kubernetes way of doing things". By that he meant that if you ever come to uninstall or deactivate kuik (by deleting the mutating webhook), your workloads should continue to work as expected. With this solution it would not be the case, since pods would be missing secrets, and in that regard it differs from default Kubernetes functioning.

Normally, putting secrets in pods that require them should not be an issue, since when you don't use kuik you already have to do that. In your case it's a bit different, because you rely on the automatic authentication against GCR normally provided by GKE, so you indeed lose a feature and it becomes harder for you to deploy your workloads.

So, for now, the solution I can offer would be to have this automatic authentication implemented in kuik. Would this solve your issue? Otherwise we could possibly think of a solution based on the snippet you provided, but it would require careful thought first and it certainly would not be implemented in haste.

By that he meant that if you ever come to uninstall or deactivate kuik (by deleting the mutating webhook), your workloads should continue to work as expected. With this solution it would not be the case, since pods would be missing secrets, and in that regard it differs from default Kubernetes functioning.

Ah yeah, that was the bit of context I had missed - my thought process was going the other way: you'd have to have something in place to onboard kuik in this way (like we do with GKE), so offboarding would be seamless as it would still be in place.

Yeah - I think if y'all can support the auto login with GCR then that works for me, and like I said in the other issue, I'm more than happy to put a hand up to test that all out. We have the little patcher thing running in our envs at the moment - it's brittle, but it works for non-prod, so we can wait. Thanks for clearing that up :)

Closing this and will watch for (or perhaps add a PR for, if I can get the time) GCR/GAR auto login.