kubernetes-sigs / controller-runtime

Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)


Does controller-runtime support that multiple controllers share one Manager and use different clients (RBAC)?

luc99hen opened this issue · comments

I want to implement a component similar to Kube-Controller-Manager, which consists of multiple controllers. Kube-Controller-Manager has an option, UseServiceAccountCredentials, to control whether those controllers share one giant set of RBAC permissions or use separate ones.

Kube-Controller-Manager is built on client-go; I want to ask whether there is a similar mechanism in the controller-runtime framework. I did some research and found that we can override the default NewClient func. However, that is not enough for this case, where different controllers need to use different clients.

I don't think we have a mechanism for that

Thank you for your response.

Currently, I use a custom func GetClientByControllerNameOrDie(mgr manager.Manager, controllerName string) to replace the original mgr.GetClient():

func GetClientByControllerNameOrDie(mgr manager.Manager, controllerName string) client.Client {
	// if controllerName is empty, return the base client of manager
	if controllerName == "" {
		return mgr.GetClient()
	}

	clientStore.lock.Lock()
	defer clientStore.lock.Unlock()

	if cli, ok := clientStore.clientsByName[controllerName]; ok {
		return cli
	}

	// check if the controller-specific ServiceAccount exists, creating it if necessary;
	// die on failure, consistent with the OrDie naming
	_, err := getOrCreateServiceAccount(mgr.GetClient(), "kube-system", controllerName)
	if err != nil {
		panic(err)
	}

	// get base config
	baseCfg := mgr.GetConfig()

	// rename cfg user-agent
	cfg := rest.CopyConfig(baseCfg)
	rest.AddUserAgent(cfg, controllerName)

	// add controller-specific token wrapper to cfg
	cachedTokenSource := transport.NewCachedTokenSource(&tokenSourceImpl{
		namespace:          "kube-system",
		serviceAccountName: controllerName,
		cli:                mgr.GetClient(),
		expirationSeconds:  defaultExpirationSeconds,
		leewayPercent:      defaultLeewayPercent,
	})
	cfg.Wrap(transport.ResettableTokenSourceWrapTransport(cachedTokenSource))

	// construct client from cfg
	clientOptions := client.Options{
		Scheme: mgr.GetScheme(),
		Mapper: mgr.GetRESTMapper(),
		// todo: this is just a default option, we should use mgr's cache options
		Cache: &client.CacheOptions{
			Unstructured: false,
			Reader:       mgr.GetCache(),
		},
	}

	cli, err := client.New(cfg, clientOptions)
	if err != nil {
		panic(err)
	}
	clientStore.clientsByName[controllerName] = cli

	return cli
}

Using this method is sufficient for basic purposes, but the client derived from this func misses the various client build options (such as the cache options) that come with manager.New(). Those options are only used transiently to construct the client inside the manager, and the manager exposes no interface to retrieve them. I'm considering whether we could change the mgr.GetClient() interface to mgr.GetClient(controllerName string), similar to mgr.GetEventRecorderFor(name string), for this scenario.


Hi, @sbueringer @alvaroaleman What do you think of this idea?

I would assume the kube-controller-manager uses impersonation rather than different tokens, and that is likely something you could implement as a client wrapper.

> I would assume the kube-controller-manager uses impersonation rather than different tokens

This is wrong, it uses different tokens: https://github.com/kubernetes/kubernetes/blob/fd2d352d291bc4fb36d51e52b33a6f6849f20f35/staging/src/k8s.io/controller-manager/pkg/clientbuilder/client_builder_dynamic.go#L121

So yeah, the best way to replicate this behavior in controller-runtime is to construct a new client for every component using an empty kubeconfig with a dynamic token source like the one in the KCM.

@alvaroaleman Thank you for your reply~

> the best way to replicate this behavior in controller-runtime is to construct a new client for every component using an empty kubeconfig with a dynamic token source like the one in the KCM.

I'm currently using this approach, but the client wrapper poses an issue: it's challenging to inherit the manager's client settings, such as the caching options.