
Overcoming Kubernetes Namespace Limitations

As companies standardize on Kubernetes and move more of their workloads to the platform, the need emerges for resource isolation and, more broadly, multi-tenancy features. Kubernetes namespaces are the tool of choice to achieve this. The CNCF Survey Report 2020 found that 83% of companies use namespaces to separate Kubernetes applications.

This article explains what Kubernetes namespaces are and walks through classic multi-tenancy scenarios. It then explores some of their limitations and how those can be addressed using hierarchical namespaces or Cloud Foundry Korifi.

What Are Namespaces?

Namespaces provide a logical partitioning for managing Kubernetes resource allocation, access control, and configuration management, allowing developers to manage multiple applications and environments within the same Kubernetes cluster.

This is beneficial in many use cases, such as when running unrelated applications in different namespaces or when running different versions (such as dev, test, and production) of the same application in a single Kubernetes cluster.

For example, a company could have two applications:

App1: An e-commerce website maintained by Team1
App2: A backend to manage orders and customers maintained by Team2

Both applications need to be deployed in the same Kubernetes cluster for cost optimization. Let’s look at how we can create a basic namespace setup. All the files used in the steps below are available in the GitHub repo for easy copy/pasting.

The first step is to create namespaces for each application by running these kubectl commands.

kubectl create namespace app1-namespace
kubectl create namespace app2-namespace
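
We can verify that both namespaces were created by listing them.

# List all namespaces in the cluster
kubectl get namespaces
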
Then, we can configure access control and resource quotas for each namespace. Role-Based Access Control (RBAC) policies can limit which users or service accounts can access or manage resources in each namespace. In the example below, we will define a resource quota to restrict the amount of CPU each app can use.

For App1, we will limit CPU usage to 800 millicores, i.e., 80% of the CPU in our single-CPU example cluster. Therefore, we create an app1-resource-quota.yaml file containing the following.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: app1-resource-quota
  namespace: app1-namespace
spec:
  hard:
    cpu: "800m"

For App2, we will limit CPU usage to 100 millicores, i.e., 10% of the cluster’s CPU. We create an app2-resource-quota.yaml file containing the following.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: app2-resource-quota
  namespace: app2-namespace
spec:
  hard:
    cpu: "100m"

To apply the resource quotas, we need to run the following commands.

kubectl apply -f app1-resource-quota.yaml
kubectl apply -f app2-resource-quota.yaml

We confirm that the quotas have been applied with the following commands.

kubectl get resourcequota -n app1-namespace
kubectl get resourcequota -n app2-namespace
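
Once workloads are running, kubectl describe shows current consumption against each quota’s hard limits.

# Show hard limits and current usage for App1's quota
kubectl describe resourcequota app1-resource-quota -n app1-namespace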

App1 and App2 are managed by Team1 and Team2, respectively. Let’s give each team the appropriate deployment rights.

First, we will create a Role for each namespace that allows managing deployments. Let’s start with App1 by creating the app1-role.yaml file.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager
  namespace: app1-namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

And proceed with the same for App2 with the file app2-role.yaml.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager
  namespace: app2-namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

The next step is to create RoleBinding resources to associate each role with the respective team.

We start with the binding between App1 and Team1 with the file app1-rolebinding.yaml.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: app1-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-manager
subjects:
- kind: Group
  name: team1
  apiGroup: rbac.authorization.k8s.io

Now, let’s create the binding between App2 and Team2 with the file app2-rolebinding.yaml.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: app2-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-manager
subjects:
- kind: Group
  name: team2
  apiGroup: rbac.authorization.k8s.io

We apply the RBAC configurations using these kubectl commands.

kubectl apply -f app1-role.yaml
kubectl apply -f app1-rolebinding.yaml
kubectl apply -f app2-role.yaml
kubectl apply -f app2-rolebinding.yaml
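
To sanity-check the bindings, kubectl auth can-i can impersonate a member of each group. The user name alice below is just a placeholder, and running these commands requires impersonation rights on the cluster.

# Should return "yes": team1 manages deployments in app1-namespace
kubectl auth can-i create deployments -n app1-namespace --as=alice --as-group=team1
# Should return "no": team1 has no rights in app2-namespace
kubectl auth can-i create deployments -n app2-namespace --as=alice --as-group=team1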

These are straightforward examples, and Kubernetes namespaces offer many more possibilities. However, namespaces have limitations that can reduce their flexibility in certain use cases.

For instance, when a team owns multiple microservices with distinct secrets and quotas, placing them in separate namespaces for isolation can lead to issues: vanilla Kubernetes lacks a common ownership concept for these namespaces, making it difficult to apply namespace-scoped policies uniformly across them.

Another problematic situation: teams usually perform better when operating autonomously, but creating namespaces is a highly privileged Kubernetes operation. As a result, developers must request each new namespace from the cluster administrator, which creates unnecessary administrative work, especially in larger organizations.

Hierarchical Namespaces

Kubernetes Hierarchical Namespace Controller (HNC) addresses these issues. A hierarchical namespace functions similarly to a standard Kubernetes namespace but includes a small custom resource that specifies an optional parent namespace. This introduces the notion of ownership across multiple namespaces, extending beyond the scope of individual namespaces.

This concept of ownership enables two additional types of behavior:

Policy propagation: policies (such as RBAC Roles and RoleBindings) applied to a parent namespace are automatically propagated to its descendant namespaces.
Delegated creation: users can be granted permission to create subnamespaces under a namespace they own, without needing cluster-level privileges.

This solves both of the problems for dev teams. Kubernetes cluster administrators can create a single “root” namespace for the entire organization, along with all necessary policies, and then delegate permission to create subnamespaces to team members, for example, one subnamespace each for App1 and App2. Team members can then create subnamespaces for their own use without violating the policies that the cluster administrators imposed.

Note that HNC is an optional extension not shipped with Kubernetes by default. To install HNC, you can follow the instructions provided in the HNC GitHub repository.
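
As a quick sketch, assuming the kubectl-hns plugin is installed and a parent namespace named team-root already exists (both names are illustrative), a team member could create and inspect subnamespaces like this.

# Create a subnamespace of team-root (no cluster-level privileges needed)
kubectl hns create team1-dev -n team-root
# Display the namespace hierarchy rooted at team-root
kubectl hns tree team-root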

The Cloud Foundry Alternative

The Cloud Foundry community recently launched a platform that addresses these same concerns. The open source Korifi project provides a modern, cloud-native application delivery and management model for Kubernetes. When it comes to multi-tenancy, the project’s goal is to bring the same Cloud Foundry RBAC syntax that grants permissions to Cloud Foundry users to Kubernetes clusters.

Therefore, companies already familiar with how Cloud Foundry Orgs, Spaces, Roles, and Permissions work will not need to learn anything new. Similarly to hierarchical namespaces, Korifi allows granting a user Kubernetes permissions over an entire Org and all the Spaces it contains. Installation can be done easily by following the instructions in the Korifi GitHub repository.
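
As an illustrative sketch, assuming Korifi is installed and the cf CLI is logged in, scoping a team to its own Org and Space could look like this (the org, space, and user names are placeholders).

# Create an Org and a Space for Team1
cf create-org team1-org
cf create-space app1-space -o team1-org
# Grant a user deployment rights scoped to that Space
cf set-space-role alice team1-org app1-space SpaceDeveloper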

What to Keep in Mind

There is a classic misconception about Kubernetes namespaces: while they provide logical isolation at the API level, they do not inherently provide network isolation between namespaces. Network isolation must be achieved with NetworkPolicies, a separate Kubernetes feature, so be sure to keep this in mind when setting namespaces up.
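
For example, the following NetworkPolicy restricts ingress to pods in app1-namespace so that only traffic originating from the same namespace is allowed; note that it only takes effect with a CNI plugin that enforces NetworkPolicies.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: app1-namespace
spec:
  # An empty podSelector applies the policy to every pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Only allow traffic from pods in this same namespace
    - podSelector: {}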

This article was originally published on Cloud Native Now https://cloudnativenow.com/features/overcoming-kubernetes-namespace-limitations/.
