
---
layout: post
title: "KubeCon Review - Containers, Kubernetes, Orchestration, woah!"
date: 2017-12-11 07:00:00 -0800
author: "@annasedlar"
tags:
  - containers
  - orchestration
  - deployment
  - devops
  - cloud
  - conferences
  - kubernetes
  - go
  - infrastructure
---

"Cloud native infrastructure is more than servers, network, and storage in the cloud—it is as much about operational hygiene as it is about elasticity and scalability” -- RedHat

What is Kubernetes and Why Should BNR Care?

Kubernetes is a container orchestration system originally built by teams at Google. The project now lives under the Cloud Native Computing Foundation, a vendor-neutral space with strong community support, and according to the tech news it's one of the fastest-growing open source projects of all time! You may have heard of Docker. While these two projects exist very closely together, they are not the same. Docker is used to build containers themselves; it is considered a container runtime. (Docker is also similar to a virtual machine, in that both are abstraction layers running atop a machine. Docker containers, however, are smaller and quicker to spin up than VMs.) Containers are a means of bundling or packaging a software application with its dependencies so it can be run by any system that supports the format (i.e. the Docker Engine). Docker is all about managing apps within an individual machine; Kubernetes is the platform that can manage, scale, monitor, and configure these containers across many machines.
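
As a rough sketch of the packaging step described above (the image name and port mapping here are hypothetical, and assume a Dockerfile in the current directory):

```shell
# Build an image from a Dockerfile in the current directory
# ("my-app" is a hypothetical image name).
docker build -t my-app:1.0 .

# Run the packaged app as a container, mapping host port 8080
# to port 80 inside the container.
docker run --rm -d -p 8080:80 my-app:1.0

# List running containers to confirm it started.
docker ps
```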

Kubernetes was designed to operationalize containerized applications. Under Kubernetes, containers run as part of a single schedulable unit it introduced called a Pod. Kubernetes is often referred to as a Container Orchestration Environment (COE); think fleets of containers across multiple hosts. Docker recently released a new project, Docker Swarm, which is its own COE and addresses these functions similarly to Kubernetes. COEs manage the containers when running multiple instances of a containerized application. A COE's simplest function is to launch an application and ensure that application keeps running: if a given container instance fails, Kubernetes (or Docker Swarm) recognizes this and spins up another container (or rather, another Pod, in Kubernetes's case). Kubernetes can also be configured to scale the application up or down in response to demand.
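
As a sketch of what that self-healing and scaling looks like in practice (assuming a running cluster and a hypothetical Deployment named nginx-deployment):

```shell
# Scale the application to five replicas; Kubernetes starts or
# stops Pods until the observed count matches the desired count.
kubectl scale deployment nginx-deployment --replicas=5

# Delete one Pod and watch the controller notice the missing
# replica and schedule a replacement.
kubectl delete pod <some-pod-name>
kubectl get pods --watch
```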

Kubernetes is definitely more DevOps than dev work. In fact, it's more Ops than DevOps. I was certainly in the 1% of least experienced attendees at this conference of 4,300. It was fascinating to witness the excitement in what is normally the slower, more stable world of Ops, and I can certainly see the benefits of managing apps in containers in the cloud. This is the idea behind "cloud native": the cloud becomes the default deployment environment for apps, versus proprietary data centers that must be monitored and controlled by a company's own employees. I left exhausted and excited to learn more.

What value does this hold for Big Nerd Ranch? This was on my mind throughout the conference, and to be honest, I'd love to hear the thoughts of other, more experienced nerds on this topic. From what I understand, our client work is usually a code hand-off, and the client is responsible for deploying the application where and how they see fit. And for our in-house apps, I have only seen us use Heroku, which seems perfectly sufficient for hosting our small apps. I imagine if we were a product team, Kubernetes would definitely be more relevant.

Gimme Some Context - The Big Dog$ in the Container Game:

(Or at least those that were represented at KubeCon)

  • Google
  • Google Cloud Platform
  • Heptio
  • IBM
  • Microsoft - Azure
  • Amazon - AWS
  • Docker
  • Red Hat
  • CoreOS
  • Tigera
  • Mesosphere
  • DataDog
  • Sysdig
  • WeaveWorks
  • Mirantis
  • Huawei
  • Meteor
  • Dynatrace

Helpful Resources

People to Follow

Nitty Gritty Components of Kubernetes (If you reeeeally want to know..)

Pods

The smallest deployable unit of computing that can be created and managed in Kubernetes. Pods can contain a single container, but they aren't limited to just one. All containers in a pod run as if they were running on a single host in the pre-container world: they share a set of Linux namespaces and do not run isolated from each other. This results in them sharing an IP address and port space, and being able to find each other over localhost or communicate over the IPC namespace. Further, all containers in a pod have access to shared volumes; that is, they can mount and work on the same volumes if needed. A YAML ("YAML Ain't Markup Language") file is used to define a pod. Below is an example pod written in YAML:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
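
To illustrate the shared network namespace described above, here is a sketch of a two-container pod (the sidecar name and its command are hypothetical); the second container could reach nginx at localhost:80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar   # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
  - name: sidecar            # hypothetical helper container
    image: busybox
    # Shares the pod's IP, port space, and localhost with nginx.
    command: ["sh", "-c", "sleep 3600"]
```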

ReplicaSets

A pod by itself is ephemeral ('mortal') and won't be rescheduled if the node it is running on goes down. ReplicaSets ensure that a specific number of pod instances (or replicas) are running at any given time. If you want your pod to stay alive, you make sure you have a corresponding ReplicaSet specifying at least one replica for that pod; the ReplicaSet then takes care of (re)scheduling your instances for you. A ReplicaSet can manage not only a single pod but also a group of different pods selected by a common label. This enables a ReplicaSet, for example, to scale all the pods that together compose the frontend of an application, without having to maintain an identical ReplicaSet for each pod in the frontend.
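
A minimal ReplicaSet for the nginx pod above might look like the sketch below (apps/v1 is the stable API group in current Kubernetes; in practice you would usually create a Deployment instead, as described next):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3            # keep three copies of the pod running
  selector:
    matchLabels:
      app: nginx         # manage any pod carrying this label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```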

Deployments

A controller that provides declarative updates for Pods and ReplicaSets, changing their actual state toward a desired state at a controlled rate. Deployments are used, for example, to create new ReplicaSets or to replace existing ones. The following Deployment creates a ReplicaSet to bring up three nginx Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
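
The declarative-update behavior can be sketched with kubectl (assuming a running cluster and that the manifest above is saved as nginx-deployment.yaml, a hypothetical filename):

```shell
# Apply the manifest; Kubernetes creates the ReplicaSet and Pods.
kubectl apply -f nginx-deployment.yaml

# Declare a new image version; the Deployment controller
# replaces Pods at a controlled rate (a rolling update).
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
kubectl rollout status deployment/nginx-deployment

# Roll back if the new version misbehaves.
kubectl rollout undo deployment/nginx-deployment
```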

Services

A service is a grouping of pods that are running on the cluster. Services are "cheap," and you can have many services within the cluster of Pods; Kubernetes services can efficiently power a microservice architecture. Services provide important features that are standardized across the cluster: load balancing, service discovery between applications, health checks, and features to support zero-downtime application deployments. The example below targets TCP port 80 on any Pod with the run: my-nginx label and exposes it behind a single, stable Service address:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
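
A sketch of the service discovery this enables (assuming a running cluster with the Service above; the client pod is hypothetical and is deleted when it exits):

```shell
# The Service gets a stable virtual IP and a DNS name.
kubectl get service my-nginx

# From any pod in the cluster, the Service is reachable by name;
# kube-proxy load-balances across the matching nginx Pods.
kubectl run client --rm -it --image=busybox -- \
  wget -qO- http://my-nginx
```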

In Conclusion

To be quite honest, my experience level did inhibit me from getting the most possible out of this conference, and the fact that it isn't immediately relevant to my work at BNR means that Kubernetes lands on the back burner. I have spent some hours studying up since returning, however, and I think learning this will also teach me a lot about Linux and general computer architecture, which I feel I am sorely lacking. If anyone is interested in learning more, I brought home several books and resources I'd love to share!
