This is a cheat sheet for using Docker. Based on https://docs.docker.com/get-started/
docker --version
docker info
or:
docker version
sudo groupadd docker
sudo usermod -aG docker $USER
(After this, log out and log back in.)
docker run hello-world
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "/home/$USER/.docker" -R
docker run hello-world
docker image ls
docker image ls --all
or:
docker image ls -a
docker image ls -q
docker image ls -aq
In this step we define a new image, build it, run it as a container, and share it on the Docker public registry. The app consists of three files: a Dockerfile, a requirements.txt and an app.py. The Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
requirements.txt:
Flask
Redis
app.py:
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
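The fallback branch of hello() can be exercised without a running Redis server; a minimal sketch of the same formatting logic (the template and fallback string are copied from app.py above, everything else runs as plain Python outside the container):

```python
import os
import socket

# The fallback value used in app.py when redis.incr() raises RedisError.
visits = "<i>cannot connect to Redis, counter disabled</i>"

# Same template as in hello().
html = "<h3>Hello {name}!</h3>" \
       "<b>Hostname:</b> {hostname}<br/>" \
       "<b>Visits:</b> {visits}"

# The Dockerfile sets ENV NAME World; outside the container
# the default "world" is used instead.
page = html.format(name=os.getenv("NAME", "world"),
                   hostname=socket.gethostname(),
                   visits=visits)
print(page)
```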
The -t option applies a friendly name to the image. Note the trailing dot, which sets the build context to the current directory:
docker build -t friendlyhello .
docker image ls
The local machine's port 4000 is mapped to the container's published port 80 using -p:
docker run -p 4000:80 friendlyhello
The URL where the app can be checked is http://localhost:4000.
Check the app from the terminal using curl:
curl http://localhost:4000
Run the app in the background (detached mode) with -d:
docker run -d -p 4000:80 friendlyhello
Check the Docker container:
docker container ls
This step needs a Docker ID - sign up at https://cloud.docker.com.
docker login
Replace <username>, <repository> and <tag> with your username, repository name and tag.
docker tag friendlyhello <username>/<repository>:<tag>
docker push <username>/<repository>:<tag>
The image can be seen on https://hub.docker.com/.
docker run -p 4000:80 <username>/<repository>:<tag>
docker container ls -a
docker container stop <hash>
docker container kill <hash>
docker container rm <hash>
docker container rm $(docker container ls -a -q)
docker container ls
docker image rm <image id>
docker image rm $(docker image ls -a -q)
docker login
docker tag <image> <username>/<repository>:<tag>
docker push <username>/<repository>:<tag>
docker run <username>/<repository>:<tag>
In this part we will scale the app and enable load balancing, using the following docker-compose.yml file:
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
docker swarm init
The app name is getstartedlab:
docker stack deploy -c docker-compose.yml getstartedlab
This will run our service stack with 5 container instances of our deployed image on one host.
docker service ls
The service name is getstartedlab_web.
A single container running in a service is a task.
docker service ps getstartedlab_web
docker container ls
curl http://localhost:4000
If this is run several times, the hostname, which contains the container ID, changes. This demonstrates the load balancing.
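The request distribution across replicas can be pictured with a simplified round-robin sketch (the real swarm ingress load balancing is implemented differently, and these container IDs are made up for illustration):

```python
from itertools import cycle

# Five made-up container IDs standing in for the 5 replicas of getstartedlab_web.
replicas = ["a1b2c3d4", "e5f6a7b8", "c9d0e1f2", "a3b4c5d6", "e7f8a9b0"]

# Consecutive requests are handed to the replicas in turn,
# so repeated curl calls report different hostnames.
rr = cycle(replicas)
served_by = [next(rr) for _ in range(10)]
print(served_by)
```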
The app can be scaled by changing the replicas value in docker-compose.yml and re-running the docker stack deploy -c docker-compose.yml getstartedlab command. An in-place update happens; there is no need to shut down the app first.
docker stack rm getstartedlab
docker swarm leave --force
docker inspect <task or container id>
Applications can be deployed onto a cluster running on multiple machines. Multiple machines joined into a cluster form a "swarm" (from this point on, "cluster" and "swarm" are used interchangeably). Machines in a swarm can be physical or virtual, and after joining the swarm they are called nodes. So a swarm is a group of machines running Docker and joined into a cluster. Once the swarm is built, the usual Docker commands can be used, but they are executed by a swarm manager.
The swarm manager can follow strategies for executing the commands, for example:
- emptiest node: fills the least utilized machines with containers
- global: each machine gets exactly one instance of the specified container
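The "emptiest node" strategy can be sketched in a few lines of Python (node names and the starting container counts are made up for illustration):

```python
# Toy sketch of the "emptiest node" strategy: each new container is placed
# on the node currently running the fewest containers.
nodes = {"node1": 2, "node2": 0, "node3": 1}

def place(container_count):
    placements = []
    for _ in range(container_count):
        target = min(nodes, key=nodes.get)  # pick the least utilized node
        nodes[target] += 1
        placements.append(target)
    return placements

placements = place(3)
print(placements)
print(nodes)
```

Each placement updates the node's count, so repeated placements level out the utilization across the nodes.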
Roles of the machines in a swarm:
- swarm manager:
  - executes commands
  - authorizes machines to join the swarm
- worker:
  - provides capacity for the work (workers cannot tell other machines what to do)
Docker can be used in:
- single-host mode: no swarm; containers run on the single host
- swarm mode: enables the use of a swarm. The current machine instantly becomes a swarm manager.
TODO
TODO
TODO
Components of a Kubernetes cluster:
- Master nodes: the entry point for all administrative tasks. (Communication with them goes via CLI, GUI or API.) Components:
  - API server: all administrative tasks are performed via the API server. (It receives REST commands, executes them and stores the resulting state in the distributed key-value store.)
  - Scheduler: schedules the work; has the resource usage info for each worker node. Takes into account QoS requirements, data locality, etc.
  - Controller manager: manages non-terminating control loops so that the current state of the objects matches the desired state. (Watches the state of the objects through the API server.)
  - Distributed key-value store, e.g. etcd: stores the cluster state, configuration details, subnets, ConfigMaps, Secrets, etc. etcd is a distributed key-value store based on the Raft Consensus Algorithm.
- Worker nodes:
  - Container runtime: runs and manages the containers' lifecycle. Examples: containerd, rkt, lxd. (Docker is not a container runtime but a platform which uses containerd as its container runtime.)
  - kubelet: an agent running on the worker nodes and communicating with the master node. It receives Pod definitions through the API server and runs the containers of the Pod. It connects to the container runtime using the Container Runtime Interface (CRI); the kubelet is a gRPC client and the CRI shim is a gRPC server.
  - kube-proxy: listens to the API server and sets up routes from/to Services. Exposes the Services to the external world.
  - Pods: the scheduling unit in Kubernetes. A logical collection of one or more containers which are always scheduled together.
  - Containers
  - Services: group Pods and load balance between them.
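The etcd entry above mentions the Raft Consensus Algorithm; its core idea of quorum-based commits can be illustrated with a toy calculation (the cluster sizes here are just examples):

```python
# A write to the key-value store is committed once a majority (quorum)
# of the cluster members have acknowledged it.
def quorum(cluster_size):
    return cluster_size // 2 + 1

# Quorum sizes for 3-, 5- and 7-member clusters.
print(quorum(3), quorum(5), quorum(7))

# A 5-member cluster commits a write after 3 acknowledgements,
# so it keeps working even if 2 members fail.
acks = 3
committed = acks >= quorum(5)
print(committed)
```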
Container Runtime Interface (CRI):
- protocol buffers
- gRPC API
- libraries

Services implemented by CRI:
- ImageService: image-related operations
- RuntimeService: Pod and container-related operations

CRI shims:
- dockershim
- cri-containerd
- CRI-O
Installation types of Kubernetes:
- All-in-one single node
- Single-node etcd, single-master, multi-worker
- Single-node etcd, multi-master, multi-worker
- Multi-node etcd, multi-master, multi-worker
Other aspects of the installation type:
- Place of installation:
  - localhost:
    - minikube (the preferred way for all-in-one)
    - Ubuntu on LXD
  - on-premise virtual machines:
    - virtual machine created with Vagrant
    - virtual machine created with KVM
  - on-premise bare metal (automated installation tools: ansible, kubeadm):
    - RHEL
    - CoreOS
    - CentOS
    - Fedora
    - Ubuntu
    - etc.
  - cloud (private or public cloud):
    - hosted solutions
    - turnkey cloud solutions
    - bare metal
- Operating system:
  - RHEL
  - CoreOS
  - CentOS
  - etc.
- Networking solution
- Further aspects to consider
Kubernetes installation tools:
- kubeadm
- KubeSpray (Kargo)
- Kops