AUTHOR: Andy MacDonald

Learn Kubernetes from the ground up.

Kubernetes (k8s) has surged in popularity in recent years. If you’re looking to deploy many containerised applications, k8s is indisputably the current best way to do it, whether in private or public cloud environments.

All of the major and not-so-major cloud providers offer a managed k8s cluster service.

The web-based repository hosting platform / DevOps lifecycle tool / "everything you could possibly need to do stuff" tool GitLab provides integrations to deploy directly into a Kubernetes cluster that you define. Every other major VCS offering has followed suit.

The point is… it’s incredibly popular, and pretty much every major technology-based company on the planet is investing heavily in it. It’s not going away, and you should probably learn a thing or two about it.

What’s the Point of Kubernetes?

Why should you care about Kubernetes? What problems does it solve?

At its simplest level, k8s solves a problem generated by another solution: containerisation.

Containerisation, if you’re not familiar, is the process of taking an application and packaging it into a single runnable/executable software image.

The process of containerising an application requires understanding its inputs, dependencies, configuration files, and outputs, and then baking all of these things into an immutable image.

The process of developing this image can be quite difficult depending on what it is, but once it’s built, it can be instantiated as a container at will on any system (providing that system has a container runtime such as Docker).

Containerisation primarily solves the problem of portability of applications between development environments and production environments.

With this capability to easily spawn a variety of applications on any system comes the capability to readily spawn multiple instances. This solves the problem of scalability, but in turn generates new problems: multiple instances become unwieldy and need to be managed. This is the core problem that Kubernetes solves: orchestration.


What Can I Do With It?

What can you do with Kubernetes? Lots of things, obviously:

  • Deployment and running of containerised applications

But then also:

  • Service discovery and load balancing
  • Release management and automated rollouts/rollbacks
  • “Self-healing”
  • Secrets and configuration management

How Does It Do It?


Nodes

A node is either a virtual or physical machine within the cluster, capable of running containers.

Nodes run the container runtime that is required (e.g. Docker — although containerd is the default), as well as the services kubelet and kube-proxy.

  • kubelet — an agent running on every worker node that manages the containers running in Pods. It continually compares the containers' current state against the Pod's specification and reconciles any differences.
  • kube-proxy — a network proxy running on every node that maintains the network rules allowing traffic to reach Pods, from clients inside or outside the cluster. It implements part of the Kubernetes Service concept.

As well as these base services, Pods run on nodes, and a node can run multiple Pods at once.


Pods

A Kubernetes Pod is the smallest and simplest object that's deployable on a k8s cluster.

Pods run on Kubernetes nodes. A Pod can consist of a single or multiple containers as well as volumes.
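A Pod is defined in a YAML manifest. The sketch below shows a minimal single-container Pod; the name `hello-pod`, the `app: hello` label and the `nginx` image are hypothetical choices for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # hypothetical name
  labels:
    app: hello           # label used later by Services/Deployments to select this Pod
spec:
  containers:
    - name: web
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods like this — they're usually managed for you by a Deployment — but every higher-level object ultimately boils down to a Pod spec like this one.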


Volumes

Similar to the concept of volumes in Docker, Kubernetes volumes are a way of persisting data relevant to a Pod, in spite of the ephemeral (short-lived) nature of containers, ensuring data survives container restarts.

Volumes are essentially just a simple directory that’s accessible across containers running inside a Pod.

Kubernetes volumes are bound to the lifecycle of their Pod: they persist across individual container restarts, but by default they cease to exist when the Pod does (unless you use a mechanism such as a PersistentVolume).
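A common illustration of this is an `emptyDir` volume shared between two containers in the same Pod — a sketch with hypothetical names (`shared-data-pod`, `writer`, `reader`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod
spec:
  volumes:
    - name: scratch
      emptyDir: {}          # lives as long as the Pod; survives container restarts
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data  # both containers see the same directory
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
```

If either container crashes and restarts, the file in `/data` is still there; delete the Pod, and it's gone.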

Control Plane

The control plane (sometimes called the Kubernetes Master) is a set of services that run on master nodes.

These services control how the k8s software interacts with the cluster. The services primarily consist of:

  • kube-apiserver — a component that exposes the Kubernetes API and acts as the front end for all communication between master and worker components in the cluster.
  • kube-scheduler — a component with a simple-to-define but vitally important responsibility: deciding which node each newly created Pod should run on, and when.
  • etcd — a distributed key-value store that holds the cluster's resource definitions, configuration, and the status of every object within the cluster — the cluster's source of truth.
  • kube-controller-manager — a component responsible for managing the lifecycle of pods. kube-controller-manager retrieves desired and current cluster state from etcd through kube-apiserver and instantiates or removes the required resources as necessary.

Control plane components can reside in a single master node, or they can also be spread in various topologies across multiple master nodes.


Cluster

The cluster is the group of nodes — some running the Kubernetes software itself, the rest running the services, Pods, and components that Kubernetes manages.

Clusters must have at least one master node. For a cluster to do some meaningful work, you must also have some services deployed. These services must generally be deployed on worker nodes, although it’s possible and can make sense to deploy services on a master node as well.


Services

Kubernetes Pods do not have an infinite lifespan. A Pod can be removed when a newer version of its containers is rolled out, or for any number of other reasons.

Services are an abstraction — effectively a stable network endpoint and load balancer in front of a set of Pods — solving the problem of interdependence and availability between Pods (which may disappear at any time).

If you have a slice of an application that runs in one Pod and another slice that resides in another, you can use a service as a way of maintaining a constantly available interface from one service to another, in spite of the individual composition of Pods in each service.
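A Service selects its backing Pods by label. The sketch below assumes Pods carrying a hypothetical `app: hello` label and listening on port 80:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc       # hypothetical name
spec:
  selector:
    app: hello          # routes traffic to any Pod carrying this label
  ports:
    - port: 80          # stable port other Pods connect to
      targetPort: 80    # port the container actually listens on
```

Other Pods in the cluster can now reach the application at `hello-svc:80`, no matter how many times the underlying Pods are replaced.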


Ingress

Ingress is the k8s mechanism for exposing services running within a cluster to the outside world.

Ingress is managed by an ingress controller and provides routes to services via HTTP and HTTPS.
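An Ingress resource maps external hostnames and paths to Services. This sketch assumes a hypothetical hostname and a Service named `hello-svc` (an Ingress controller such as ingress-nginx must be installed for it to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
    - host: hello.example.com     # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-svc   # the in-cluster Service to route to
                port:
                  number: 80
```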


Deployments

A Deployment ties all of these k8s concepts together: it's an instruction you provide to k8s to create or cycle the Pods running your application containers.

Deployments in k8s are more than just simple scripts that install software.

Kubernetes manages the rollout of a deployment to maintain the availability of services, and also to handle rollback in the event of failure.
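A minimal Deployment sketch, again with hypothetical names and images, shows how the desired replica count and rollout strategy are declared:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 3                # k8s keeps three Pods running at all times
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate      # replace Pods gradually when the spec changes
  template:                  # the Pod template stamped out for each replica
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25  # bump this tag and k8s rolls out the new version
          ports:
            - containerPort: 80
```

Change the image tag and re-apply, and Kubernetes replaces the Pods one by one, rolling back if the new version fails its checks.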

…Anything Else?


kubectl

kubectl is the main CLI for controlling the k8s cluster and inspecting information from it.

The syntax is fairly simple, but there are quite a lot of commands to learn:
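A few of the commands you'll reach for most often (the `<pod-name>` placeholders are, of course, whatever your Pods happen to be called):

```
# Inspect the cluster and its workloads
kubectl get nodes                     # list the nodes in the cluster
kubectl get pods --all-namespaces     # list Pods across every namespace
kubectl describe pod <pod-name>       # detailed state and events for one Pod

# Create, update and remove resources from manifest files
kubectl apply -f deployment.yaml      # create or update the resources in a file
kubectl delete -f deployment.yaml     # remove them again

# Debugging
kubectl logs <pod-name>               # container logs
kubectl exec -it <pod-name> -- sh     # open a shell inside a running container
```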


Jobs and Cron Jobs

As well as indefinitely long-running services, k8s also has the concept of Jobs, which run to completion, and CronJobs, which run on a defined schedule.

As you might imagine, these sorts of workloads are scheduled via kube-scheduler and give a good option for services that have an explicit lifespan, such as running until completion of a process and then terminating.
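A CronJob declares its schedule in standard cron syntax. A sketch with a hypothetical name and a trivial command:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report            # hypothetical name
spec:
  schedule: "0 2 * * *"           # standard cron syntax: 02:00 every night
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox:1.36
              command: ["sh", "-c", "echo generating report"]
```

Each time the schedule fires, Kubernetes creates a Job, which runs the container to completion and then terminates.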


etcd

etcd is more than just a ledger of everything the cluster has ever created — it also backs Kubernetes' application configuration and secrets storage.

This means you can drop useful things like database URIs and passwords into ConfigMap and Secret objects, and then reference them as needed in your resource definitions!
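For example, a Secret and a Pod that consumes it as environment variables — the names and credential values below are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                       # written as plain text; stored base64-encoded
  DB_URI: postgres://db.internal:5432/app   # hypothetical values
  DB_PASSWORD: changeme
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      envFrom:
        - secretRef:
            name: db-credentials  # injects DB_URI and DB_PASSWORD as env vars
```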


Thanks for reading! Hopefully, this article has given you a good feel for Kubernetes and what it can offer.

This article really really really isn’t exhaustive, and there are some very advanced Kubernetes concepts out of the scope of what has been covered here.

If you want to have a play with Kubernetes, I recommend you check out the following, a zero config tool for bootstrapping k8s on a Linux host:

And if you’re interested in getting your teeth into some more advanced Kubernetes patterns, I recommend this book:

Finally, I’d also recommend you take a look at the following comic:


Who contributed to this article

  • Andy MacDonald
    Senior Software and DevOps Engineer

    Andy MacDonald is a senior software and DevOps Engineer at BlackCat. He is passionate about all things technology and loves learning about new technologies and their application. Andy has extensive technical skills across product development, application architecture and agile/DevOps process improvement. In his spare time, he’s a volunteer mentor and coding coach and an active member of the Birmingham tech scene. He’s also a regular guest writer for a number of online technical journals as well as a regular contributor to BlackCat’s own technical blog.