Meet Your Helmsman!

Kubernetes, from an Ancient Greek word meaning helmsman or pilot, has become the de facto standard for running containerized application workloads in the cloud. This blog post provides a short introduction and explains why it’s such a powerful platform for microservices.

The following sections introduce what Kubernetes is and how it supports the advantages of the microservice architecture. But first and foremost: Kubernetes is awesome! I’m therefore delighted you’ve joined me on this journey to learn how we can leverage its powers.

A Short Introduction

Kubernetes is a descendant of Borg, a container-oriented cluster management system developed at Google – it makes use of some of Borg’s greatest ideas while eliminating its pain points. As a portable and extensible open-source orchestrator for deploying containerized applications, it has over time become the de facto standard for building cloud-native applications, and today, nearly every cloud provider offers a Kubernetes-as-a-Service product.

Declarative Configuration

The most fundamental concept in Kubernetes is declarative configuration. Everything in Kubernetes is a declarative configuration object that describes a desired state, and Kubernetes not only takes action to make the current state match the desired state, but also continuously checks for deviations between the two, taking corrective action whenever necessary. This turns Kubernetes into a self-healing system.

The idea of declaratively describing state is crucially important because it requires us to fundamentally rethink how we interact with a system to achieve a certain goal (for example, rolling out a new application artifact or scaling an existing application). Traditionally, state change has been the result of a series of incremental updates, applied either by a human operator or by some script; with Kubernetes, you simply describe the desired state, and Kubernetes will make it happen, if possible. Declarative configuration thus also facilitates immutability: once a certain state has been reached, it does not change through user interaction, but only through actions taken by Kubernetes itself.
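
To make this more tangible, here is a minimal sketch of such a desired-state description – a hypothetical Deployment manifest (all names are made up for illustration) declaring that three replicas of an nginx container should be running, handed to the cluster with kubectl apply:

$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
$ kubectl apply -f deployment.yaml
deployment.apps/hello-k8s created

Note that the manifest contains no instructions on how to start those three instances – it only states that they should exist. If one of them crashes, Kubernetes detects the deviation from the desired state and starts a replacement.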

This makes life significantly easier for developers and operators alike – since declarative state descriptions are merely plain-text files, they can easily be managed by a version control system, and a single repository can describe the desired state of the entire system. Transitioning to a new state then becomes as easy as updating a configuration file and handing it over to Kubernetes, and rolling back is as simple as checking out an earlier revision in the repository and again handing all configuration files to Kubernetes.
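
A rollback could then look as follows – a sketch assuming the manifests live in a Git repository, in a directory called manifests/ (both of which are, of course, assumptions for illustration):

$ git revert HEAD
$ kubectl apply -f manifests/

The first command restores the previous desired state in version control; the second hands it back to Kubernetes, which takes care of the actual transition.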

Kubernetes And The Microservice Architecture

In a previous blog post, we’ve established that the microservice architecture is superior to the monolithic architecture for managing the complexity of large applications. We still need a platform to put its advantages into practice, though, and as it turns out, Kubernetes is a perfect fit for a microservice architecture.

The following sections describe how Kubernetes supports each of the advantages the microservice architecture provides.

Enabler For Continuous Delivery And Deployment

The microservice architecture is an enabler for Continuous Delivery and Continuous Deployment because it increases testability and deployability.

  • How Kubernetes supports testability: Thanks to Kubernetes’ scheduling abilities, the average machine utilization in a Kubernetes cluster is much higher, which makes it economically feasible to create a full testing environment for every commit a developer makes to a microservice’s codebase. On top of that, thanks to a Kubernetes abstraction called Namespace, developers don’t need to worry about influencing each other’s testing, because namespaces can isolate testing environments from each other.
  • How Kubernetes supports deployability: The blog post introducing the microservice architecture advantages claimed that developers can not only implement, but also deploy their microservice, and that the traditional gap between development and operations is thus gone. This is true – but only if the platform a release candidate is deployed to makes such deployments sufficiently straightforward by offering a high degree of automation. Kubernetes is such a platform: Its declarative configuration approach means there is no imperative sequence of steps to execute in order to deploy an update. Rather, developers only need to update the desired state description (e.g. specify a new Docker image version) and hand the new description to Kubernetes – the sketch following this list shows what that might look like.
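
Sticking with the hypothetical deployment.yaml from above, such a deployment could be sketched as follows: bump the image version in the manifest, apply it, and watch Kubernetes perform the rollout:

$ sed -i 's|nginx:1.19|nginx:1.20|' deployment.yaml
$ kubectl apply -f deployment.yaml
deployment.apps/hello-k8s configured
$ kubectl rollout status deployment/hello-k8s
deployment "hello-k8s" successfully rolled out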

Lower Communication And Organization Overhead

This is a consequence of the above – thanks to the increased testability and deployability Kubernetes provides, teams no longer need to coordinate with each other when testing and deploying their services. Kubernetes thus lets teams extend the advantage of lower communication and organization overhead, which the microservice architecture provides in the development phase, to the subsequent phases of their software’s lifecycle.

Services Are Independently Scalable

If microservices are small, stateless, decoupled units of cohesive responsibilities, they can be scaled independently – that is, if the platform those microservices run on actually supports it. We’ll see that Kubernetes has powerful features that enable teams to scale their services either horizontally or vertically, and provided those services implement reasonable health and readiness checks, Kubernetes can even do so automatically (although vertical autoscaling has some limitations, and horizontal and vertical autoscaling should not be combined).
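
As a small foretaste (again reusing the hypothetical hello-k8s Deployment), scaling manually is a one-liner, and so is setting up a Horizontal Pod Autoscaler – the latter assumes cluster metrics are available, e.g. via the metrics-server add-on:

$ kubectl scale deployment/hello-k8s --replicas=10
deployment.apps/hello-k8s scaled
$ kubectl autoscale deployment/hello-k8s --min=3 --max=10 --cpu-percent=80
horizontalpodautoscaler.autoscaling/hello-k8s autoscaled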

Better Environment For Experiments And Updates

Due to Kubernetes’ isolation capabilities, new features and technology stacks – or even just that crazy idea you’ve been thinking about for the past couple of weeks – can easily be tested or prototyped on the same shared cluster everyone else runs their stuff on.
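
For instance, spinning up an isolated playground for that crazy idea boils down to creating a dedicated namespace, deploying into it, and deleting the namespace again once you’re done – which tears down everything inside it (namespace and manifest names are again hypothetical):

$ kubectl create namespace crazy-idea
namespace/crazy-idea created
$ kubectl apply -f deployment.yaml --namespace crazy-idea
deployment.apps/hello-k8s created
$ kubectl delete namespace crazy-idea
namespace "crazy-idea" deleted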

Start Using Kubernetes

There are multiple options at your disposal for getting started with Kubernetes – you can either consume a Kubernetes-as-a-Service (KaaS) offering from a cloud provider or go with one of the single-host solutions that – in one way or another – simulate a cluster. If you happen to have a lot of hardware standing around, you might of course also install a cluster on your own bare metal, but that’s beyond the scope of this blog post.

  • Minikube: Runs a single-node Kubernetes cluster inside a VM. Makes day-to-day development tasks very easy, and state can be thrown away simply by purging the VM. Installation instructions on kubernetes.io.
  • k3s: Rancher’s light-weight Kubernetes variant. Although designed with hardware-constrained edge-computing devices in mind, it’s just as viable for use on your local workstation or laptop. Setup is as easy as running a script. Installation instructions on k3s.io.
  • KinD (Kubernetes-in-Docker): Most recent addition to the solutions for running Kubernetes locally. Simulates a cluster by running its components within Docker containers on a single host. Slightly more complex setup compared to k3s. Installation instructions on kind.sigs.k8s.io.
  • KaaS offerings by cloud providers: By far the most comprehensive Kubernetes experience. May be overkill if you only want to play around a little, but a great option in case you want to use Kubernetes more seriously. Google, Amazon, and Microsoft all provide KaaS products (GKE, EKS, and AKS, respectively). At the time of writing, Google grants $300 in starting credit when you sign up for their service (which is why I’ve personally gone for Google – and also because it seems only fair, since they’ve essentially created and open-sourced Kubernetes in the first place).
  • RKE (Rancher Kubernetes Engine): Simplified installation and operation of Kubernetes in a wide range of environments thanks to all components running purely in Docker. More info on Rancher’s RKE website or in this blog post of mine. (I’d been looking to install my own bare-metal Kubernetes cluster for a while, and RKE, being simple to install, was a great fit – this list was extended with RKE after I wrote a blog post on the installation and all the preparation involved.)

No matter which option you choose, as soon as your Kubernetes cluster – or “cluster”, for the local, single-host options – is ready, you’ll interact with it using the kubectl command-line tool. For example, to retrieve an overview of the nodes of your cluster, run the following (your output may look slightly different, of course):

$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
gke-awesome-k8s-default-pool-b8e496f3-0797   Ready    <none>   3h20m   v1.15.12-gke.2
gke-awesome-k8s-default-pool-b8e496f3-lw68   Ready    <none>   3h20m   v1.15.12-gke.2
gke-awesome-k8s-default-pool-b8e496f3-xjd6   Ready    <none>   3h20m   v1.15.12-gke.2
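
kubectl can inspect every object type the cluster knows about. To tie this back to the Namespace abstraction mentioned earlier, listing the namespaces of a fresh cluster shows the ones Kubernetes creates on its own (the exact list may vary between distributions and versions):

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   3h20m
kube-node-lease   Active   3h20m
kube-public       Active   3h20m
kube-system       Active   3h20m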

The kubectl client is the most fundamental tool for interacting with any Kubernetes cluster, and we’ll make use of it in many other articles on this blog.

Wrap-Up

Kubernetes is the de facto standard API for running containerized workloads on a cluster of machines. Its declarative configuration approach is incredibly powerful because it allows developers and operators alike to think only in terms of the desired target state, rather than also having to think about all the steps necessary to get there. Kubernetes also helps developers and organizations put the advantages of the microservice architecture into practice, enabling them to harness the full power of their microservices. For these reasons, we’ll use Kubernetes as the orchestrator for the containerized workloads of the microservices we’re going to build.