Creating a bare-metal cluster from scratch

In the previous section, we looked at running Kubernetes on cloud providers, which is the dominant deployment story for Kubernetes. But there are strong use cases for running Kubernetes on bare metal. I won't focus on the hosted versus on-premises distinction here; that is yet another dimension, and if you already manage a lot of servers on-premises, you are in the best position to decide.

Use cases for bare metal

Bare-metal clusters are a bear, especially if you manage them yourself. There are companies that provide commercial support for bare-metal Kubernetes clusters, such as Platform9, but the offerings are not mature yet. A solid open source option is Kubespray, which can deploy industrial-strength Kubernetes clusters on bare metal, AWS, GCE, Azure, and OpenStack.

Here are some use cases where it makes sense:

  • Price: If you already manage large-scale bare-metal clusters, it may be much cheaper to run Kubernetes clusters on your physical infrastructure.
  • Low network latency: If you must have low latency between your nodes, then the VM overhead might be too much.
  • Regulatory requirements: If you must comply with regulations, you may not be allowed to use cloud providers.
  • You want total control over hardware: Cloud providers give you many options, but you may have special needs.

When should you consider creating a bare-metal cluster?

The complexities of creating a cluster from scratch are significant. A Kubernetes cluster is not a trivial beast. There is a lot of documentation on the web on how to set up bare-metal clusters, but as the whole ecosystem moves forward, many of these guides get out of date quickly.

You should consider going down this route if you have the operational capability to debug problems at every level of the stack. Most of the problems will probably be networking-related, but filesystems and storage drivers can bite you too, as well as general incompatibilities and version mismatches between components such as Kubernetes itself, Docker (or other runtimes, if you use them), images, your OS, your OS kernel, and the various add-ons and tools you use. If you opt for using VMs on top of bare metal, then you add another layer of complexity.

Understanding the process

There is a lot to do. Here is a list of some of the concerns you'll have to address:

  • Implementing your own cloud-provider interface or sidestepping it
  • Choosing a networking model and how to implement it (CNI plugin, direct compile)
  • Whether or not to use network policies (see the example after this list)
  • Selecting images for system components
  • Security model and SSL certificates
  • Admin credentials
  • Templates for components such as the API server, controller manager, and scheduler
  • Cluster services: DNS, logging, monitoring, and GUI
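
To make the network-policy decision concrete: if your chosen CNI plugin enforces network policies (Calico does, for example), a minimal default-deny policy for a namespace looks like the following sketch, where the namespace name is a placeholder:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: production     # placeholder namespace
    spec:
      podSelector: {}           # an empty selector matches every pod in the namespace
      policyTypes:
        - Ingress               # no ingress rules are listed, so all ingress is denied

Note that a policy like this is a no-op on a cluster whose network plugin doesn't implement the NetworkPolicy API, which is exactly why the choice of networking model comes first.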

I recommend the following guide from the Kubernetes site to get a deeper understanding of what it takes to create an HA cluster from scratch using kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/.
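
To give you a taste of that guide, here is a rough sketch of the stacked-etcd flow with kubeadm. The load balancer address is a placeholder, and the token, hash, and certificate key are values that kubeadm init prints for you:

    # On the first control plane node; the endpoint is a load balancer
    # sitting in front of all control plane nodes (placeholder address)
    sudo kubeadm init \
        --control-plane-endpoint "LOAD_BALANCER_DNS:6443" \
        --upload-certs

    # On each additional control plane node, using the values
    # printed by kubeadm init
    sudo kubeadm join LOAD_BALANCER_DNS:6443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <certificate-key>

    # On each worker node (same join command, minus the
    # control plane flags)
    sudo kubeadm join LOAD_BALANCER_DNS:6443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>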

Using virtual private cloud infrastructure

If your use case fits the bare-metal profile, but you don't have the skilled manpower or the inclination to deal with the infrastructure challenges of bare metal, you have the option of using a private cloud such as OpenStack with Stackube: https://github.com/openstack/stackube. If you want to aim a little higher up the abstraction ladder, Mirantis offers a cloud platform built on top of OpenStack and Kubernetes.

Let's review a few more tools for building Kubernetes clusters on bare metal. Some of these tools support OpenStack as well.

Building your own cluster with Kubespray

Kubespray is a project for deploying production-ready highly available Kubernetes clusters. It uses Ansible and can deploy Kubernetes on a large number of targets, such as:

  • AWS
  • GCE
  • Azure
  • OpenStack
  • vSphere
  • Packet (bare metal)
  • Oracle Cloud Infrastructure (experimental)

And, of course, it can also deploy to plain bare metal.

It is highly customizable and supports multiple operating systems for the nodes, multiple CNI plugins for networking, and multiple container runtimes.

If you want to test it locally, it can deploy to a multi-node vagrant setup too. If you're an Ansible fan, Kubespray may be a great choice for you.
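
To give you a sense of the workflow, here is a sketch based on Kubespray's documented quick start. The inventory name and node IPs are placeholders for your own environment:

    # Get Kubespray and its Ansible dependencies
    git clone https://github.com/kubernetes-sigs/kubespray.git
    cd kubespray
    pip install -r requirements.txt

    # Copy the sample inventory and generate a hosts file
    # from your node IPs (placeholders)
    cp -rfp inventory/sample inventory/mycluster
    declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
    CONFIG_FILE=inventory/mycluster/hosts.yaml \
        python3 contrib/inventory_builder/inventory.py ${IPS[@]}

    # Run the playbook against the inventory to deploy the cluster
    ansible-playbook -i inventory/mycluster/hosts.yaml \
        --become --become-user=root cluster.yml

From there, customization mostly means editing the group variables in your inventory before running the playbook.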

Building your cluster with KRIB

KRIB is a Kubernetes installer for bare-metal clusters provisioned using Digital Rebar Provision (DRP). DRP is a single Golang executable that takes care of much of the heavy lifting of bare-metal provisioning, such as DHCP, PXE/iPXE booting, and workflow automation. KRIB drives kubeadm to ensure it ends up with a valid Kubernetes cluster. The process involves:

  • Server discovery
  • Installation of the KRIB Content and Certificate Plugin
  • Starting the cluster deployment
  • Monitoring the deployment
  • Accessing the cluster

See https://kubernetes.io/docs/setup/production-environment/tools/krib/ for more details.
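
The last step, accessing the cluster, ends up looking the same as with any kubeadm-based installer: once you have retrieved the admin kubeconfig that KRIB generates and stores in DRP (how you fetch it depends on your setup; the filename here is a placeholder), you point kubectl at it:

    # admin.conf is the kubeconfig generated during cluster creation
    export KUBECONFIG=$PWD/admin.conf
    kubectl get nodes
    kubectl cluster-info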

Building your cluster with RKE

Rancher Kubernetes Engine (RKE) is a friendly Kubernetes installer that can install Kubernetes on bare-metal as well as virtualized servers. RKE aims to address the complexity of installing Kubernetes. It is open source and has great documentation. Check it out here: http://rancher.com/docs/rke/v0.1.x/en/.
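
To illustrate, RKE is driven by a single cluster.yml file (which rke config can generate interactively). A minimal sketch for a three-node cluster might look like this, where the addresses, user, and SSH key path are placeholders:

    # cluster.yml - minimal RKE configuration sketch
    nodes:
      - address: 10.0.0.1          # placeholder IPs
        user: ubuntu
        role: [controlplane, etcd]
      - address: 10.0.0.2
        user: ubuntu
        role: [worker]
      - address: 10.0.0.3
        user: ubuntu
        role: [worker]
    ssh_key_path: ~/.ssh/id_rsa

Running rke up in the directory containing this file provisions the cluster over SSH and writes out a kubeconfig file you can use with kubectl.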

Bootkube

Bootkube is very interesting too. It can launch self-hosted Kubernetes clusters. Self-hosted means that most of the cluster components run as regular pods and can be managed, monitored, and upgraded using the same tools and processes you use for your containerized applications. There are significant benefits to this approach that simplify the development and operation of Kubernetes clusters.

It is a Kubernetes incubator project, but it doesn't seem very active. Check it out here: https://github.com/kubernetes-incubator/bootkube.
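
If you want to experiment with it anyway, the flow from the project's README is to render the cluster assets and then run a temporary bootstrap control plane that pivots to the self-hosted one. This is only a sketch; the asset directory and API server address are placeholders, and render takes additional flags that I'm omitting here:

    # Render the cluster assets: manifests, TLS certificates, kubeconfig
    bootkube render --asset-dir=my-cluster \
        --api-servers=https://<node-ip-or-lb>:6443

    # On the first node, start the temporary bootstrap control plane;
    # it hands off to the self-hosted control plane and then exits
    bootkube start --asset-dir=my-cluster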

In this section, we considered the option of building a bare-metal Kubernetes cluster. We looked into the use cases that require it and highlighted the challenges and difficulties.