As mentioned here, the series about Kubernetes goes right ahead as planned.
While the main topic of this series will be the use of DevOps methods for managing Kubernetes, a few basic things need to be looked at first, simply to get started properly. As previously mentioned, Kubernetes relies heavily on Docker as its container platform. Docker itself, however, needs an operating system to run on, which can be most Linux distributions, macOS, and even Windows. Kubernetes, which is essentially an orchestration platform on top of Docker, can also run on all of those operating systems. Since I am a big fan of open source software (OSS), though, I'll focus on Linux only.
So, should I build my Kubernetes environment on a landscape of virtual machines (VMs) or on bare metal, i.e. on operating systems installed directly on physical servers, either on premises or in shared data centers?
Well, that depends a lot on a variety of things. The short answer will probably be VMs, for the simple reason that many companies already rely heavily on virtual environments. Whether you run your infrastructure on premises, i.e. somewhere in your own data center using hypervisors like vSphere, XenServer, etc., or in cloud environments like those provided by Amazon, Google or Hetzner, you are already using virtual machines in either case.
One thing to be aware of with hypervisors is that, to achieve High Availability (HA) and good performance, VMs are typically set up to roam more or less freely across the hardware. For one, hardware might fail, in which case VMs will be restarted on a different host if configured to do so. Another trigger might be a lack of resources on one host, where VMs might be moved to a different host with more available capacity. Both scenarios may very well be disruptive to the state of your Kubernetes pods (containers). When a VM is moved to a different host, its network connections will likely be briefly severed while it is attached to a different virtual switch, or whatever software-defined network applies in your environment.
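One way to soften the impact of such VM migrations is to tell Kubernetes how much voluntary disruption a workload tolerates, using a PodDisruptionBudget. A minimal sketch follows; the name and labels are purely illustrative, not taken from any real setup:

```yaml
# Keep at least 2 replicas of the (hypothetical) app "my-app" running
# during voluntary disruptions such as node drains before a VM move.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
```

Note that a PodDisruptionBudget only protects against voluntary disruptions (e.g. draining a node); a hypervisor moving a VM underneath Kubernetes without a drain is invisible to it.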
Another thing is performance and troubleshooting. As long as everything works well (enough), one probably does not care too much, but the use of VMs adds a few extra layers on various ends. For instance, instead of writing to a file system directly on a server's attached local hard drive or RAID, virtual disk files living on shared storage (SAN, NAS) are used. Depending on the hypervisor in question, those too might move around frequently. That alone can get you scratching your head at times. Network traffic flows through software-defined networks atop the physical network devices, with similar effects. In cloud environments, it will be almost impossible to know the details of these layouts, since you don't have access to them. The good news is that you can (and must) rely on your cloud provider's support and documentation to cover that end. Usually that works well enough.
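To get a first feel for what those extra storage layers cost, a crude synchronous-write probe can be run both inside a VM and on comparable bare metal and the reported throughput compared. This is only a rough sketch, not a proper benchmark, and the file path is arbitrary:

```shell
# Crude probe of synchronous write latency (illustrative, not a benchmark).
# oflag=dsync forces every block through the whole (possibly virtual)
# storage stack, so the throughput dd reports includes hypervisor and
# SAN/NAS overhead, not just the page cache.
dd if=/dev/zero of=/tmp/latency-probe bs=1M count=64 oflag=dsync
rm -f /tmp/latency-probe
```

For serious measurements, a dedicated tool such as fio would be the better choice; the point here is only that the same command can yield very different numbers depending on how many layers sit between you and the physical disk.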
To cover the other end: bare metal environments lack those advantages, but sometimes also the problems that come with them. Bare metal means exactly that: one or more physical servers with an operating system installed, hosting all the applications and services needed.
Your Kubernetes orchestrator has full control of the available static resources, and you may find it easier to reason about things. If you are just starting to learn Kubernetes & friends (or Docker Swarm, for that matter), you can make use of spare servers to explore all the magic. However, getting hold of bare metal is usually a matter of time: procurement often takes weeks if not months from order to installation and final integration into all systems. Not to mention the limited hardware lifetime. So the learning part especially is done much more easily on VMs, be it a small setup on your laptop or even at Hetzner for rather little money.
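For the small laptop setup mentioned above, a tool such as kind (Kubernetes in Docker) can spin up a multi-node learning cluster in minutes. A minimal sketch of a kind cluster configuration follows, assuming kind and Docker are installed; the file name is my own choice:

```yaml
# kind-cluster.yaml -- a small multi-node cluster for learning purposes.
# One control-plane node and two workers, all running as Docker containers.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

The cluster would then be created with `kind create cluster --config kind-cluster.yaml` and torn down again with `kind delete cluster`, which makes it cheap to experiment and start over.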
My personal experience says VMs are the winner, but then again it all depends on your requirements and resources.