Introduction

This blog post provides a high-level introduction to Kubernetes (K8s) through the lens of a network engineer. We’ll cover some of the fundamental building blocks of K8s, some of the relevant networking components, and why all of this matters.

Background

Kubernetes is designed to automate the deployment, scaling and management of containerised applications. It was originally created by Google, released as an open-source platform, and is now maintained by the Cloud Native Computing Foundation (CNCF). While K8s is a very popular tool for orchestrating containers at scale, there are alternatives such as Docker Swarm, HashiCorp Nomad, AWS ECS and Apache Mesos, to name a few.

Why should I care about Kubernetes?

With the widespread use of Kubernetes for large-scale application workload orchestration, network engineers increasingly need to understand, manage, configure and troubleshoot K8s clusters, and integrate them with our on-prem and cloud networks.

Fundamentals

Pods

A pod is the smallest deployable unit in K8s, inside which we run one or more containers (think Docker). A pod is a single instance of an application, for example an app that checks the health of our network devices, or a microservice that handles logins to the application. It’s worth remembering that Kubernetes doesn’t deploy containers directly; it manages pods, and the containers live inside those pods. When an app needs to scale, perhaps due to increased load from new users, additional pods are created.

Each pod has its own unique IP address. If we have multiple containers inside a pod, they share the same IP and MAC address and can speak to each other via **localhost**. This is because they share the same network namespace (a Linux concept somewhat analogous to a VRF). They also share storage, in the form of volumes. By default, pods can speak to any other pod in the same cluster without restriction (we’ll come onto what a cluster is shortly). We’ll also cover how we can make this default behaviour a bit more secure.

[Diagram: pod.svg]
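
To make the shared network namespace idea concrete, here’s a minimal sketch of a two-container pod (the pod name and images are purely illustrative): the sidecar container reaches the web container over localhost, without ever needing the pod’s IP.

```yaml
# Hypothetical two-container pod: both containers share the same
# network namespace, so they can reach each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:1.27              # serves HTTP on port 80
    ports:
    - containerPort: 80
  - name: sidecar
    image: curlimages/curl:8.8.0
    # Polls the web container over localhost every 5 seconds.
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null && echo web is up; sleep 5; done"]
```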

There are a few different ways to create a pod. Here’s a simple example of how we can declaratively define one inside what we call a manifest file, written in YAML. You can also see how we utilise the kubectl tool to interact with our cluster.

Pod Manifest File & Verification Commands
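
As a minimal sketch, a pod manifest might look something like the following (the pod name, label and image are placeholders), together with the kubectl commands we’d typically run to apply it and verify the result.

```yaml
# pod.yaml - a minimal pod manifest (names and image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: device-health-checker
  labels:
    app: device-health-checker
spec:
  containers:
  - name: checker
    image: nginx:1.27          # placeholder image for the health-check app
    ports:
    - containerPort: 80
```

```
$ kubectl apply -f pod.yaml                     # create the pod from the manifest
$ kubectl get pods -o wide                      # list pods, including their IPs and nodes
$ kubectl describe pod device-health-checker    # detailed state, events and conditions
$ kubectl delete -f pod.yaml                    # clean up
```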

Nodes

A pod runs on a Kubernetes node. This can either be a physical server, if our K8s cluster is running on-prem, or a VM running in the cloud, such as an EC2 instance if our cluster is running in AWS. Usually we’ll have multiple nodes in a cluster to give us some resiliency: if one node goes down, our applications can simply be moved to another node that’s still healthy.

There are two types of node: worker and control-plane (historically called master). Worker nodes are where the pods that make up our applications usually run, while the control-plane node typically hosts the internal Kubernetes components and manages the other nodes.

[Diagram: module_03_nodes.svg]

We can utilise kubectl to check how many nodes we have in our cluster and get related info.

```
$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   62m   v1.33.0
node01         Ready    <none>          61m   v1.33.0
```
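
To tie pods and nodes together, the wide pod listing shows which node each pod has been scheduled onto, and describing a node (node01 from the output above) reveals its addresses, capacity and the pods it is currently running. For example:

```
$ kubectl get pods -o wide       # the NODE column shows where each pod is running
$ kubectl describe node node01   # node addresses, capacity and the pods scheduled on it
```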

Clusters