
Introduction

This blog post runs through how you can automate managing your Kubernetes cluster in AWS using EKS Auto Mode, as well as how to deploy Nautobot, a network inventory application that can help you automate your network management.

Nautobot

If you haven’t come across Nautobot yet, it’s an open-source network source of truth and network automation platform designed to streamline your network inventory and IPAM management.

Gone are the days of tracking all of your on-prem and cloud network assets in spreadsheets, and the pain of remembering to keep them up to date. Nautobot also provides a foundation for creating custom automation workflows and applications, including self-service provisioning, automated configuration compliance and lifecycle management. The pain of dealing with overlapping VPC CIDR ranges in your cloud environment is also a thing of the past, as you can track, plan and allocate unused ranges for new cloud environments.
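As a quick taste of what that looks like in practice, here’s a rough sketch of allocating a new prefix through Nautobot’s REST API with curl. The hostname, token and prefix values are placeholders for your own instance, and the exact payload fields may vary slightly between Nautobot versions:

# Create a new prefix in Nautobot via its REST API (placeholder URL, token and CIDR)
curl -s -X POST "https://nautobot.example.com/api/ipam/prefixes/" \
  -H "Authorization: Token $NAUTOBOT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prefix": "10.20.0.0/16", "status": "Active", "description": "New dev VPC CIDR"}'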

If you’re a NetBox fan and wondering why we aren’t using that instead, it’s because it doesn’t yet natively support cloud network resources.

EKS Auto-Mode

AWS EKS Auto Mode is a fully managed solution that automates Kubernetes cluster management for compute, storage, and networking. It simplifies cluster operations by provisioning infrastructure automatically, selecting the optimal compute instances for each workload, scaling resources dynamically, and continuously optimising costs. EKS Auto Mode also handles operating system patching and manages core add-ons such as the AWS VPC CNI, the AWS Load Balancer (ALB) Ingress controller and the EBS storage controller, to name a few. All of these you’d previously have to install, manage and upgrade yourself.

With EKS Auto Mode, routine cluster operations are handled for you, allowing faster deployment of new workloads or seamless migration of existing clusters. This removes the overhead of managing versioning, upgrades, and security patches, so you can focus on building applications instead of managing infrastructure. Auto Mode also improves cost efficiency by leveraging Karpenter to automatically right-size compute capacity, migrating workloads to appropriately sized instances when needed. In addition, it manages the entire cluster lifecycle, ensuring your environments remain secure and up to date.

Deployment Steps

Initial Cluster Setup

Firstly, we’ll need a Kubernetes cluster running in Auto Mode. We’ll be using a public Terraform module here to simplify the setup; it creates the required security groups, IAM roles and policies. We specify the name of our cluster, the Kubernetes version we need, the VPC our cluster will run in and the subnets the nodes will be deployed into.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  name               = "nautobot-eks"
  kubernetes_version = "1.33"

  endpoint_public_access = true

  enable_cluster_creator_admin_permissions = true

  # Enabling compute_config with a built-in node pool turns on EKS Auto Mode
  compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  # Replace these with the VPC and subnets for your own environment
  vpc_id     = "vpc-0848babd08d9b4aab"
  subnet_ids = ["subnet-08753b735889c24e9", "subnet-08eadc055b888e610"]

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
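With the module defined, the standard Terraform workflow gets the cluster built. Expect the apply to take a while, as EKS control planes typically take around 10–15 minutes to provision:

**$ terraform init**
**$ terraform plan**
**$ terraform apply**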

The usual reminder applies: keep your production Kubernetes clusters private, and keep access in line with least-privilege principles.

To access our cluster we’ll need to set up our kubeconfig file via the AWS CLI command below. This sorts out authentication and pulls in details like the API server endpoint, after which we can run our usual kubectl commands against the cluster. Don’t forget to install the AWS CLI and kubectl on your machine, if you don’t have them already.

**$ aws eks update-kubeconfig --region eu-west-2 --name nautobot-eks**
Added new context arn:aws:eks:eu-west-2:3838934402359:cluster/nautobot-eks to /Users/suhaib.saeed/.kube/config

At this point we won’t see any nodes in our Kubernetes cluster; Auto Mode only provisions them once we create workloads that actually need compute.

**$ kubectl get pods**
No resources found in default namespace.
**$ kubectl get nodes**
No resources found
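If you want to see Auto Mode spring into action before we get to Nautobot itself, one quick way is to schedule a small throwaway workload and watch a node get provisioned for it. The nginx deployment here is purely illustrative; after a minute or two you should see a node appear, and you can delete the deployment once you’ve had a look:

**$ kubectl create deployment nginx-test --image=nginx**
**$ kubectl get nodes -w**
**$ kubectl delete deployment nginx-test**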