k8s-tf-helm

What exactly is bootstrapping a cloud-native Kubernetes cluster, why is it good practice to have it as part of your deployment workflow, and what does a production-ready bootstrapping workflow look like? The workflow consists of Terraform, Kubernetes and Helm, bringing the three tools together and combining them into one automated deployment.

Bootstrapping

Bootstrapping is simply bringing objects together to be deployed as one. For example, when working with an on-prem Kubernetes cluster, you need to bootstrap its components, such as kube-apiserver, kube-scheduler and so on. Here, though, the focus is on what happens once our vanilla Kubernetes cluster is up and running: bootstrapping the necessary applications so we automatically end up with a production-ready cluster. All in one go!

Workflow Deployment

Our objective is to deploy an Azure Kubernetes cluster, create a Kubernetes namespace, and roll out kube-prometheus-stack, a Helm chart built by the Prometheus community. We will wrap this workflow so it is deployed with Terraform. At the moment, you may be deploying these components individually, one by one. How about we save you some clicking and roll it all out in one go?

Kubernetes Cluster

Let’s set up a low-cost Azure Kubernetes cluster for testing purposes; in practice, this is where your production cluster configuration would go. Although this example is Azure-oriented, it can be swapped for Amazon Elastic Kubernetes Service, Google Kubernetes Engine, or any other cloud-native Kubernetes cluster.

aks.tf

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }

  # Remote state in an Azure Storage Account; fill these in for your environment.
  backend "azurerm" {
    storage_account_name = ""
    resource_group_name  = ""
    container_name       = ""
    key                  = ""
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "bootstrapped-aks"
  location = "UKWest"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks01"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks01"
  kubernetes_version  = "1.22.6"

  identity {
    type = "SystemAssigned"
  }

  default_node_pool {
    name       = "aks01np"
    node_count = 1
    vm_size    = "Standard_B2s"
    max_pods   = 30
  }
}
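
If you also want to pull the new cluster's kubeconfig straight out of Terraform, a minimal sketch is an output based on the cluster's kube_config_raw attribute, marked sensitive so it stays out of plain CLI output (the output name here is just an illustration):

output "kube_config" {
  # Raw kubeconfig for the newly created AKS cluster
  value     = azurerm_kubernetes_cluster.aks.kube_config_raw
  sensitive = true
}

You can then fetch it on demand with terraform output -raw kube_config.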

Kubernetes Objects

The magic that lets your Terraform deployment connect to the newly created cluster is the configuration below. Once connected, you can start deploying pods, deployments, and services via Terraform (see the sketch after the namespace resource below). In our case, though, we will solely create a new namespace, kube-prometheus-stack, into which we will deploy the Helm chart.

kubernetes.tf

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}

resource "kubernetes_namespace" "kube-prometheus-stack" {

  depends_on = [azurerm_kubernetes_cluster.aks]

  metadata {
    name = "kube-prometheus-stack"
  }
}
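
As mentioned above, the same Kubernetes provider can also manage workloads directly. A minimal, hypothetical sketch of a kubernetes_deployment resource (the example name and nginx image are placeholders, not part of this workflow) could look like this:

resource "kubernetes_deployment" "example" {
  metadata {
    name      = "example-app"
    # Reuse the namespace created above, purely for illustration
    namespace = kubernetes_namespace.kube-prometheus-stack.metadata.0.name
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "example-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "example-app"
        }
      }

      spec {
        container {
          name  = "example-app"
          image = "nginx:1.25"
        }
      }
    }
  }
}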

Helm Chart

Once we have our Kubernetes namespace in place, we can start deploying Helm charts into it. In our case, as mentioned, that will be the kube-prometheus-stack Helm chart.

It’s important to highlight the set blocks. These let you override configuration values of the chart being deployed. For testing purposes, we will override the default Grafana login password to HeyItsMePassword! (don’t write your passwords in a git repo; a sketch of reading the value from a variable instead follows after the helm.tf snippet).

helm.tf

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}

resource "helm_release" "kube-prometheus-stack" {

  depends_on = [kubernetes_namespace.kube-prometheus-stack]

  name       = "prometheus-community"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = "kube-prometheus-stack"

  set {
    name  = "kubeTargetVersionOverride"
    value = "1.22.6"
  }

  set {
    name  = "grafana.adminPassword"
    value = "HeyItsMePassword!"
  }
}
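
To keep the password itself out of the repository, one option is to pass it in as a sensitive Terraform variable and reference that in the set block; a rough sketch (the variable name is illustrative) might be:

variable "grafana_admin_password" {
  description = "Grafana admin password, supplied at plan/apply time"
  type        = string
  sensitive   = true
}

The set block above would then use value = var.grafana_admin_password, and the actual secret can be supplied via the TF_VAR_grafana_admin_password environment variable or a -var flag at apply time.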

After Terraform completes successfully, you can connect to your cluster and you’ll see our kube-prometheus-stack namespace listed, and inside it the various pod, deployment, and service objects that were created as part of the Helm chart deployment. With the Grafana service up and running, we can log in using our overridden password value and admin as the username.

You can port-forward the Grafana service and start exploring Grafana locally.

kubectl port-forward --namespace kube-prometheus-stack svc/prometheus-community-grafana 8080:80

(screenshot: Grafana)

Great, now we’re able to roll out a Kubernetes cluster alongside a monitoring setup, all in one terraform apply!

Hopefully, this has given you an understanding of how you can combine Kubernetes cluster creation with rolling out your applications, all in one go. The Grafana deployment was an example, but you can set up any Helm charts your use case needs in order to have a production-ready Kubernetes cluster.
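
For instance, a hedged sketch of adding another chart to the same workflow, using the publicly documented ingress-nginx chart and repository purely as an illustration, could look like this:

resource "helm_release" "ingress-nginx" {

  depends_on = [azurerm_kubernetes_cluster.aks]

  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}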