
Module: eks-node-group

Terraform module to provision an EKS Managed Node Group for Amazon Elastic Kubernetes Service (EKS).

Instantiate it multiple times to create several node groups with different settings, such as GPU support, EC2 instance types, or autoscaling parameters, as sketched below.
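For example, a single configuration might define one general-purpose node group and one GPU node group. This is a minimal sketch, not verified code: the module names, instance types, and sizing are assumptions, and the referenced eks_cluster, subnets, and label modules follow the Example Code later in this README.

module "node_group_general" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  attributes     = ["general"]
  cluster_name   = module.eks_cluster.eks_cluster_id
  subnet_ids     = module.subnets.public_subnet_ids
  instance_types = ["t3.medium"]
  min_size       = 1
  max_size       = 4

  context = module.label.context
}

module "node_group_gpu" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  attributes     = ["gpu"]
  cluster_name   = module.eks_cluster.eks_cluster_id
  subnet_ids     = module.subnets.public_subnet_ids
  instance_types = ["g4dn.xlarge"]
  ami_type       = "AL2_x86_64_GPU" # EKS GPU-optimized AMI variant
  min_size       = 0
  max_size       = 2

  context = module.label.context
}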

IMPORTANT: When SSH access is enabled without specifying a source security group, this module provisions node group nodes that are globally accessible on the SSH port (22). AWS normally recommends that no security group allow unrestricted ingress to port 22.
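If you do need SSH access, you can scope it to a known security group instead. A minimal sketch, assuming a pre-existing (hypothetical) aws_security_group.bastion resource; the input names below follow recent releases of this module, so verify them against the inputs documentation:

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  cluster_name = module.eks_cluster.eks_cluster_id
  subnet_ids   = module.subnets.public_subnet_ids

  # EC2 key pair enabling SSH access to the nodes (hypothetical name)
  ec2_ssh_key_name = ["my-key-pair"]

  # Restrict SSH ingress to this security group instead of 0.0.0.0/0
  ssh_access_security_group_ids = [aws_security_group.bastion.id]

  context = module.label.context
}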

Introduction

This module creates an EKS Managed Node Group for an EKS cluster. It assumes you have already created an EKS cluster, but you can create the cluster and the node group in the same Terraform configuration. See our full-featured root module (a.k.a. component) eks/cluster for an example of how to do that.

Launch Templates

This module always uses a launch template to create the node group. You can create your own launch template and pass in its ID, or else this module will create one for you.

By default, EKS leaves existing nodes untouched when a launch template is updated; only new instances added to the node group receive the changes in the new launch template version. In contrast, when the launch template changes, this module can immediately create a new node group from the new launch template to replace the old one.

See the inputs create_before_destroy and immediately_apply_lt_changes for details about how to control this behavior.
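As a rough sketch, both inputs are booleans on the module (see the inputs documentation for their defaults and how they interact):

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  # Create the replacement node group before destroying the old one,
  # so capacity remains available during replacement
  create_before_destroy = true

  # Replace the node group as soon as the launch template changes,
  # rather than leaving existing nodes running the old version
  immediately_apply_lt_changes = true

  # ... other configuration ...
}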

Operating system differences

Currently, EKS supports four operating systems: Amazon Linux 2, Amazon Linux 2023, Bottlerocket, and Windows Server. This module supports all four, but support for detailed configuration of the nodes varies by OS. The four inputs:

  1. before_cluster_joining_userdata
  2. kubelet_additional_options
  3. bootstrap_additional_options
  4. after_cluster_joining_userdata

are fully supported on Amazon Linux 2 and Windows, where they take advantage of the bootstrap.sh script supplied on those AMIs. None of these inputs are supported on Bottlerocket. On AL2023, only the first two are supported.

Note that for all OSes, you can supply the complete userdata contents, which will be untouched by this module, via userdata_override_base64.
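For example, on Amazon Linux 2 the first two inputs might look like the following sketch. The command and kubelet flag are placeholders, and the list-of-strings form of these inputs reflects recent releases; verify against the inputs documentation.

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  ami_type = "AL2_x86_64"

  # Shell commands run by userdata before the node joins the cluster
  before_cluster_joining_userdata = ["echo 'running before joining the cluster'"]

  # Extra flags appended to the kubelet command line via bootstrap.sh
  kubelet_additional_options = ["--kube-reserved cpu=100m,memory=256Mi"]

  # ... other configuration ...
}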

Usage

Major Changes (breaking and otherwise)

The v3.0.0 release of this module added support for Amazon Linux 2023 (AL2023) and made some breaking changes. Please see the release notes for details.

The v2.0.0 (a.k.a. v0.25.0) release of this module introduced major breaking changes and new features. Please see the migration document for details.

For a complete example, see examples/complete.

For automated tests of the complete example using bats and Terratest (which tests and deploys the example on AWS), see test.

Sources of Information

  • The code examples below are manually updated and have a tendency to fall out of sync with actual code, particularly with respect to usage of other modules. Do not rely on them.
  • The documentation on this page about this module's inputs, outputs, and compliance is all automatically generated and is up-to-date as of the release date. After the code itself, this is your best source of information.
  • The code in examples/complete is automatically tested before every release, so that is a good place to look for verified example code. Keep in mind, however, it is code for testing, so it may not represent average use cases or best practices.
  • Of course, for how to use the other modules, the READMEs and examples/complete directories in their GitHub repos are more authoritative than this README.

Example Code

provider "aws" {
region = var.region
}

module "label" {
source = "cloudposse/label/null"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"

namespace = var.namespace
name = var.name
stage = var.stage
delimiter = var.delimiter
attributes = ["cluster"]
tags = var.tags
}

locals {
# Prior to Kubernetes 1.19, the usage of the specific kubernetes.io/cluster/* resource tags below are required
# for EKS and Kubernetes to discover and manage networking resources
# https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#base-vpc-networking
tags = { "kubernetes.io/cluster/${module.label.id}" = "shared" }
}

module "vpc" {
source = "cloudposse/vpc/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "1.x.x"

cidr_block = "172.16.0.0/16"

tags = local.tags
context = module.label.context
}

module "subnets" {
source = "cloudposse/dynamic-subnets/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "2.x.x"

availability_zones = var.availability_zones
vpc_id = module.vpc.vpc_id
igw_id = [module.vpc.igw_id]
ipv4_cidr_block = [module.vpc.vpc_cidr_block]
nat_gateway_enabled = true
nat_instance_enabled = false

tags = local.tags
context = module.label.context
}

module "eks_cluster" {
source = "cloudposse/eks-cluster/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "4.x.x"

vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids

kubernetes_version = var.kubernetes_version
oidc_provider_enabled = true

context = module.label.context
}

module "eks_node_group" {
source = "cloudposse/eks-node-group/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "3.x.x"

instance_types = [var.instance_type]
subnet_ids = module.subnets.public_subnet_ids
min_size = var.min_size
max_size = var.max_size
cluster_name = module.eks_cluster.eks_cluster_id
create_before_destroy = true
kubernetes_version = var.kubernetes_version == null || var.kubernetes_version == "" ? [] : [var.kubernetes_version]

# Enable the Kubernetes cluster auto-scaler to find the auto-scaling group
cluster_autoscaler_enabled = var.autoscaling_policies_enabled

context = module.label.context

# Ensure the cluster is fully created before trying to add the node group
module_depends_on = [module.eks_cluster.kubernetes_config_map_id]
}

Windows Managed Node Groups

Windows managed node groups have a few prerequisites.

  • Your cluster must contain at least one Linux-based worker node
  • Your EKS cluster's IAM role must have the AmazonEKSVPCResourceController and AmazonEKSClusterPolicy policies attached
  • Your cluster must have a ConfigMap named amazon-vpc-cni with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: amazon-vpc-cni
  namespace: kube-system
data:
  enable-windows-ipam: "true"
  • Windows nodes will automatically be tainted:

kubernetes_taints = [{
  key    = "WINDOWS"
  value  = "true"
  effect = "NO_SCHEDULE"
}]
  • Any pods that target Windows will need the following attributes set in their manifest:

nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64
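Putting the prerequisites together, a minimal sketch of a Windows node group instantiation. The module name, instance type, and sizing are assumptions; the AMI type shown is one of the standard EKS Windows AMI types.

module "eks_windows_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  attributes     = ["windows"]
  cluster_name   = module.eks_cluster.eks_cluster_id
  subnet_ids     = module.subnets.public_subnet_ids
  instance_types = ["t3.xlarge"]
  min_size       = 1
  max_size       = 2

  # One of the standard EKS Windows AMI types
  ami_type = "WINDOWS_CORE_2022_x86_64"

  context = module.label.context
}

Because the nodes carry the WINDOWS taint shown above, pods that target them also need a matching toleration in addition to the nodeSelector.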

For more information on Windows support in EKS, see https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html