Component: eks/cluster
This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups and Fargate profiles.
This component should only be deployed after logging into AWS via federated login with SAML (e.g. GSuite) or by assuming an IAM role (e.g. from a CI/CD system). It should not be deployed by anyone logged in via AWS SSO. The reason is that on initial deployment, the EKS cluster will be owned by the assumed role that provisioned it, and AWS SSO roles are ephemeral (replaced on every configuration change). If the owner were an AWS SSO role, we would risk losing access to the EKS cluster once the ARN of the AWS SSO role eventually changes.
Usage
Stack Level: Regional
Here's an example snippet for how to use this component.
This example expects the Cloud Posse Reference Architecture Identity and Network designs deployed for mapping users to EKS service roles and granting access in a private network. In addition, this example has the GitHub OIDC integration added and makes use of Karpenter to dynamically scale cluster nodes.
For more on these requirements, see Identity Reference Architecture, Network Reference Architecture, the GitHub OIDC component, and the Karpenter component.
```yaml
components:
  terraform:
    eks/cluster:
      vars:
        enabled: true
        name: eks
        cluster_kubernetes_version: "1.27"
        vpc_component_name: "vpc"
        eks_component_name: "eks/cluster"
        # Your choice of availability zones or availability zone ids
        # availability_zones: ["us-east-1a", "us-east-1b", "us-east-1c"]
        aws_ssm_agent_enabled: true
        allow_ingress_from_vpc_accounts:
          - tenant: core
            stage: auto
          - tenant: core
            stage: corp
          - tenant: core
            stage: network
        public_access_cidrs: []
        allowed_cidr_blocks: []
        allowed_security_groups: []
        enabled_cluster_log_types:
          # Caution: enabling `api` log events may lead to a substantial increase in Cloudwatch Logs expenses.
          - api
          - audit
          - authenticator
          - controllerManager
          - scheduler
        oidc_provider_enabled: true
        # Allows GitHub OIDC role
        github_actions_iam_role_enabled: true
        github_actions_iam_role_attributes: ["eks"]
        github_actions_allowed_repos:
          - acme/infra
        # We use karpenter to provision nodes
        # See below for using node_groups
        managed_node_groups_enabled: false
        node_groups: {}
        # EKS IAM Authentication settings
        # By default, you can authenticate to EKS cluster only by assuming the role that created the cluster.
        # After the Auth Config Map is applied, the other IAM roles in
        # `primary_iam_roles`, `delegated_iam_roles`, and `sso_iam_roles` will be able to authenticate.
        apply_config_map_aws_auth: true
        availability_zone_abbreviation_type: fixed
        cluster_private_subnets_only: true
        cluster_encryption_config_enabled: true
        cluster_endpoint_private_access: true
        cluster_endpoint_public_access: false
        cluster_log_retention_period: 90
        # List of `aws-teams-roles` (in the account where the EKS cluster is deployed) to map to Kubernetes RBAC groups
        aws_team_roles_rbac:
          - aws_team_role: admin
            groups:
              - system:masters
          - aws_team_role: poweruser
            groups:
              - idp:poweruser
              - system:authenticated
          - aws_team_role: observer
            groups:
              - idp:observer
              - system:authenticated
          - aws_team_role: planner
            groups:
              - idp:observer
              - system:authenticated
          - aws_team: terraform
            groups:
              - system:masters
        # Permission sets from AWS SSO allowing cluster access
        # See `aws-sso` component.
        aws_sso_permission_sets_rbac:
          - aws_sso_permission_set: PowerUserAccess
            groups:
              - idp:poweruser
              - system:authenticated
        # Fargate Profiles for Karpenter
        fargate_profiles:
          karpenter:
            kubernetes_namespace: karpenter
            kubernetes_labels: null
        karpenter_iam_role_enabled: true
        # If you are using Karpenter, disable the legacy instance profile created by the eks/karpenter component
        # and use the one created by this component instead by setting the legacy flags to false in both components.
        # This is recommended for all new clusters.
        legacy_do_not_create_karpenter_instance_profile: false
        # All Fargate Profiles will use the same IAM Role when `legacy_fargate_1_role_per_profile_enabled` is set to false.
        # Recommended for all new clusters, but will damage existing clusters provisioned with the legacy component.
        legacy_fargate_1_role_per_profile_enabled: false
        # While it is possible to deploy add-ons to Fargate Profiles, it is not recommended. Use a managed node group instead.
        deploy_addons_to_fargate: false
        # EKS addons
        # https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
        # Configuring EKS addons: https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/
        addons:
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
          vpc-cni:
            addon_version: "v1.13.4-eksbuild.1" # set `addon_version` to `null` to use the latest version
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
          kube-proxy:
            addon_version: "v1.27.1-eksbuild.1" # set `addon_version` to `null` to use the latest version
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
          coredns:
            addon_version: "v1.10.1-eksbuild.1" # set `addon_version` to `null` to use the latest version
          # https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons
          # https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
          # https://github.com/kubernetes-sigs/aws-ebs-csi-driver
          aws-ebs-csi-driver:
            addon_version: "v1.20.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
            # If you are not using [volume snapshots](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/#how-to-use-volume-snapshots)
            # (and you probably are not), disable the EBS Snapshotter with:
            configuration_values: '{"sidecars":{"snapshotter":{"forceEnable":false}}}'
          # Only install the EFS driver if you are using EFS.
          # Create an EFS file system with the `efs` component.
          # Create an EFS StorageClass with the `eks/storage-class` component.
          # https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
          aws-efs-csi-driver:
            addon_version: "v1.5.8-eksbuild.1"
            # Set a short timeout in case of conflict with an existing efs-controller deployment
            create_timeout: "7m"
```
Amazon EKS End-of-Life Dates
When picking a Kubernetes version, be sure to review the end-of-life dates for Amazon EKS. Refer to the chart below:
cycle | release | latest | latest release | eol |
---|---|---|---|---|
1.27 | 2023-05-24 | 1.27-eks-3 | 2023-06-30 | 2024-07-01 |
1.26 | 2023-04-11 | 1.26-eks-4 | 2023-06-30 | 2024-06-01 |
1.25 | 2023-02-21 | 1.25-eks-5 | 2023-06-30 | 2024-05-01 |
1.24 | 2022-11-15 | 1.24-eks-8 | 2023-06-30 | 2024-01-01 |
1.23 | 2022-08-11 | 1.23-eks-10 | 2023-06-30 | 2023-10-11 |
1.22 | 2022-04-04 | 1.22-eks-14 | 2023-06-30 | 2023-06-04 |
1.21 | 2021-07-19 | 1.21-eks-18 | 2023-06-09 | 2023-02-15 |
1.20 | 2021-05-18 | 1.20-eks-14 | 2023-05-05 | 2022-11-01 |
1.19 | 2021-02-16 | 1.19-eks-11 | 2022-08-15 | 2022-08-01 |
1.18 | 2020-10-13 | 1.18-eks-13 | 2022-08-15 | 2022-08-15 |
*This chart was last updated on 08/04/2023 and was generated with the `eol` tool. Check for the latest updates by running `eol amazon-eks` locally or on the website directly.*
Usage with Node Groups
The `eks/cluster` component also supports managed Node Groups. To add a set of nodes to provision with the cluster, provide values for `var.managed_node_groups_enabled` and `var.node_groups`.
You can use managed Node Groups in conjunction with Karpenter. We recommend provisioning a managed node group with as many nodes as there are Availability Zones used by your cluster (typically 3) to ensure minimum support for a high-availability set of daemons, and then using Karpenter to provision additional nodes as needed.
For example:
```yaml
managed_node_groups_enabled: true
node_groups: # for most attributes, setting null here means use setting from node_group_defaults
  main:
    # availability_zones = null will create one autoscaling group
    # in every private subnet in the VPC
    availability_zones: null

    desired_group_size: 3 # number of instances to start with, must be >= number of AZs
    min_group_size: 3 # must be >= number of AZs
    max_group_size: 6

    # Can only set one of ami_release_version or kubernetes_version
    # Leave both null to use latest AMI for Cluster Kubernetes version
    kubernetes_version: null # use cluster Kubernetes version
    ami_release_version: null # use latest AMI for Kubernetes version

    attributes: []
    create_before_destroy: true
    cluster_autoscaler_enabled: true
    instance_types:
      - t3.medium
    ami_type: AL2_x86_64 # use "AL2_x86_64" for standard instances, "AL2_x86_64_GPU" for GPU instances
    block_device_map:
      # EBS volume for local ephemeral storage
      # IGNORED if legacy `disk_encryption_enabled` or `disk_size` are set!
      # Use "/dev/xvda" for most instances (without local NVMe) on most Linux distros, "/dev/xvdb" for BottleRocket
      "/dev/xvda":
        ebs:
          volume_size: 100 # size in GB
          volume_type: gp3
    kubernetes_labels: {}
    kubernetes_taints: {}
    resources_to_tag:
      - instance
      - volume
    tags: null
```
Using Addons
EKS clusters support "Addons" that can be automatically installed on a cluster. Install these addons with the `var.addons` input.
Run the following command to see all available addons, their type, and their publisher. You can also see the URL for addons that are available through the AWS Marketplace. Replace 1.27 with the version of your cluster. See Creating an addon for more details.
```shell
EKS_K8S_VERSION=1.27 # replace with your cluster version
aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION \
  --query 'addons[].{MarketplaceProductUrl: marketplaceInformation.productUrl, Name: addonName, Owner: owner, Publisher: publisher, Type: type}' --output table
```
You can see which versions are available for each addon by executing the following commands. Replace 1.27 with the version of your cluster.
```shell
EKS_K8S_VERSION=1.27 # replace with your cluster version
echo "vpc-cni:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name vpc-cni \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "kube-proxy:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name kube-proxy \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "coredns:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name coredns \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "aws-ebs-csi-driver:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name aws-ebs-csi-driver \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "aws-efs-csi-driver:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name aws-efs-csi-driver \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
```
Some add-ons accept additional configuration. For example, the `vpc-cni` addon accepts a `disableNetworking` parameter.
View the available configuration options (as JSON Schema) via the `aws eks describe-addon-configuration` command. For example:

```shell
aws eks describe-addon-configuration \
  --addon-name aws-ebs-csi-driver \
  --addon-version v1.20.0-eksbuild.1 | jq '.configurationSchema | fromjson'
```
You can then configure the add-on via the `configuration_values` input. For example:

```yaml
aws-ebs-csi-driver:
  configuration_values: '{"node": {"loggingFormat": "json"}}'
```
Configure the addons like the following example:

```yaml
# https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
# https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on
# https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/
addons:
  # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
  # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role
  # https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/#deploy-vpc-cni-managed-add-on
  vpc-cni:
    addon_version: "v1.12.2-eksbuild.1" # set `addon_version` to `null` to use the latest version
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
  kube-proxy:
    addon_version: "v1.25.6-eksbuild.1" # set `addon_version` to `null` to use the latest version
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
  coredns:
    addon_version: "v1.9.3-eksbuild.2" # set `addon_version` to `null` to use the latest version
    # Uncomment to override default replica count of 2
    # configuration_values: '{"replicaCount": 3}'
  # https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
  # https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html#csi-iam-role
  # https://github.com/kubernetes-sigs/aws-ebs-csi-driver
  aws-ebs-csi-driver:
    addon_version: "v1.19.0-eksbuild.2" # set `addon_version` to `null` to use the latest version
    # If you are not using [volume snapshots](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/#how-to-use-volume-snapshots)
    # (and you probably are not), disable the EBS Snapshotter with:
    configuration_values: '{"sidecars":{"snapshotter":{"forceEnable":false}}}'
```
Some addons, such as CoreDNS, require at least one node to be fully provisioned first. See issue #170 for more details. Set `var.addons_depends_on` to `true` to require the Node Groups to be provisioned before addons.

```yaml
addons_depends_on: true
addons:
  coredns:
    addon_version: "v1.8.7-eksbuild.1"
```
Addons may not be suitable for all use-cases! For example, if you are using Karpenter to provision nodes, these nodes will never be available before the cluster component is deployed.
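As an illustrative sketch only (not a recommendation, given the Fargate caveats noted earlier, and subject to your component version's exact behavior), a cluster that relies solely on Karpenter for nodes could run its critical addons on Fargate so they do not wait on managed node groups:

```yaml
# Hypothetical Karpenter-only cluster: no managed node groups exist at
# creation time, so addons cannot wait for them
managed_node_groups_enabled: false
addons_depends_on: false
# Run critical addons on Fargate instead (see the caveats above)
deploy_addons_to_fargate: true
addons:
  coredns:
    addon_version: "v1.10.1-eksbuild.1"
```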
For more information on upgrading EKS Addons, see "How to Upgrade EKS Cluster Addons"
Adding and Configuring a new EKS Addon
Add a new EKS addon to the `addons` map (the `addons` variable):

```yaml
addons:
  my-addon:
    addon_version: "..."
```
If the new addon requires an EKS IAM Role for a Kubernetes Service Account, perform the following steps:

1. Add a file `addons-custom.tf` to the `eks/cluster` folder.

2. In the file, add an IAM policy document with the permissions required for the addon, and use the `eks-iam-role` module to provision an IAM Role for the addon's Kubernetes Service Account:

   ```hcl
   data "aws_iam_policy_document" "my_addon" {
     statement {
       sid       = "..."
       effect    = "Allow"
       resources = ["..."]

       actions = [
         "...",
         "..."
       ]
     }
   }

   module "my_addon_eks_iam_role" {
     source  = "cloudposse/eks-iam-role/aws"
     version = "2.1.0"

     eks_cluster_oidc_issuer_url = local.eks_cluster_oidc_issuer_url

     service_account_name      = "..."
     service_account_namespace = "..."

     aws_iam_policy_document = [one(data.aws_iam_policy_document.my_addon[*].json)]

     context = module.this.context
   }
   ```

   For reference on how to configure the IAM role and IAM permissions for EKS addons, see `addons.tf`.

3. Add a file `additional-addon-support_override.tf` to the `eks/cluster` folder.

4. In the file, add the IAM Role for the addon's Kubernetes Service Account to the `overridable_additional_addon_service_account_role_arn_map` map:

   ```hcl
   locals {
     overridable_additional_addon_service_account_role_arn_map = {
       my-addon = module.my_addon_eks_iam_role.service_account_role_arn
     }
   }
   ```

   This map will override the default map in the `additional-addon-support.tf` file, and will be merged into the final map together with the default EKS addons `vpc-cni` and `aws-ebs-csi-driver` (for which this component configures and creates IAM Roles for Kubernetes Service Accounts).

5. Follow the instructions in the `additional-addon-support.tf` file if the addon may need to be deployed to Fargate, or has dependencies that Terraform cannot detect automatically.
Requirements
Name | Version |
---|---|
terraform | >= 1.3.0 |
aws | >= 4.9.0 |
random | >= 3.0 |
Providers
Name | Version |
---|---|
aws | >= 4.9.0 |
random | >= 3.0 |
Modules
Name | Source | Version |
---|---|---|
aws_ebs_csi_driver_eks_iam_role | cloudposse/eks-iam-role/aws | 2.1.1 |
aws_ebs_csi_driver_fargate_profile | cloudposse/eks-fargate-profile/aws | 1.3.0 |
aws_efs_csi_driver_eks_iam_role | cloudposse/eks-iam-role/aws | 2.1.1 |
coredns_fargate_profile | cloudposse/eks-fargate-profile/aws | 1.3.0 |
eks | cloudposse/stack-config/yaml//modules/remote-state | 1.5.0 |
eks_cluster | cloudposse/eks-cluster/aws | 2.9.0 |
fargate_pod_execution_role | cloudposse/eks-fargate-profile/aws | 1.3.0 |
fargate_profile | cloudposse/eks-fargate-profile/aws | 1.3.0 |
iam_arns | ../../account-map/modules/roles-to-principals | n/a |
iam_roles | ../../account-map/modules/iam-roles | n/a |
karpenter_label | cloudposse/label/null | 0.25.0 |
region_node_group | ./modules/node_group_by_region | n/a |
this | cloudposse/label/null | 0.25.0 |
utils | cloudposse/utils/aws | 1.3.0 |
vpc | cloudposse/stack-config/yaml//modules/remote-state | 1.5.0 |
vpc_cni_eks_iam_role | cloudposse/eks-iam-role/aws | 2.1.1 |
vpc_ingress | cloudposse/stack-config/yaml//modules/remote-state | 1.5.0 |
Resources
Inputs
Name | Description | Type | Default | Required |
---|---|---|---|---|
additional_tag_map | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`. This is for some rare cases where resources want additional configuration of tags and therefore take a list of maps with tag key, value, and additional configuration. | map(string) | {} | no |
addons | Manages EKS addons resources | | {} | no |
addons_depends_on | If set `true` (recommended), all addons will depend on managed node groups provisioned by this component and therefore will not be installed until nodes are provisioned. See issue #170 for more details. | bool | true | no |
allow_ingress_from_vpc_accounts | List of account contexts to pull VPC ingress CIDR and add to cluster security group. e.g. `{ environment = "ue2", stage = "auto", tenant = "core" }` | any | [] | no |
allowed_cidr_blocks | List of CIDR blocks to be allowed to connect to the EKS cluster | list(string) | [] | no |
allowed_security_groups | List of Security Group IDs to be allowed to connect to the EKS cluster | list(string) | [] | no |
apply_config_map_aws_auth | Whether to execute `kubectl apply` to apply the ConfigMap to allow worker nodes to join the EKS cluster | bool | true | no |
attributes | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`, in the order they appear in the list. New attributes are appended to the end of the list. The elements of the list are joined by the `delimiter` and treated as a single ID element. | list(string) | [] | no |
availability_zone_abbreviation_type | Type of Availability Zone abbreviation (either `fixed` or `short`) to use in names. See https://github.com/cloudposse/terraform-aws-utils for details. | string | "fixed" | no |
availability_zone_ids | List of Availability Zone IDs where subnets will be created. Overrides `availability_zones`. Can be the full name, e.g. `use1-az1`, or just the part after the AZ ID region code, e.g. `-az1`, to allow reusable values across regions. Consider contention for resources and spot pricing in each AZ when selecting. Useful in some regions when using only some AZs and you want to use the same ones across multiple accounts. | list(string) | [] | no |
availability_zones | AWS Availability Zones in which to deploy multi-AZ resources. Ignored if `availability_zone_ids` is set. Can be the full name, e.g. `us-east-1a`, or just the part after the region, e.g. `a`, to allow reusable values across regions. If not provided, resources will be provisioned in every zone with a private subnet in the VPC. | list(string) | [] | no |
aws_auth_yaml_strip_quotes | If true, remove double quotes from the generated aws-auth ConfigMap YAML to reduce spurious diffs in plans | bool | true | no |
aws_ssm_agent_enabled | Set true to attach the required IAM policy for AWS SSM agent to each EC2 instance's IAM Role | bool | false | no |
aws_sso_permission_sets_rbac | (Not Recommended): AWS SSO (IAM Identity Center) permission sets in the EKS deployment account to add to `aws-auth` ConfigMap. Unfortunately, the `aws-auth` ConfigMap does not support SSO permission sets, so we map the generated IAM Role ARN corresponding to the permission set at the time Terraform runs. This is subject to change when any changes are made to the AWS SSO configuration, invalidating the mapping, and requiring a `terraform apply` in this project to update the `aws-auth` ConfigMap and restore access. | | [] | no |
aws_team_roles_rbac | List of `aws-team-roles` (in the target AWS account) to map to Kubernetes RBAC groups. | | [] | no |
cluster_encryption_config_enabled | Set to `true` to enable Cluster Encryption Configuration | bool | true | no |
cluster_encryption_config_kms_key_deletion_window_in_days | Cluster Encryption Config KMS Key Resource argument - key deletion windows in days post destruction | number | 10 | no |
cluster_encryption_config_kms_key_enable_key_rotation | Cluster Encryption Config KMS Key Resource argument - enable kms key rotation | bool | true | no |
cluster_encryption_config_kms_key_id | KMS Key ID to use for cluster encryption config | string | "" | no |
cluster_encryption_config_kms_key_policy | Cluster Encryption Config KMS Key Resource argument - key policy | string | null | no |
cluster_encryption_config_resources | Cluster Encryption Config Resources to encrypt, e.g. `["secrets"]` | list(string) | | no |
cluster_endpoint_private_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. Matches the default of the AWS EKS resource, which is `false` | bool | false | no |
cluster_endpoint_public_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. Matches the default of the AWS EKS resource, which is `true` | bool | true | no |
cluster_kubernetes_version | Desired Kubernetes master version. If you do not specify a value, the latest available version is used | string | null | no |
cluster_log_retention_period | Number of days to retain cluster logs. Requires `enabled_cluster_log_types` to be set. See https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. | number | 0 | no |
cluster_private_subnets_only | Whether or not to enable private subnets or both public and private subnets | bool | false | no |
color | The cluster stage represented by a color; e.g. blue, green | string | "" | no |
context | Single object for setting entire context at once. See description of individual variables for details. Leave string and numeric variables as `null` to use default value. Individual variable settings (non-null) override settings in context object, except for attributes, tags, and additional_tag_map, which are merged. | any | | no |
delimiter | Delimiter to be used between ID elements. Defaults to `-` (hyphen). Set to `""` to use no delimiter at all. | string | null | no |
deploy_addons_to_fargate | Set to `true` (not recommended) to deploy addons to Fargate instead of initial node pool | bool | false | no |
descriptor_formats | Describe additional descriptors to be output in the `descriptors` output map. Map of maps. Keys are names of descriptors. Values are maps of the form `{ format = string, labels = list(string) }`. (Type is `any` so the map values can later be enhanced to provide additional options.) `format` is a Terraform format string to be passed to the `format()` function. `labels` is a list of labels, in order, to pass to the `format()` function. Label values will be normalized before being passed to `format()` so they will be identical to how they appear in `id`. Default is `{}` (`descriptors` output will be empty). | any | {} | no |
eks_component_name | The name of the eks component | string | "eks/cluster" | no |
enabled | Set to false to prevent the module from creating any resources | bool | null | no |
enabled_cluster_log_types | A list of the desired control plane logging to enable. For more information, see https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. Possible values: [`api`, `audit`, `authenticator`, `controllerManager`, `scheduler`] | list(string) | [] | no |
environment | ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' | string | null | no |
fargate_profile_iam_role_kubernetes_namespace_delimiter | Delimiter for the Kubernetes namespace in the IAM Role name for Fargate Profiles | string | "-" | no |
fargate_profile_iam_role_permissions_boundary | If provided, all Fargate Profiles IAM roles will be created with this permissions boundary attached | string | null | no |
fargate_profiles | Fargate Profiles config | | {} | no |
id_length_limit | Limit `id` to this many characters (minimum 6). Set to `0` for unlimited length. Set to `null` to keep the existing setting, which defaults to `0`. Does not affect `id_full`. | number | null | no |
karpenter_iam_role_enabled | Flag to enable/disable creation of IAM role for EC2 Instance Profile that is attached to the nodes launched by Karpenter | bool | false | no |
kube_exec_auth_role_arn | The role ARN for `aws eks get-token` to use. Defaults to the current caller's role. | string | null | no |
kubeconfig_file | Name of `kubeconfig` file to use to configure Kubernetes provider | string | "" | no |
kubeconfig_file_enabled | Set `true` to configure Kubernetes provider with a `kubeconfig` file specified by `kubeconfig_file`. Mainly for when the standard configuration produces a Terraform error. | bool | false | no |
label_key_case | Controls the letter case of the `tags` keys (label names) for tags generated by this module. Does not affect keys of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper`. Default value: `title`. | string | null | no |
label_order | The order in which the labels (ID elements) appear in the `id`. Defaults to ["namespace", "environment", "stage", "name", "attributes"]. You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present. | list(string) | null | no |
label_value_case | Controls the letter case of ID elements (labels) as included in `id`, set as tag values, and output by this module individually. Does not affect values of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper` and `none` (no transformation). Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs. Default value: `lower`. | string | null | no |
labels_as_tags | Set of labels (ID elements) to include as tags in the `tags` output. Default is to include all labels. Tags with empty values will not be included in the `tags` output. Set to `[]` to suppress all generated tags. Notes: The value of the `name` tag, if included, will be the `id`, not the `name`. Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be changed in later chained modules. Attempts to change it will be silently ignored. | set(string) | | no |
legacy_do_not_create_karpenter_instance_profile | When `true` (the default), suppresses creation of the IAM Instance Profile for nodes launched by Karpenter, to preserve the legacy behavior of the `eks/karpenter` component creating it. Set to `false` to enable creation of the IAM Instance Profile, which ensures that both the role and the instance profile have the same lifecycle, and avoids AWS Provider issue #32671. Use in conjunction with the `eks/karpenter` component's `legacy_create_karpenter_instance_profile`. | bool | true | no |
legacy_fargate_1_role_per_profile_enabled | Set to `false` for new clusters to create a single Fargate Pod Execution role for the cluster. Set to `true` for existing clusters to preserve the old behavior of creating a Fargate Pod Execution role for each Fargate Profile. | bool | true | no |
managed_node_groups_enabled | Set false to prevent the creation of EKS managed node groups. | bool | true | no |
map_additional_aws_accounts | Additional AWS account numbers to add to `aws-auth` ConfigMap | list(string) | [] | no |
map_additional_iam_roles | Additional IAM roles to add to `config-map-aws-auth` ConfigMap | | [] | no |
map_additional_iam_users | Additional IAM users to add to `aws-auth` ConfigMap | | [] | no |
map_additional_worker_roles | AWS IAM Role ARNs of worker nodes to add to `aws-auth` ConfigMap | list(string) | [] | no |
name | ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'. This is the only ID element not also included as a `tag`. The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input. | string | null | no |
namespace | ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique | string | null | no |
node_group_defaults | Defaults for node groups in the cluster | | | no |
node_groups | List of objects defining a node group for the cluster | | {} | no |
oidc_provider_enabled | Create an IAM OIDC identity provider for the cluster, then you can create IAM roles to associate with a service account in the cluster, instead of using `kiam` or `kube2iam`. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html | bool | true | no |
public_access_cidrs | Indicates which CIDR blocks can access the Amazon EKS public API server endpoint when enabled. EKS defaults this to a list with `0.0.0.0/0`. | list(string) | | no |
regex_replace_chars | Terraform regular expression (regex) string. Characters matching the regex will be removed from the ID elements. If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | string | null | no |
region | AWS Region | string | n/a | yes |
stage | ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' | string | null | no |
subnet_type_tag_key | The tag used to find the private subnets to find by availability zone. If null, will be looked up in vpc outputs. | string | null | no |
tags | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`). Neither the tag keys nor the tag values will be modified by this module. | map(string) | {} | no |
tenant | ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for | string | null | no |
vpc_component_name | The name of the vpc component | string | "vpc" | no |
Outputs
Name | Description |
---|---|
availability_zones | Availability Zones in which the cluster is provisioned |
eks_auth_worker_roles | List of worker IAM roles that were included in the auth-map ConfigMap. |
eks_cluster_arn | The Amazon Resource Name (ARN) of the cluster |
eks_cluster_certificate_authority_data | The Kubernetes cluster certificate authority data |
eks_cluster_endpoint | The endpoint for the Kubernetes API server |
eks_cluster_id | The name of the cluster |
eks_cluster_identity_oidc_issuer | The OIDC Identity issuer for the cluster |
eks_cluster_managed_security_group_id | Security Group ID that was created by EKS for the cluster. EKS creates a Security Group and applies it to ENI that is attached to EKS Control Plane master nodes and to any managed workloads |
eks_cluster_version | The Kubernetes server version of the cluster |
eks_managed_node_workers_role_arns | List of ARNs for workers in managed node groups |
eks_node_group_arns | List of all the node group ARNs in the cluster |
eks_node_group_count | Count of the worker nodes |
eks_node_group_ids | EKS Cluster name and EKS Node Group name separated by a colon |
eks_node_group_role_names | List of worker nodes IAM role names |
eks_node_group_statuses | Status of the EKS Node Group |
fargate_profile_role_arns | Fargate Profile Role ARNs |
fargate_profile_role_names | Fargate Profile Role names |
fargate_profiles | Fargate Profiles |
karpenter_iam_role_arn | Karpenter IAM Role ARN |
karpenter_iam_role_name | Karpenter IAM Role name |
vpc_cidr | The CIDR of the VPC where this cluster is deployed. |
Related How-to Guides
- How to Load Test in AWS
- How to Tune EKS with AWS Managed Node Groups
- How to Keep Everything Up to Date
- How to Tune SpotInst Parameters for EKS
- How to Upgrade EKS Cluster Addons
- EBS CSI Migration FAQ
- How to Upgrade EKS
References
- cloudposse/terraform-aws-components - Cloud Posse's upstream component
CHANGELOG
Components PR #852
This is a bug fix and feature enhancement update. No action is necessary to upgrade.
Bug Fixes
- Timeouts for Add-Ons are now honored (they were being ignored)
- If you supply a service account role ARN for an Add-On, it will be used, and no new role will be created. Previously it was used, but the component created a new role anyway.
- The EKS EFS controller add-on cannot be deployed to Fargate, so enabling it along with `deploy_addons_to_fargate` will no longer attempt to deploy EFS to Fargate. Note that this means to use the EFS Add-On, you must create a managed node group. Track the status of this feature with this issue.
- If you are using an old VPC component that does not supply `az_private_subnets_map`, this module will now use the older `private_subnet_ids` output.
Add-Ons have `enabled` option
The EKS Add-Ons now have an optional `enabled` flag (defaults to `true`) so that you can selectively disable them in a stack where the inherited configuration has them enabled.
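As a sketch, disabling an inherited add-on in a child stack might look like this (the add-on name is illustrative; the surrounding structure follows the Usage example above):

```yaml
components:
  terraform:
    eks/cluster:
      vars:
        addons:
          aws-ebs-csi-driver:
            enabled: false  # override a parent stack that enables this add-on
```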
Upgrading to v1.270.0
Components PR #795
Removed `identity` roles from cluster RBAC (`aws-auth` ConfigMap)
Previously, this module added `identity` roles configured by the `aws_teams_rbac` input to the `aws-auth` ConfigMap. This never worked, and so now `aws_teams_rbac` is ignored. When upgrading, you may see these roles being removed from the `aws-auth` ConfigMap: this is expected and harmless.
Better support for Managed Node Group Block Device Specifications
Previously, this module only supported specifying the disk size and encryption state for the root volume of Managed Node Groups. Now, the full set of block device specifications is supported, including the ability to specify the device name. This is particularly important when using BottleRocket, which uses a very small root volume for storing the OS and configuration, and exposes a second volume (`/dev/xvdb`) for storing data.
Block Device Migration
Almost all of the attributes of `node_groups` and `node_group_defaults` are now optional. This means you can remove from your configuration any attributes that you were previously setting to `null`.

The `disk_size` and `disk_encryption_enabled` attributes are deprecated. They only apply to `/dev/xvda`, and only provision a `gp2` volume. In order to provide backwards compatibility, they are still supported, and, when specified, cause the new `block_device_map` attribute to be ignored.

The new `block_device_map` attribute is a map of objects. The keys are the names of block devices, and the values are objects with the attributes from the Terraform `launch_template.block-devices` resource.

Note that the new default, when none of `block_device_map`, `disk_size`, or `disk_encryption_enabled` is specified, is to provision a 20GB `gp3` volume for `/dev/xvda`, with encryption enabled. This is a change from the previous default, which provisioned a `gp2` volume instead.
Support for EFS add-on
This module now supports the EFS CSI driver add-on, in very much the same way as it supports the EBS CSI driver add-on. The only difference is that the EFS CSI driver add-on requires that you first provision an EFS file system.
Migration from `eks/efs-controller` to EFS CSI Driver Add-On
If you are currently using the `eks/efs-controller` module, you can migrate to the EFS CSI Driver Add-On by following these steps:
1. Remove or scale to zero Pods any Deployments using the EFS file system.
2. Remove (`terraform destroy`) the `eks/efs-controller` module from your cluster. This will also remove the `efs-sc` StorageClass.
3. Use the `eks/storage-class` module to create a replacement EFS StorageClass `efs-sc`. This component is new and you may need to add it to your cluster.
4. Deploy the EFS CSI Driver Add-On by adding `aws-efs-csi-driver` to the `addons` map (see README).
5. Restore the Deployments you modified in step 1.
More options for specifying Availability Zones
Previously, this module required you to specify the Availability Zones for the cluster in one of two ways:
1. Explicitly, by providing the full AZ names via the `availability_zones` input
2. Implicitly, via private subnets in the VPC

Option 2 is still usually the best way, but now you have additional options:
- You can specify the Availability Zones via the `availability_zones` input without specifying the full AZ names. You can just specify the suffixes of the AZ names, and the module will find the full names for you, using the current region. This is useful for using the same configuration in multiple regions.
- You can specify Availability Zone IDs via the `availability_zone_ids` input. This is useful to ensure that clusters in different accounts are nevertheless deployed to the same Availability Zones. As with the `availability_zones` input, you can specify the suffixes of the AZ IDs, and the module will find the full IDs for you, using the current region.
Support for Karpenter Instance Profile
Previously, this module created an IAM Role for instances launched by Karpenter, but did not create the corresponding Instance Profile, which was instead created by the `eks/karpenter` component. This can cause problems if you delete and recreate the cluster, so for new clusters, this module can now create the Instance Profile as well.

Because this is disruptive to existing clusters, this is not enabled by default. To enable it, set the `legacy_do_not_create_karpenter_instance_profile` input to `false`, and also set the `eks/karpenter` input `legacy_create_karpenter_instance_profile` to `false`.
Upgrading to v1.250.0
Components PR #723
Improved support for EKS Add-Ons
This release improves support for EKS Add-Ons in several ways.
Configuration and Timeouts
The `addons` input now accepts a `configuration_values` input to allow you to configure the add-ons, and various timeout inputs to allow you to fine-tune the timeouts for the add-ons.
Automatic IAM Role Creation
If you enable the `aws-ebs-csi-driver` or `vpc-cni` add-ons, the module will automatically create the required Service Account IAM Role and attach it to the add-on.
Add-Ons can be deployed to Fargate
If you are using Karpenter and not provisioning any nodes with this module, the `coredns` and `aws-ebs-csi-driver` add-ons can be deployed to Fargate. (They must be able to run somewhere in the cluster or else the deployment will fail.)

To cause the add-ons to be deployed to Fargate, set the `deploy_addons_to_fargate` input to `true`.

Note about CoreDNS: If you want to deploy CoreDNS to Fargate, as of this writing you must set the `configuration_values` input for CoreDNS to `'{"computeType": "Fargate"}'`. If you want to deploy CoreDNS to EC2 instances, you must NOT include the `computeType` configuration value.
Availability Zones implied by Private Subnets
You can now avoid specifying Availability Zones for the cluster anywhere. If all of the possible Availability Zones inputs are empty, the module will use the Availability Zones implied by the private subnets. That is, it will deploy the cluster to all of the Availability Zones in which the VPC has private subnets.
Optional support for 1 Fargate Pod Execution Role per Cluster
Previously, this module created a separate Fargate Pod Execution Role for each Fargate Profile it created. This is unnecessary and excessive, and can cause problems due to name collisions, but is otherwise merely inefficient, so it is not important to fix it on existing, working clusters. This update brings a feature that causes the module to create at most one Fargate Pod Execution Role per cluster.
This change is recommended for all NEW clusters, but only NEW clusters. Because it is a breaking change, it is not enabled by default. To enable it, set the `legacy_fargate_1_role_per_profile_enabled` variable to `false`.
WARNING: If you enable this feature on an existing cluster, and that cluster is using Karpenter, the update could destroy all of your existing Karpenter-provisioned nodes. Depending on your Karpenter version, this could leave you with stranded EC2 instances (still running, but not managed by Karpenter or visible to the cluster) and an interruption of service, and possibly other problems. If you are using Karpenter and want to enable this feature, the safest way is to destroy the existing cluster and create a new one with this feature enabled.
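For a new cluster, enabling the single-role behavior can be sketched as follows (variable name from this changelog; the surrounding structure follows the Usage example above):

```yaml
components:
  terraform:
    eks/cluster:
      vars:
        # Safe only on NEW clusters; see the warning above.
        legacy_fargate_1_role_per_profile_enabled: false
```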