
AWS

Discover a wide range of reusable Terraform modules tailored for managing AWS infrastructure. These modules simplify the deployment and management of AWS services through consistent and scalable code.

backup

Terraform module to provision AWS Backup, a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services such as Amazon EBS volumes, Amazon EC2 instances, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, and AWS Storage Gateway volumes.

[!NOTE]
The syntax for declaring a backup schedule changed as of release 0.14.0; follow the instructions in the 0.13.x to 0.14.x+ migration guide.

[!WARNING]
The deprecated variables have been removed as of 1.x.x. Please use the new variables as described in the 0.13.x to 0.14.x+ migration guide.
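A minimal sketch of instantiating the module with the post-0.14.x rules-based schedule; the registry source address, version, and the exact shape of the rules and selection_tags inputs are assumptions here, so confirm them against the module's README and the migration guide.

```hcl
module "backup" {
  source  = "cloudposse/backup/aws"
  version = "~> 1.0" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "app"

  # Post-0.14.x rule syntax (shape assumed; see the migration guide)
  rules = [
    {
      name              = "daily"
      schedule          = "cron(0 5 ? * * *)" # every day at 05:00 UTC
      start_window      = 320
      completion_window = 10080
      lifecycle = {
        delete_after = 35 # expire recovery points after 35 days
      }
    }
  ]

  # Back up resources carrying this tag (selector input shape assumed)
  selection_tags = [
    {
      type  = "STRINGEQUALS"
      key   = "aws-backup"
      value = "true"
    }
  ]
}
```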

cicd

Terraform module to create AWS CodePipeline with CodeBuild for CI/CD.

This module supports three use cases:

  1. GitHub -> S3 (build artifact) -> Elastic Beanstalk (running application stack). The module gets the code from a GitHub repository (public or private), builds it by executing the buildspec.yml file from the repository, pushes the built artifact to an S3 bucket, and deploys the artifact to Elastic Beanstalk running one of the supported stacks (e.g. Java, Go, Node, IIS, Python, Ruby, etc.).
    • http://docs.aws.amazon.com/codebuild/latest/userguide/sample-maven-5m.html
    • http://docs.aws.amazon.com/codebuild/latest/userguide/sample-nodejs-hw.html
    • http://docs.aws.amazon.com/codebuild/latest/userguide/sample-go-hw.html
  2. GitHub -> ECR (Docker image) -> Elastic Beanstalk (running Docker stack). The module gets the code from a GitHub repository, builds a Docker image from it by executing the buildspec.yml and Dockerfile files from the repository, pushes the Docker image to an ECR repository, and deploys the Docker image to Elastic Beanstalk running the Docker stack.
    • http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
  3. GitHub -> ECR (Docker image). The module gets the code from a GitHub repository, builds a Docker image from it by executing the buildspec.yml and Dockerfile files from the repository, and pushes the Docker image to an ECR repository. This is used when we want to build a Docker image from the code and push it to ECR without deploying to Elastic Beanstalk. To activate this mode, don't specify the app and env attributes for the module (see the sketch after this list).
    • http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
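As referenced above, the third use case (GitHub -> ECR only) might look roughly like the sketch below; the registry source address and the input names (repo_owner, repo_name, image_repo_name, etc.) are assumptions, so check the module's documented variables before use.

```hcl
module "build" {
  source  = "cloudposse/cicd/aws"
  version = "~> 0.19" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "app"

  # Source: a GitHub repository (input names assumed)
  repo_owner         = "example-org"
  repo_name          = "example-app"
  branch             = "main"
  github_oauth_token = "xxxxxxxx" # prefer a secret reference in real use

  # Build: CodeBuild executes buildspec.yml and Dockerfile from the repo
  build_image = "aws/codebuild/standard:7.0"

  # No `app`/`env` attributes, so nothing is deployed to Elastic Beanstalk;
  # the resulting Docker image is only pushed to ECR (input name assumed).
  image_repo_name = "example-app"
}
```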

cloudtrail

Terraform module to provision an AWS CloudTrail. The module accepts an encrypted S3 bucket with versioning to store CloudTrail logs. The bucket could be from the same AWS account or from a different account. This is useful if an organization uses a number of separate AWS accounts to isolate the Audit environment from other environments (production, staging, development). In this case, you create CloudTrail in the production environment (production AWS account), while the S3 bucket to store the CloudTrail logs is created in the Audit AWS account, restricting access to the logs only to the users/groups from the Audit account.

cloudtrail-s3-bucket

Terraform module to provision an S3 bucket with a built-in policy to allow CloudTrail logs. This is useful if an organization uses a number of separate AWS accounts to isolate the Audit environment from other environments (production, staging, development). In this case, you create CloudTrail in the production environment (Production AWS account), while the S3 bucket to store the CloudTrail logs is created in the Audit AWS account, restricting access to the logs only to the users/groups from the Audit account. The module supports the following (a combined usage sketch with the cloudtrail module follows the list):

  1. Forced server-side encryption at rest for the S3 bucket
  2. S3 bucket versioning to easily recover from both unintended user actions and application failures
  3. S3 bucket is protected from deletion if it's not empty (force_destroy set to false)
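The combined sketch referenced above pairs the two modules in a single account; the registry source addresses, the bucket_id output, and the s3_bucket_name input are assumptions drawn from typical usage, so verify them against each module's README.

```hcl
module "cloudtrail_s3_bucket" {
  source  = "cloudposse/cloudtrail-s3-bucket/aws"
  version = "~> 0.26" # assumed; pin to a released version

  namespace = "eg"
  stage     = "audit"
  name      = "cloudtrail"
}

module "cloudtrail" {
  source  = "cloudposse/cloudtrail/aws"
  version = "~> 0.21" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "cloudtrail"

  enable_log_file_validation    = true
  include_global_service_events = true
  is_multi_region_trail         = false
  enable_logging                = true

  # Point the trail at the bucket created above (output name assumed)
  s3_bucket_name = module.cloudtrail_s3_bucket.bucket_id
}
```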

cloudwatch-events

This is the terraform-aws-cloudwatch-events module, which creates CloudWatch Events rules and their corresponding targets.

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.
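A hedged sketch of wiring one rule to an SNS topic target; the input names (cloudwatch_event_rule_pattern, cloudwatch_event_target_arn) are assumptions about this module's interface, and the pattern may need to be passed as a JSON string in some versions.

```hcl
resource "aws_sns_topic" "alerts" {
  name = "ec2-state-change-alerts"
}

module "cloudwatch_event" {
  source  = "cloudposse/cloudwatch-events/aws"
  version = "~> 0.6" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "ec2-state-change"

  # Match EC2 instance state-change notifications (input names assumed;
  # some versions may expect jsonencode(...) here instead of an object)
  cloudwatch_event_rule_description = "Notify on EC2 instance state changes"
  cloudwatch_event_rule_pattern = {
    source        = ["aws.ec2"]
    "detail-type" = ["EC2 Instance State-change Notification"]
  }

  # Route matched events to the SNS topic above
  cloudwatch_event_target_arn = aws_sns_topic.alerts.arn
}
```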

config-storage

This module creates an S3 bucket suitable for storing AWS Config data. It implements a configurable log retention policy, which allows you to efficiently manage logs across different storage classes (e.g. Glacier) and ultimately expire the data altogether. It enables server-side default encryption (https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html) and blocks public access to the bucket by default (https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html).
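A minimal, hedged instantiation; the registry source address and the lifecycle-related input names shown here are assumptions about how the retention policy is configured, so check the module's variables.

```hcl
module "config_storage" {
  source  = "cloudposse/config-storage/aws"
  version = "~> 1.0" # assumed; pin to a released version

  namespace = "eg"
  stage     = "audit"
  name      = "config"

  # Retention policy (input names assumed): transition old data to
  # Glacier, then expire it entirely.
  glacier_transition_days = 90
  expiration_days         = 365
}
```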

dynamic-subnets

Terraform module to provision public and private subnets in an existing VPC.

Note: This module is intended for use with an existing VPC and an existing Internet Gateway. To create a new VPC, use the terraform-aws-vpc module.

Note: Due to Terraform limitations, many optional inputs to this module are specified as a list(string) that can have zero or one element, rather than as a string that could be empty or null. The designation of an input as a list type does not necessarily mean that you can supply more than one value in the list, so check the input's description before supplying more than one value.

The core function of this module is to create two sets of subnets: a "public" set with bidirectional access to the public internet, and a "private" set behind a firewall with egress-only access to the public internet. This includes dividing up a given CIDR range so that each subnet gets its own distinct CIDR range within that range, and then creating those subnets in the appropriate availability zones. The intention is to keep this module relatively simple and easy to use for the most popular use cases. In its default configuration, this module creates one public subnet and one private subnet in each of the specified availability zones. The public subnets are configured for bidirectional traffic to the public internet, while the private subnets are configured for egress-only traffic to the public internet. Rather than provide a wealth of configuration options allowing for numerous special cases, this module provides some common options and further provides the ability to suppress the creation of resources, allowing you to create and configure them as you like from outside this module. For example, rather than give you the option to customize the Network ACL, the module gives you the option to create a completely open one (and control access via Security Groups and other means) or not create one at all, allowing you to create and configure one yourself.

Public subnets

This module defines a public subnet as one that has direct access to an internet gateway and can accept incoming connection requests. In the simplest configuration, the module creates a single route table with a default route targeted to the VPC's internet gateway, and associates all the public subnets with that single route table. Likewise, it creates a single Network ACL with associated rules allowing all ingress and all egress, and associates that ACL with all the public subnets.

Private subnets

A private subnet may be able to initiate traffic to the public internet through a NAT gateway, a NAT instance, or an egress-only internet gateway, or it might only have direct access to other private subnets. In the simple configuration, for IPv4 and/or IPv6 with NAT64 enabled via public_dns64_enabled or private_dns64_enabled, the module creates 1 NAT Gateway or NAT Instance for each private subnet (in the public subnet in the same availability zone), creates 1 route table for each private subnet, and adds to that route table a default route from the subnet to its NAT Gateway or Instance. For IPv6, the module adds a route to the Egress-Only Internet Gateway configured via input. As with the public subnets, the module creates a single Network ACL with associated rules allowing all ingress and all egress, and associates that ACL with all the private subnets.

Customization for special use cases

Various features are controlled by bool inputs with names ending in _enabled. By changing the default values, you can enable or disable creation of public subnets, private subnets, route tables, NAT gateways, NAT instances, or Network ACLs. So for example, you could use this module to create only private subnets and the open Network ACL, and then add your own route table associations to the subnets and route all non-local traffic to a Transit Gateway or VPN.

CIDR allocation

For IPv4, you provide a CIDR and the module divides the address space into the largest CIDRs possible that are still small enough to accommodate max_subnet_count subnets of each enabled type (public or private). When max_subnet_count is left at the default 0, it is set to the total number of availability zones in the region. Private subnets are allocated out of the first half of the reserved range, and public subnets are allocated out of the second half.

For IPv6, you provide a /56 CIDR and the module assigns /64 subnets of that CIDR in consecutive order starting at zero. (You have the option of specifying a list of CIDRs instead.) As with IPv4, enough CIDRs are allocated to cover max_subnet_count private and public subnets (when both are enabled, which is the default), with the private subnets being allocated out of the lower half of the reservation and the public subnets allocated out of the upper half.
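Putting the above together, a typical invocation might look like this sketch, assuming an existing VPC and Internet Gateway; the input names follow common usage of this module but differ across versions, so confirm them against its README.

```hcl
module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "~> 2.4" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "app"

  vpc_id             = "vpc-0123456789abcdef0"   # existing VPC
  igw_id             = ["igw-0123456789abcdef0"] # existing Internet Gateway
  ipv4_cidr_block    = ["10.0.0.0/16"]           # carved into per-AZ subnets
  availability_zones = ["us-east-1a", "us-east-1b"]

  # One NAT Gateway per private subnet by default; disable to save cost
  nat_gateway_enabled = true

  # 0 = size the CIDR allocation for one subnet pair per AZ in the region
  max_subnet_count = 0
}
```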

ec2-autoscale-group

Terraform module to provision an Auto Scaling Group and Launch Template on AWS. The module also creates AutoScaling Policies and CloudWatch Metric Alarms to monitor CPU utilization on the EC2 instances and scale the number of instances in the AutoScaling Group up or down. If you don't want to use the provided functionality, or want to provide your own policies, disable it by setting the variable autoscaling_policies_enabled to false. At present, although you can set the created AutoScaling Policy type to any legal value, in practice only SimpleScaling is supported. To use a StepScaling or TargetTrackingScaling policy, create it yourself and then pass it in the alarm_actions field of custom_alarms.
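A hedged sketch with the built-in SimpleScaling policies enabled; the input names (image_id, subnet_ids, the CPU threshold variables, etc.) are assumptions to verify against the module's documentation.

```hcl
module "autoscale_group" {
  source  = "cloudposse/ec2-autoscale-group/aws"
  version = "~> 0.35" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "web"

  image_id      = "ami-0123456789abcdef0"
  instance_type = "t3.medium"
  subnet_ids    = ["subnet-aaa11111", "subnet-bbb22222"]

  min_size = 2
  max_size = 6

  # Built-in SimpleScaling policies plus CPU CloudWatch alarms.
  # Set to false to manage scaling policies yourself.
  autoscaling_policies_enabled           = true
  cpu_utilization_high_threshold_percent = 80
  cpu_utilization_low_threshold_percent  = 20
}
```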

ec2-instance-group

Terraform module for providing N general-purpose EC2 hosts. If you only need to provision a single EC2 instance, consider using the terraform-aws-ec2-instance module instead.

IMPORTANT: This module by design does not provision an AutoScaling group. It was designed to provision a discrete number of instances suitable for running stateful services such as databases (e.g. Kafka, Redis, etc.).

Included features (a usage sketch follows this list):

  • Automatically create a Security Group
  • Option to switch EIP attachment
  • CloudWatch monitoring and automatic reboot if instance hangs
  • Assume Role capability
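The usage sketch referenced above, assuming input names along these lines (instance_count, ami, vpc_id, subnet); confirm the exact variable names in the module's README.

```hcl
module "kafka_brokers" {
  source  = "cloudposse/ec2-instance-group/aws"
  version = "~> 0.12" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "kafka"

  instance_count = 3 # a fixed number of stateful hosts, not an ASG
  ami            = "ami-0123456789abcdef0"
  instance_type  = "r5.large"

  vpc_id       = "vpc-0123456789abcdef0"
  subnet       = "subnet-aaa11111" # input name assumed
  ssh_key_pair = "eg-prod-kafka"
}
```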

eks-node-group

Terraform module to provision an EKS Managed Node Group for Elastic Kubernetes Service. Instantiate it multiple times to create EKS Managed Node Groups with specific settings such as GPUs, EC2 instance types, or autoscale parameters.

IMPORTANT: When SSH access is enabled without specifying a source security group, this module provisions EKS Node Group nodes that are globally accessible on the SSH port (22). AWS recommends that no security group allow unrestricted ingress access to port 22.
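A hedged example of one managed node group, restricting optional SSH access to a known security group rather than leaving port 22 open to the world; the input names are assumptions to check against the module.

```hcl
module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "~> 2.12" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "workers"

  cluster_name   = "eg-prod-eks-cluster"
  subnet_ids     = ["subnet-aaa11111", "subnet-bbb22222"]
  instance_types = ["t3.large"]

  desired_size = 3
  min_size     = 3
  max_size     = 6

  # If SSH access is enabled, also pass a source security group so the
  # nodes are not reachable on port 22 from anywhere (input name assumed).
  ssh_access_security_group_ids = ["sg-0123456789abcdef0"]
}
```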

elastic-beanstalk-environment

Terraform module to provision AWS Elastic Beanstalk environment

Searching for Maintainer!

The Cloud Posse team no longer utilizes Beanstalk all that much, but this module is still fairly popular. In an effort to give it the attention it deserves, we're searching for a volunteer maintainer to manage this specific repository's issues and pull requests (of which a number are already stacked up). This is a great opportunity for anyone who is looking to solidify and strengthen their Terraform skillset while also giving back to the SweetOps open source community!
You can learn more about being a SweetOps contributor on our docs site here. If you're interested, reach out to us via the #terraform channel in the SweetOps Slack or directly via email @ [email protected]

global-accelerator

This module provisions AWS Global Accelerator. Multiple listeners can be specified when instantiating this module. The endpoint-group submodule provisions a Global Accelerator Endpoint Group for a listener created by this module and can be instantiated multiple times in order to provision multiple Endpoint Groups. The reason endpoint-group is its own submodule is that an AWS Provider needs to be instantiated for the region in which the Endpoint Group's endpoints reside. For more information, see the endpoint-group documentation.
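A heavily hedged sketch of the provider-per-region pattern described above; the listener and endpoint-group input shapes and the output name used here are assumptions, so rely on the endpoint-group documentation for the real interface.

```hcl
provider "aws" {
  region = "us-east-1"
}

# The Endpoint Group's endpoints live in another region, so that region's
# provider is passed to the submodule explicitly.
provider "aws" {
  alias  = "us-west-2"
  region = "us-west-2"
}

module "global_accelerator" {
  source  = "cloudposse/global-accelerator/aws"
  version = "~> 0.6" # assumed; pin to a released version

  name = "app"

  # Listener shape assumed; see the module's README
  listeners = [
    {
      protocol    = "TCP"
      port_ranges = [{ from_port = 443, to_port = 443 }]
    }
  ]
}

module "endpoint_group" {
  source  = "cloudposse/global-accelerator/aws//modules/endpoint-group"
  version = "~> 0.6" # assumed

  providers = {
    aws = aws.us-west-2 # region of the endpoints, not of the accelerator
  }

  # Output and input names assumed
  listener_arn = module.global_accelerator.listener_arns[0]
  config = {
    endpoint_configuration = [
      { endpoint_id = "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/example/0123456789abcdef" }
    ]
  }
}
```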

iam-policy

This terraform-aws-iam-policy module is a wrapper around the Terraform aws_iam_policy_document data source, enhancing it to provide multiple ways to create an AWS IAM Policy document (as a JSON string). It is primarily intended to simplify creating a policy in Terraform from external inputs. In particular, if you want to specify a policy in a tfvars file as a Terraform object, or in YAML as part of an Atmos stack (which is then turned into a Terraform object input), this module provides an object type declaration to use for the input and can then make the translation to JSON for you. If you can supply the policy as JSON to begin with, or can conveniently use the aws_iam_policy_document Terraform data source directly, then this module is not helpful in your case.

[!NOTE] AWS's IAM policy document syntax allows for replacement of policy variables within a statement using ${...}-style notation, which conflicts with Terraform's interpolation syntax. In order to use AWS policy variables with this module, use &{...} notation for interpolations that should be processed by AWS rather than by Terraform. Nevertheless, any ${...}-style notations that appear in strings passed into this module (somehow escaping Terraform interpolation earlier) will be passed through to the policy document unchanged.
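A rough sketch of the &{...} escaping described above; the top-level input name (iam_policy) and object shape are assumptions, so check the module's variable declarations.

```hcl
module "user_home_policy" {
  source  = "cloudposse/iam-policy/aws"
  version = "~> 2.0" # assumed; pin to a released version

  # Input name and object shape assumed; see the module's variables.tf
  iam_policy = [{
    statements = [
      {
        sid     = "AllowOwnPrefix"
        effect  = "Allow"
        actions = ["s3:GetObject", "s3:PutObject"]
        # &{aws:username} is passed through to AWS as ${aws:username},
        # avoiding a clash with Terraform's interpolation syntax.
        resources = ["arn:aws:s3:::example-bucket/home/&{aws:username}/*"]
      }
    ]
  }]
}
```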

iam-s3-user

Terraform module to provision a basic IAM user with permissions to access S3 resources, e.g. to give the user read/write/delete access to the objects in an S3 bucket. Suitable for CI/CD systems (e.g. TravisCI, CircleCI) or systems which are external to AWS that cannot leverage AWS IAM Instance Profiles or AWS OIDC.

By default, IAM users, groups, and roles have no access to AWS resources. IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that IAM policies be applied directly to groups and roles but not users. This module intentionally attaches an IAM policy directly to the user and does not use groups.

The IAM user name is constructed using terraform-null-label and some input is required. The simplest input is name. By default the name will be converted to lower case and all non-alphanumeric characters except for hyphen will be removed. See the documentation for terraform-null-label to learn how to override these defaults if desired.

If an AWS Access Key is created, it is stored either in SSM Parameter Store or is provided as a module output, but not both. Using SSM Parameter Store is recommended because module outputs are stored in plaintext in the Terraform state file.

iam-system-user

Terraform module to provision a basic IAM system user suitable for CI/CD systems (e.g. TravisCI, CircleCI) or systems which are external to AWS that cannot leverage AWS IAM Instance Profiles or AWS OIDC. We do not recommend creating IAM users this way for any other purpose.

By default, IAM users, groups, and roles have no access to AWS resources. IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that IAM policies be applied directly to groups and roles but not users. This module intentionally attaches an IAM policy directly to the user and does not use groups.

The IAM user name is constructed using terraform-null-label and some input is required. The simplest input is name. By default the name will be converted to lower case and all non-alphanumeric characters except for hyphen will be removed. See the documentation for terraform-null-label to learn how to override these defaults if desired.

If an AWS Access Key is created, it is stored either in SSM Parameter Store or is provided as a module output, but not both. Using SSM Parameter Store is recommended because module outputs are stored in plaintext in the Terraform state file.
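A minimal sketch; the inputs controlling SSM storage of the access key and direct policy attachment (ssm_enabled, policy_arns_map) are assumptions to verify against the module.

```hcl
module "ci_user" {
  source  = "cloudposse/iam-system-user/aws"
  version = "~> 1.2" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "circleci"

  # Store the generated access key in SSM Parameter Store instead of
  # exposing it as a Terraform output (input name assumed).
  ssm_enabled = true

  # Attach an existing policy directly to the user (input name assumed)
  policy_arns_map = {
    deploy = "arn:aws:iam::123456789012:policy/eg-prod-deploy"
  }
}
```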

kms-key

Terraform module to provision a KMS key with an alias. It can be used with chamber for managing secrets by storing them in Amazon EC2 Systems Manager Parameter Store (a usage sketch follows the links below).

  • https://aws.amazon.com/systems-manager/features
  • https://aws.amazon.com/blogs/mt/the-right-way-to-store-secrets-using-parameter-store
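The sketch referenced above provisions a key and alias for chamber-managed SSM parameters; the version constraint and inputs such as deletion_window_in_days are assumptions to confirm against the module.

```hcl
module "chamber_kms_key" {
  source  = "cloudposse/kms-key/aws"
  version = "~> 0.12" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "chamber"

  description             = "KMS key for chamber-managed SSM parameters"
  deletion_window_in_days = 10
  enable_key_rotation     = true

  # chamber looks for an alias named "parameter_store_key" by default
  alias = "alias/parameter_store_key"
}
```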

multi-az-subnets

Terraform module for provisioning multi-AZ subnets. The module creates private and public subnets in the provided Availability Zones. The public subnets are routed to the Internet Gateway specified by var.igw_id. The nat_gateway_enabled flag controls the creation of NAT Gateways in the public subnets. The private subnets are routed to the NAT Gateways provided in the var.az_ngw_ids map. If you are creating subnets inside a VPC, consider using cloudposse/terraform-aws-dynamic-subnets instead.

s3-bucket

This module creates an S3 bucket with support for versioning, lifecycles, object locks, replication, encryption, ACL, bucket object policies, and static website hosting.

For backward compatibility, it sets the S3 bucket ACL to private and the s3_object_ownership to ObjectWriter. Moving forward, setting s3_object_ownership to BucketOwnerEnforced is recommended, and doing so automatically disables the ACL.

This module blocks public access to the bucket by default. See block_public_acls, block_public_policy, ignore_public_acls, and restrict_public_buckets to change the settings. See AWS documentation for more details.

This module can optionally create an IAM User with access to the S3 bucket. This is inherently insecure in that to enable anyone to become the User, access keys must be generated, and anything generated by Terraform is stored unencrypted in the Terraform state. See the Terraform documentation for more details.

The best way to grant access to the bucket is to grant one or more IAM Roles access to the bucket via privileged_principal_arns. This IAM Role can be assumed by EC2 instances via their Instance Profile, or by Kubernetes (EKS) services using IRSA. Entities outside of AWS can assume the Role via OIDC. (See this example of connecting GitHub to enable GitHub Actions to assume AWS IAM roles, or use this Cloud Posse component if you are already using the Cloud Posse reference architecture.)

If neither of those approaches works, then as a last resort you can set user_enabled = true and this module will provision a basic IAM user with permissions to access the bucket. We do not recommend creating IAM users this way for any other purpose.

If an IAM user is created, the IAM user name is constructed using terraform-null-label and some input is required. The simplest input is name. By default the name will be converted to lower case and all non-alphanumeric characters except for hyphen will be removed. See the documentation for terraform-null-label to learn how to override these defaults if desired.

If an AWS Access Key is created, it is stored either in SSM Parameter Store or is provided as a module output, but not both. Using SSM Parameter Store is recommended because that will keep the secret from being easily accessible via Terraform remote state lookup, but the key will still be stored unencrypted in the Terraform state in any case.
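A sketch following the recommendations above: BucketOwnerEnforced ownership, no IAM user, and access granted to an existing IAM role via privileged_principal_arns; the exact shape of that input (and privileged_principal_actions) is an assumption here, so verify it against the module's README.

```hcl
module "app_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "~> 4.0" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "app-data"

  s3_object_ownership = "BucketOwnerEnforced" # disables ACLs, as recommended
  versioning_enabled  = true
  user_enabled        = false                 # avoid creating an IAM user

  # Grant an existing role access to a prefix (input shape assumed)
  privileged_principal_actions = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
  privileged_principal_arns = [
    {
      "arn:aws:iam::123456789012:role/eg-prod-app" = ["app/"]
    }
  ]
}
```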

s3-log-storage

This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail.

This module implements a configurable log retention policy, which allows you to efficiently manage logs across different storage classes (e.g. Glacier) and ultimately expire the data altogether. It enables default server-side encryption and blocks public access to the bucket by default.

As of March 2022, this module is primarily a wrapper around our s3-bucket module, with some options preconfigured and SQS notifications added. If it does not exactly suit your needs, you may want to use the s3-bucket module directly.

As of version 1.0 of this module, most of the inputs are marked nullable = false, meaning you can pass in null and get the default value rather than having the input actually set to null. This is technically a breaking change from previous versions, but since null was not a valid value for most of these variables, we are not considering it a truly breaking change. However, be mindful that the behavior of inputs set to null may change in the future, so we recommend setting them to the desired value explicitly.

sso

This module configures AWS Single Sign-On (SSO). AWS SSO makes it easy to centrally manage access to multiple AWS accounts and business applications and provide users with single sign-on access to all their assigned accounts and applications from one place. With AWS SSO, you can easily manage access and user permissions to all of your accounts in AWS Organizations centrally. AWS SSO configures and maintains all the necessary permissions for your accounts automatically, without requiring any additional setup in the individual accounts. You can assign user permissions based on common job functions and customize these permissions to meet your specific security requirements. AWS SSO also includes built-in integrations to many business applications, such as Salesforce, Box, and Microsoft 365.

With AWS SSO, you can create and manage user identities in AWS SSO's identity store, or easily connect to your existing identity source, including Microsoft Active Directory, Okta Universal Directory, and Azure Active Directory (Azure AD). AWS SSO allows you to select user attributes, such as cost center, title, or locale, from your identity source, and then use them for attribute-based access control in AWS.

tfstate-backend

Terraform module to provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. The module supports the following:

  1. Forced server-side encryption at rest for the S3 bucket
  2. S3 bucket versioning to allow for Terraform state recovery in the case of accidental deletions and human errors
  3. State locking and consistency checking via DynamoDB table to prevent concurrent operations
  4. DynamoDB server-side encryption
https://www.terraform.io/docs/backends/types/s3.html

NOTE: The operators of the module (IAM Users) must have permissions to create S3 buckets and DynamoDB tables when performing terraform plan and terraform apply.

NOTE: This module cannot be used to apply changes to the mfa_delete feature of the bucket. Changes regarding mfa_delete can only be made manually using the root credentials with MFA of the AWS Account where the bucket resides. Please see: https://github.com/terraform-providers/terraform-provider-aws/issues/629
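A hedged end-to-end sketch: provision the bucket and lock table, then point the S3 backend at them; the bucket and table names in the backend block are illustrative and should be taken from the module outputs after the first apply.

```hcl
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "~> 1.4" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "terraform"

  force_destroy = false # keep the bucket even if it still contains state
}

# After the first apply, configure the backend to use the new bucket/table
# (names here are illustrative; take them from the module outputs).
terraform {
  backend "s3" {
    region         = "us-east-1"
    bucket         = "eg-prod-terraform-state"
    key            = "terraform.tfstate"
    dynamodb_table = "eg-prod-terraform-state-lock"
    encrypt        = true
  }
}
```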

transit-gateway

Terraform module to provision:

  • AWS Transit Gateway
  • AWS Resource Access Manager (AWS RAM) Resource Share to share the Transit Gateway with the Organization or another AWS Account (configurable via the variables ram_resource_share_enabled and ram_principals)
  • Transit Gateway route table
  • Transit Gateway VPC attachments to connect multiple VPCs via the Transit Gateway
  • Transit Gateway route table propagations to create propagated routes and allow traffic from the Transit Gateway to the VPC attachments
  • Transit Gateway route table associations to allow traffic from the VPC attachments to the Transit Gateway
  • Transit Gateway static routes (static routes have a higher precedence than propagated routes)
  • Subnet routes to route traffic from the subnets in each VPC to the other Transit Gateway VPC attachments
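A heavily hedged sketch of attaching two VPCs; the shape of the config map (keys such as vpc_id, subnet_ids, subnet_route_table_ids, route_to, static_routes) is an assumption to verify against the module's README.

```hcl
module "transit_gateway" {
  source  = "cloudposse/transit-gateway/aws"
  version = "~> 0.11" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "tgw"

  # Set true (plus ram_principals) to share the Transit Gateway via AWS RAM
  ram_resource_share_enabled = false

  # Per-VPC attachment configuration (map shape assumed)
  config = {
    app = {
      vpc_id                 = "vpc-0aaa111122223333a"
      vpc_cidr               = "10.0.0.0/16"
      subnet_ids             = ["subnet-aaa11111"]
      subnet_route_table_ids = ["rtb-aaa11111"]
      route_to               = ["data"] # subnet routes toward the other attachment
      static_routes          = []
    }
    data = {
      vpc_id                 = "vpc-0bbb444455556666b"
      vpc_cidr               = "10.1.0.0/16"
      subnet_ids             = ["subnet-bbb22222"]
      subnet_route_table_ids = ["rtb-bbb22222"]
      route_to               = ["app"]
      static_routes          = []
    }
  }
}
```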

vpc-peering-multi-account

Terraform module to create a peering connection between any two VPCs existing in different AWS accounts. This module supports performing this action from a third account (e.g. a "root" account) by specifying the roles to assume for each member account.

IMPORTANT: AWS allows a multi-account VPC Peering Connection to be deleted from either the requester's or accepter's side. However, Terraform only allows the VPC Peering Connection to be deleted from the requester's side by removing the corresponding aws_vpc_peering_connection resource from your configuration. Read more about this on Terraform's documentation portal.
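A hedged sketch of peering VPCs in two member accounts from a third account; the requester_*/accepter_* input names and auto_accept are assumptions, so verify them against the module.

```hcl
module "vpc_peering_cross_account" {
  source  = "cloudposse/vpc-peering-multi-account/aws"
  version = "~> 0.19" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "peering"

  # Requester side (account A), accessed by assuming a role from the "root" account
  requester_aws_assume_role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  requester_region              = "us-east-1"
  requester_vpc_id              = "vpc-0aaa111122223333a"

  # Accepter side (account B)
  accepter_aws_assume_role_arn = "arn:aws:iam::222222222222:role/OrganizationAccountAccessRole"
  accepter_region              = "us-west-2"
  accepter_vpc_id              = "vpc-0bbb444455556666b"

  auto_accept = true # assumed input; accept the peering on the accepter side
}
```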

vpn-connection

Terraform module to provision a site-to-site VPN connection between a VPC and an on-premises network. The module does the following:

  • Creates a Virtual Private Gateway (VPG) and attaches it to the VPC
  • Creates a Customer Gateway (CGW) pointing to the provided IP address of the Internet-routable external interface on the on-premises network
  • Creates a Site-to-Site Virtual Private Network (VPN) connection and assigns it to the VPG and CGW
  • Requests automatic route propagation between the VPG and the provided route tables in the VPC
  • If the VPN connection is configured to use static routes, provisions a static route between the VPN connection and the CGW
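A minimal sketch with static routing; input names such as customer_gateway_ip_address, route_table_ids, and vpn_connection_static_routes_destinations are assumptions to confirm against the module.

```hcl
module "vpn_connection" {
  source  = "cloudposse/vpn-connection/aws"
  version = "~> 1.0" # assumed; pin to a released version

  namespace = "eg"
  stage     = "prod"
  name      = "office"

  vpc_id                      = "vpc-0123456789abcdef0"
  vpn_gateway_amazon_side_asn = 64512

  # On-premises side
  customer_gateway_bgp_asn    = 65000
  customer_gateway_ip_address = "203.0.113.10" # internet-routable interface

  # Propagate VPN routes into these VPC route tables
  route_table_ids = ["rtb-aaa11111", "rtb-bbb22222"]

  # Static routing instead of BGP
  vpn_connection_static_routes_only         = true
  vpn_connection_static_routes_destinations = ["192.168.0.0/16"]
}
```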