acm-request-certificate
Terraform module to request an ACM certificate for a domain and add a CNAME record to the DNS zone to complete certificate validation
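For orientation, here is a minimal usage sketch; the registry source address and the input names (`domain_name`, `subject_alternative_names`, `process_domain_validation_options`, `ttl`) are assumptions based on typical Cloud Posse conventions, so verify them against the module's README.

```hcl
# Hypothetical usage sketch -- input names are assumptions, not confirmed by this catalog.
module "acm_request_certificate" {
  source = "cloudposse/acm-request-certificate/aws"
  # version = "x.y.z" # pin to a released version in real use

  domain_name                       = "example.com"     # domain to issue the certificate for
  subject_alternative_names         = ["*.example.com"] # optional SANs
  process_domain_validation_options = true              # create the validation CNAME records in Route53
  ttl                               = 300               # TTL for the validation records
}
```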
alb
Terraform module to create an ALB, default ALB listener(s), a default ALB target group, and related security groups.
alb-ingress
Terraform module to provision an HTTP style ALB ingress based on hostname and/or path. ALB ingress can be provisioned without authentication, or using Cognito or OIDC authentication.
alb-target-group-cloudwatch-sns-alarms
Terraform module for creating alarms for tracking important changes and occurrences from ALBs.
amplify-app
Terraform module to provision AWS Amplify apps, backend environments, branches, domain associations, and webhooks.
api-gateway
1 item
athena
Terraform module to deploy an instance of [Amazon Athena](https://aws.amazon.com/athena/) on AWS.
backup
Terraform module to provision [AWS Backup](https://aws.amazon.com/backup), a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services such as Amazon EBS volumes, Amazon EC2 instances, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, and AWS Storage Gateway volumes. **NOTE**: the syntax for declaring a backup schedule changed as of release 0.14.0; follow the instructions in the [0.13.x to 0.14.x+ migration guide](https://github.com/cloudposse/terraform-aws-backup/tree/main/docs/migration-0.13.x-0.14.x+.md).
bossy
This is an example project to provide all the scaffolding for a typical well-built Cloud Posse Terraform module for AWS resources. It's a template repository you can use when creating new repositories. This is not a useful module by itself.
budgets
Terraform module to create [AWS Budgets](https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-managing-costs.html) and an associated SNS topic and Lambda function to send notifications to Slack.
cicd
Terraform module to create AWS [`CodePipeline`](https://aws.amazon.com/codepipeline/) with [`CodeBuild`](https://aws.amazon.com/codebuild/) for [`CI/CD`](https://en.wikipedia.org/wiki/CI/CD). This module supports three use-cases:

1. **GitHub -> S3 (build artifact) -> Elastic Beanstalk (running application stack)**. The module gets the code from a `GitHub` repository (public or private), builds it by executing the `buildspec.yml` file from the repository, pushes the built artifact to an S3 bucket, and deploys the artifact to `Elastic Beanstalk` running one of the supported stacks (_e.g._ `Java`, `Go`, `Node`, `IIS`, `Python`, `Ruby`, etc.).
   - http://docs.aws.amazon.com/codebuild/latest/userguide/sample-maven-5m.html
   - http://docs.aws.amazon.com/codebuild/latest/userguide/sample-nodejs-hw.html
   - http://docs.aws.amazon.com/codebuild/latest/userguide/sample-go-hw.html
2. **GitHub -> ECR (Docker image) -> Elastic Beanstalk (running Docker stack)**. The module gets the code from a `GitHub` repository, builds a `Docker` image from it by executing the `buildspec.yml` and `Dockerfile` files from the repository, pushes the `Docker` image to an `ECR` repository, and deploys the `Docker` image to `Elastic Beanstalk` running a `Docker` stack.
   - http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
3. **GitHub -> ECR (Docker image)**. The module gets the code from a `GitHub` repository, builds a `Docker` image from it by executing the `buildspec.yml` and `Dockerfile` files from the repository, and pushes the `Docker` image to an `ECR` repository. This is used when we want to build a `Docker` image from the code and push it to `ECR` without deploying to `Elastic Beanstalk`. To activate this mode, don't specify the `app` and `env` attributes for the module.
   - http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
cloudformation-stack
Terraform module to provision a CloudFormation Stack.
cloudformation-stack-set
Terraform module to provision a CloudFormation StackSet and an Administrator IAM role.
cloudfront-cdn
Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin (e.g. website) and [ships logs to a bucket](https://github.com/cloudposse/terraform-aws-log-storage). If you need to accelerate an S3 bucket, we suggest using [`terraform-aws-cloudfront-s3-cdn`](https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn) instead.
cloudfront-s3-cdn
1 item
cloudtrail
Terraform module to provision an AWS [CloudTrail](https://aws.amazon.com/cloudtrail/). The module accepts an encrypted S3 bucket with versioning to store CloudTrail logs. The bucket could be from the same AWS account or from a different account. This is useful if an organization uses a number of separate AWS accounts to isolate the Audit environment from other environments (production, staging, development). In this case, you create CloudTrail in the production environment (production AWS account), while the S3 bucket to store the CloudTrail logs is created in the Audit AWS account, restricting access to the logs only to the users/groups from the Audit account.
cloudtrail-cloudwatch-alarms
Terraform module for creating alarms for tracking important changes and occurrences from CloudTrail. This module creates a set of filter metrics and alarms based on the security best practices covered in the [AWS CIS Foundations Benchmark](https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf) guide.
cloudtrail-s3-bucket
Terraform module to provision an S3 bucket with a built-in policy to allow [CloudTrail](https://aws.amazon.com/cloudtrail/) [logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html). This is useful if an organization uses a number of separate AWS accounts to isolate the Audit environment from other environments (production, staging, development). In this case, you create CloudTrail in the production environment (Production AWS account), while the S3 bucket to store the CloudTrail logs is created in the Audit AWS account, restricting access to the logs only to the users/groups from the Audit account. The module supports the following:

1. Forced [server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) at rest for the S3 bucket
2. S3 bucket [versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) to easily recover from both unintended user actions and application failures
3. The S3 bucket is protected from deletion if it's not empty ([force_destroy](https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#force_destroy) set to `false`)
cloudwatch-agent
Terraform module to install the CloudWatch agent on EC2 instances using `cloud-init`.
cloudwatch-events
The `terraform-aws-cloudwatch-events` module creates CloudWatch Events rules and their corresponding targets.

> Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.
cloudwatch-flow-logs
Terraform module for enabling [`flow logs`](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html) for `vpc` and `subnets`.
cloudwatch-logs
Terraform module for creation of CloudWatch Log Streams and Log Groups. Useful in combination with Fluentd/Fluent-bit for shipping logs.
code-deploy
Terraform module to provision an AWS CodeDeploy application and deployment group.
codebuild
Terraform module to create an AWS CodeBuild project for AWS CodePipeline.
codefresh-backing-services
Terraform module to provision [CodeFresh Enterprise](https://codefresh.io/enterprise/) backing services
config
1 item
config-storage
This module creates an S3 bucket suitable for storing `AWS Config` data. It implements a configurable log retention policy, which allows you to efficiently manage logs across different storage classes (_e.g._ `Glacier`) and ultimately expire the data altogether. It enables [server-side default encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html) and [blocks public access to the bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html) by default.
datadog-integration
Terraform module to configure [Datadog AWS integration](https://docs.datadoghq.com/api/v1/aws-integration/).
datadog-lambda-forwarder
Terraform module to provision all the necessary infrastructure to deploy [Datadog Lambda forwarders](https://github.com/DataDog/datadog-serverless-functions/tree/master/aws/logs_monitoring)
dms
1 item
documentdb-cluster
Terraform module to provision an [`Amazon DocumentDB`](https://aws.amazon.com/documentdb/) cluster.
dynamic-subnets
Terraform module to provision public and private [`subnets`](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) in an existing [`VPC`](https://aws.amazon.com/vpc).

__Note:__ This module is intended for use with an existing VPC and existing Internet Gateway. To create a new VPC, use the [terraform-aws-vpc](https://github.com/cloudposse/terraform-aws-vpc) module.

__Note:__ Due to Terraform [limitations](https://github.com/hashicorp/terraform/issues/26755#issuecomment-719103775), many optional inputs to this module are specified as a `list(string)` that can have zero or one element, rather than as a `string` that could be empty or `null`. The designation of an input as a `list` type does not necessarily mean that you can supply more than one value in the list, so check the input's description before supplying more than one value.

The core function of this module is to create 2 sets of subnets, a "public" set with bidirectional access to the public internet, and a "private" set behind a firewall with egress-only access to the public internet. This includes dividing up a given CIDR range so that each subnet gets its own distinct CIDR range within that range, and then creating those subnets in the appropriate availability zones. The intention is to keep this module relatively simple and easy to use for the most popular use cases. In its default configuration, this module creates 1 public subnet and 1 private subnet in each of the specified availability zones. The public subnets are configured for bidirectional traffic to the public internet, while the private subnets are configured for egress-only traffic to the public internet. Rather than provide a wealth of configuration options allowing for numerous special cases, this module provides some common options and further provides the ability to suppress the creation of resources, allowing you to create and configure them as you like from outside this module. For example, rather than give you the option to customize the Network ACL, the module gives you the option to create a completely open one (and control access via Security Groups and other means) or not create one at all, allowing you to create and configure one yourself.

### Public subnets

This module defines a public subnet as one that has direct access to an internet gateway and can accept incoming connection requests. In the simplest configuration, the module creates a single route table with a default route targeted to the VPC's internet gateway, and associates all the public subnets with that single route table. Likewise it creates a single Network ACL with associated rules allowing all ingress and all egress, and associates that ACL with all the public subnets.

### Private subnets

A private subnet may be able to initiate traffic to the public internet through a NAT gateway, a NAT instance, or an egress-only internet gateway, or it might only have direct access to other private subnets. In the simple configuration, for IPv4 and/or IPv6 with NAT64 enabled via `public_dns64_enabled` or `private_dns64_enabled`, the module creates 1 NAT Gateway or NAT Instance for each private subnet (in the public subnet in the same availability zone), creates 1 route table for each private subnet, and adds to that route table a default route from the subnet to its NAT Gateway or Instance. For IPv6, the module adds a route to the Egress-Only Internet Gateway configured via input. As with the public subnets, the module creates a single Network ACL with associated rules allowing all ingress and all egress, and associates that ACL with all the private subnets.

### Customization for special use cases

Various features are controlled by `bool` inputs with names ending in `_enabled`. By changing the default values, you can enable or disable creation of public subnets, private subnets, route tables, NAT gateways, NAT instances, or Network ACLs. So for example, you could use this module to create only private subnets and the open Network ACL, and then add your own route table associations to the subnets and route all non-local traffic to a Transit Gateway or VPN.

### CIDR allocation

For IPv4, you provide a CIDR and the module divides the address space into the largest CIDRs possible that are still small enough to accommodate `max_subnet_count` subnets of each enabled type (public or private). When `max_subnet_count` is left at the default `0`, it is set to the total number of availability zones in the region. Private subnets are allocated out of the first half of the reserved range, and public subnets are allocated out of the second half. For IPv6, you provide a `/56` CIDR and the module assigns `/64` subnets of that CIDR in consecutive order starting at zero. (You have the option of specifying a list of CIDRs instead.) As with IPv4, enough CIDRs are allocated to cover `max_subnet_count` private and public subnets (when both are enabled, which is the default), with the private subnets being allocated out of the lower half of the reservation and the public subnets allocated out of the upper half.
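A hedged usage sketch follows; the input names (`vpc_id`, `igw_id`, `ipv4_cidr_block`, `availability_zones`, `nat_gateway_enabled`) and their list-of-one shapes are assumptions drawn from the description above, so confirm them against the module's documentation.

```hcl
# Hypothetical sketch: one public and one private subnet per AZ in an existing VPC.
module "dynamic_subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # version = "x.y.z" # pin to a release in real use

  vpc_id              = "vpc-0123456789abcdef0"   # existing VPC
  igw_id              = ["igw-0123456789abcdef0"] # existing Internet Gateway (list of zero or one, per the note above)
  ipv4_cidr_block     = ["10.0.0.0/16"]           # CIDR range to carve subnets from
  availability_zones  = ["us-east-1a", "us-east-1b"]
  nat_gateway_enabled = true                      # one NAT Gateway per private subnet's AZ
}
```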
dynamodb
Terraform module to provision a DynamoDB table with autoscaling. The autoscaler scales the provisioned OPS for the DynamoDB table up or down based on the load. This module requires the [AWS Provider](https://github.com/terraform-providers/terraform-provider-aws) `>= 4.22.0`.
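For illustration, here is a hedged sketch of a table with autoscaling enabled; the input names (`hash_key`, `range_key`, `enable_autoscaler`, and the `autoscale_*` bounds) are assumptions to verify against the module's README.

```hcl
# Hypothetical sketch -- input names are assumptions.
module "dynamodb_table" {
  source = "cloudposse/dynamodb/aws"

  name      = "app-state"
  hash_key  = "id"         # partition key
  range_key = "created_at" # sort key

  enable_autoscaler            = true
  autoscale_min_read_capacity  = 5
  autoscale_max_read_capacity  = 100
  autoscale_read_target        = 50 # target utilization (%)
  autoscale_min_write_capacity = 5
  autoscale_max_write_capacity = 100
  autoscale_write_target       = 50
}
```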
dynamodb-autoscaler
Terraform module to provision a DynamoDB autoscaler. The autoscaler scales the provisioned OPS for a DynamoDB table up or down based on the load.
ec2-admin-server
Terraform Module for providing a server capable of running admin tasks. Use `terraform-aws-ec2-admin-server` to create and manage an admin instance.
ec2-ami-backup
This repo contains a Terraform module that creates two Lambda functions that create AMIs automatically at regular intervals. It is based on the code at <https://serverlesscode.com/post/lambda-schedule-ebs-snapshot-backups/> and <https://serverlesscode.com/post/lambda-schedule-ebs-snapshot-backups-2/>.
ec2-ami-snapshot
Terraform module to easily generate AMI snapshots to create replica instances
ec2-autoscale-group
Terraform module to provision an [Auto Scaling Group](https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html) and [Launch Template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) on AWS. The module also creates AutoScaling Policies and CloudWatch Metric Alarms to monitor CPU utilization on the EC2 instances and scale the number of instances in the AutoScaling Group up or down. If you don't want to use the provided functionality, or want to provide your own policies, disable it by setting the variable `autoscaling_policies_enabled` to `false`. At present, although you can set the created AutoScaling Policy type to any legal value, in practice [only `SimpleScaling` is supported](https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/issues/55). To use a `StepScaling` or `TargetTrackingScaling` policy, create it yourself and then pass it in the `alarm_actions` field of `custom_alarms`.
ec2-bastion-server
Terraform module to define a generic Bastion host with parameterized `user_data` and support for AWS SSM Session Manager for remote access with IAM authentication.
ec2-client-vpn
The `terraform-aws-ec2-client-vpn` project provides for ec2 client vpn infrastructure. AWS Client VPN is a managed client-based VPN service based on OpenVPN that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using [any OpenVPN-based VPN client](https://docs.aws.amazon.com/vpn/latest/clientvpn-user/connect-aws-client-vpn-connect.html).
ec2-instance
Terraform Module for provisioning a general purpose EC2 host.

Included features:
* Automatically create a Security Group
* Option to switch EIP attachment
* CloudWatch monitoring and automatic reboot if instance hangs
* Assume Role capability
ec2-instance-group
Terraform Module for providing N general purpose EC2 hosts. If you only need to provision a single EC2 instance, consider using the [terraform-aws-ec2-instance](https://github.com/cloudposse/terraform-aws-ec2-instance) module instead.

**IMPORTANT** This module by design does not provision an AutoScaling group. It was designed to provision a discrete number of instances suitable for running stateful services such as databases (e.g. Kafka, Redis, etc.).

Included features:
* Automatically create a Security Group
* Option to switch EIP attachment
* CloudWatch monitoring and automatic reboot if instance hangs
* Assume Role capability
ecr
Terraform module to provision an [`AWS ECR Docker Container registry`](https://aws.amazon.com/ecr/).
ecr-public
Terraform module to provision a public [`AWS ECR Docker Container registry`](https://docs.aws.amazon.com/AmazonECR/latest/public/public-repositories.html).
ecs-alb-service-task
Terraform module to create an ECS Service for a web app (task), and an ALB target group to route requests.
ecs-atlantis
A Terraform module for deploying [Atlantis](https://runatlantis.io) to an AWS ECS cluster.
ecs-cloudwatch-autoscaling
Terraform module for creating alarms for tracking important changes and occurrences from ECS Services.
ecs-cloudwatch-sns-alarms
Terraform module for creating alarms for tracking important changes and occurrences from ECS Services.
ecs-cluster
Terraform module to provision an [`ECS Cluster`](https://aws.amazon.com/ecs/) with a list of [`capacity providers`](https://docs.aws.amazon.com/AmazonECS/latest/userguide/cluster-capacity-providers.html). Supports [Amazon ECS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/userguide/fargate-capacity-providers.html) and [EC2 Autoscaling](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-auto-scaling.html) capacity providers.
ecs-codepipeline
Terraform Module for CI/CD with AWS Code Pipeline using GitHub webhook triggers and Code Build for ECS.
ecs-container-definition
Terraform module to generate well-formed JSON documents that are passed to the `aws_ecs_task_definition` Terraform resource as [container definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions).
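The sketch below shows how the generated JSON might be wired into an `aws_ecs_task_definition` resource; the module's input names (`container_name`, `container_image`, `port_mappings`) and the `json_map_encoded_list` output are assumptions to check against the module's README.

```hcl
# Hypothetical sketch: render one container definition and feed it to a task definition.
module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"

  container_name  = "app"
  container_image = "nginx:1.25"
  port_mappings = [{
    containerPort = 80
    hostPort      = 80
    protocol      = "tcp"
  }]
}

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  # The module output is assumed to be a JSON-encoded list suitable for container_definitions.
  container_definitions = module.container_definition.json_map_encoded_list
}
```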
ecs-web-app
A Terraform module which implements a web app on ECS and supporting AWS resources.
efs
Terraform module to provision an AWS [`EFS`](https://aws.amazon.com/efs/) Network File System. **NOTE**: Release `0.32.0` contains breaking changes. To preserve the SG, follow the instructions in the [0.30.1 to 0.32.x+ migration path](https://github.com/cloudposse/terraform-aws-efs/tree/main/docs/migration-0.30.1-0.32.x+.md).
efs-backup
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline. The workflow is simple:
* Periodically launch resource (EC2 instance) based on schedule
* Execute the shell command defined in the activity on the instance
* Sync data from Production EFS to S3 Bucket by using `aws-cli`
* The execution log of the activity is stored in `S3`
* Publish the success or failure of the activity to an `SNS` topic
* Automatically rotate the backups using `S3 lifecycle rule`
efs-cloudwatch-sns-alarms
Create a set of sane EFS CloudWatch alerts for monitoring the health of an EFS resource.

| area    | metric             | comparison operator | threshold    | rationale                                                          |
|---------|--------------------|---------------------|--------------|--------------------------------------------------------------------|
| Storage | BurstCreditBalance | `<`                 | 192000000000 | 192 GB in Bytes (last hour where you can burst at 100 MB/sec)      |
| Storage | PercentIOLimit     | `>`                 | 95           | When the IO limit has been exceeded, the system performance drops. |
eks-cluster
Terraform module to provision an [EKS](https://aws.amazon.com/eks/) cluster on AWS.
eks-fargate-profile
Terraform module to provision an [AWS Fargate Profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html) and Fargate Pod Execution Role for [EKS](https://aws.amazon.com/eks/).
eks-iam-role
This `terraform-aws-eks-iam-role` project provides a simplified mechanism for provisioning [AWS EKS Service Account IAM roles](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).
eks-node-group
Terraform module to provision an EKS Node Group for [Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/). Instantiate it multiple times to create many EKS node groups with specific settings such as GPUs, EC2 instance types, or autoscale parameters. **IMPORTANT:** This module provisions `EKS Node Group` nodes that are globally accessible via SSH (port 22). Normally, AWS recommends that no security group allow unrestricted ingress access to port 22.
eks-spotinst-ocean-nodepool
This `terraform-aws-eks-spotinst-ocean-nodepool` module provides the scaffolding for provisioning a [Spotinst](https://spot.io/) [Ocean](https://spot.io/products/ocean/) connected to an AWS EKS cluster.
eks-workers
Terraform module to provision AWS resources to run EC2 worker nodes for [Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/). Instantiate it multiple times to create many EKS worker node pools with specific settings such as GPUs, EC2 instance types, or autoscale parameters.
elastic-beanstalk-application
Terraform module to provision AWS Elastic Beanstalk application
elastic-beanstalk-environment
Terraform module to provision an AWS Elastic Beanstalk environment.

## Searching for Maintainer!

The Cloud Posse team no longer utilizes Beanstalk all that much, but this module is still fairly popular. In an effort to give it the attention it deserves, we're searching for a volunteer maintainer to manage this specific repository's issues and pull requests (of which a number are already stacked up). This is a great opportunity for anyone who is looking to solidify and strengthen their Terraform skillset while also giving back to the SweetOps open source community! [You can learn more about being a SweetOps contributor on our docs site here](https://docs.cloudposse.com/community/contributors/). If you're interested, reach out to us via the `#terraform` channel in [the SweetOps Slack](https://slack.sweetops.com/) or directly [via email @ [email protected]](mailto:[email protected]).
elasticache-memcached
Terraform module to provision an [`ElastiCache`](https://aws.amazon.com/elasticache/) Memcached Cluster
elasticache-redis
Terraform module to provision an [`ElastiCache`](https://aws.amazon.com/elasticache/) Redis Cluster
elasticsearch
Terraform module to provision an [`Elasticsearch`](https://aws.amazon.com/elasticsearch-service/) cluster with built-in integrations with [Kibana](https://aws.amazon.com/elasticsearch-service/kibana/) and [Logstash](https://aws.amazon.com/elasticsearch-service/logstash/).
emr-cluster
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS.
firewall-manager
Terraform module to create and manage AWS Firewall Manager policies.
github-action-token-rotator
This module deploys a [lambda function](https://github.com/cloudposse/lambda-github-action-token-rotator) that runs as a GitHub Application and periodically gets a new GitHub Runner Registration Token from the GitHub API. This token is then stored in AWS Systems Manager Parameter Store.
global-accelerator
1 item
glue
1 item
guardduty
This module enables AWS GuardDuty in one region of one account and optionally sets up an SNS topic to receive notifications of its findings.
health-events
This module creates EventBridge (formerly CloudWatch Events) rules for AWS Personal Health Dashboard events and an SNS topic. EventBridge publishes messages to this SNS topic, which can also be subscribed to using this module. Although AWS Personal Health Dashboard is a global service, the KMS key and SNS topic are regional, so this module is technically regional but only needs to be deployed once per account.
helm-release
This `terraform-aws-helm-release` module deploys a [Helm chart](https://helm.sh/docs/topics/charts/) with an option to create an EKS IAM Role for a Service Account ([IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)).
iam-account-settings
Terraform module to provision general IAM account settings. It will create the IAM account alias for pretty login URLs and set the account password policy.
iam-assumed-roles
Terraform module to provision two IAM roles and two IAM groups for assuming the roles provided MFA is present, and add IAM users to the groups.

- Role and group with Administrator (full) access to AWS resources
- Role and group with Readonly access to AWS resources

To give a user administrator access, add the user to the admin group. To give a user readonly access, add the user to the readonly group.
iam-chamber-s3-role
Terraform module to provision an IAM role with configurable permissions to access [S3 Bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) used by Chamber as [Parameter Store backend](https://github.com/segmentio/chamber#s3-backend-experimental).
iam-chamber-user
Terraform module to provision a basic IAM [chamber](https://github.com/segmentio/chamber) user with access to SSM parameters and KMS key to decrypt secrets, suitable for CI/CD systems (_e.g._ TravisCI, CircleCI, CodeFresh) or systems which are *external* to AWS that cannot leverage [AWS IAM Instance Profiles](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html). We do not recommend creating IAM users this way for any other purpose.
iam-policy
This `terraform-aws-iam-policy` module is a wrapper around the Terraform [aws_iam_policy_document](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) data source, enhancing it to provide multiple ways to create an AWS IAM Policy document (as a JSON string). It is primarily intended to simplify creating a policy in Terraform from external inputs. In particular, if you want to specify a policy in a `tfvars` file as a Terraform object, or in YAML as part of an [Atmos](https://atmos.tools/) stack (which is then turned into a Terraform object input), this module provides an object type declaration to use for the input and then it can make the translation to JSON for you. If you can supply the policy as JSON to begin with, or conveniently use the `aws_iam_policy_document` Terraform data source directly, then this module is not helpful in your case. NOTE: AWS's IAM policy document syntax allows for replacement of policy variables within a statement using `${...}`-style notation, which conflicts with Terraform's interpolation syntax. In order to use AWS policy variables with this module, use `&{...}` notation for interpolations that should be processed by AWS rather than by Terraform. Nevertheless, any `${...}`-style notations that appear in strings passed into this module (somehow escaping Terraform interpolation earlier) will be passed through to the policy document unchanged.
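To make the `&{...}` escaping concrete, here is a small example using the underlying `aws_iam_policy_document` data source directly (not this module's own input schema, which you should look up in its README):

```hcl
# The &{aws:username} below is passed through to AWS as ${aws:username},
# so the policy variable is evaluated by IAM rather than interpolated by Terraform.
data "aws_iam_policy_document" "per_user_prefix" {
  statement {
    effect    = "Allow"
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::example-bucket"]

    condition {
      test     = "StringLike"
      variable = "s3:prefix"
      values   = ["home/&{aws:username}/*"]
    }
  }
}
```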
iam-policy-document-aggregator
Terraform module to aggregate multiple IAM policy documents into a single policy document.

**NOTE:** This module is now deprecated due to new functionality in the Terraform AWS Provider. See below for migration steps.

Now that the AWS provider supports the `override_policy_documents` argument on the `aws_iam_policy_document` data source, this module is no longer necessary. All code using this module can be migrated to natively use the `aws_iam_policy_document` data source by making the following change:

```hcl
# Previous module usage:
module "aggregated_policy" {
  source  = "cloudposse/iam-policy-document-aggregator/aws"
  version = "0.8.0"

  source_documents = [
    data.aws_iam_policy_document.base.json,
    data.aws_iam_policy_document.resource_full_access.json
  ]
}
```

Replace the above with:

```hcl
data "aws_iam_policy_document" "aggregated" {
  override_policy_documents = [
    data.aws_iam_policy_document.base.json,
    data.aws_iam_policy_document.resource_full_access.json
  ]
}
```

Then update your references to `module.aggregated_policy.result_document` with `data.aws_iam_policy_document.aggregated.json`. Please see the discussion in #31 for further details.
iam-role
A Terraform module that creates an IAM role with the provided JSON IAM policy documents.

#### Warning

* If `var.enabled` is set to `false`, the module can be used as an [IAM Policy Document Aggregator](https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator) because [`output.policy`](https://github.com/cloudposse/terraform-aws-iam-role/tree/init#outputs) always aggregates [`var.policy_documents`](https://github.com/cloudposse/terraform-aws-iam-role/tree/init#inputs)
* The size of the [`var.policy_documents`](https://github.com/cloudposse/terraform-aws-iam-role/tree/init#inputs) list is [limited to 10](https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator#inputs)
iam-s3-user
Terraform module to provision a basic IAM user with permissions to access S3 resources, e.g. to give the user read/write/delete access to the objects in an S3 bucket. Suitable for CI/CD systems (_e.g._ TravisCI, CircleCI) or systems which are *external* to AWS that cannot leverage [AWS IAM Instance Profiles](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) or [AWS OIDC](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). By default, IAM users, groups, and roles have no access to AWS resources. IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that IAM policies be applied directly to groups and roles but not users. **This module intentionally attaches an IAM policy directly to the user and does not use groups** The IAM user name is constructed using [terraform-null-label](https://github.com/cloudposse/terraform-null-label) and some input is required. The simplest input is `name`. By default the name will be converted to lower case and all non-alphanumeric characters except for hyphen will be removed. See the documentation for `terraform-null-label` to learn how to override these defaults if desired. If an AWS Access Key is created, it is stored either in SSM Parameter Store or is provided as a module output, but not both. Using SSM Parameter Store is recommended because module outputs are stored in plaintext in the Terraform state file.
iam-system-user
Terraform Module to provision a basic IAM system user suitable for CI/CD Systems (_e.g._ TravisCI, CircleCI) or systems which are *external* to AWS that cannot leverage [AWS IAM Instance Profiles](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) or [AWS OIDC](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). We do not recommend creating IAM users this way for any other purpose. By default, IAM users, groups, and roles have no access to AWS resources. IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that IAM policies be applied directly to groups and roles but not users. **This module intentionally attaches an IAM policy directly to the user and does not use groups** The IAM user name is constructed using [terraform-null-label](https://github.com/cloudposse/terraform-null-label) and some input is required. The simplest input is `name`. By default the name will be converted to lower case and all non-alphanumeric characters except for hyphen will be removed. See the documentation for `terraform-null-label` to learn how to override these defaults if desired. If an AWS Access Key is created, it is stored either in SSM Parameter Store or is provided as a module output, but not both. Using SSM Parameter Store is recommended because module outputs are stored in plaintext in the Terraform state file.
iam-user
Terraform Module to provision a basic IAM user suitable for humans. It will establish a login profile and associate the user with IAM groups. We do not recommend creating IAM users for any other purpose. For external systems (e.g. CI/CD) check out our [`terraform-aws-iam-system-user`](https://github.com/cloudposse/terraform-aws-iam-system-user) module.
inspector
This module enables [AWS Inspector](https://aws.amazon.com/inspector/) in one region of one account and optionally enables [various rules packages provided by AWS](https://docs.aws.amazon.com/inspector/latest/userguide/inspector_rules-arns.html).
jenkins
`terraform-aws-jenkins` is a Terraform module to build a Docker image with [Jenkins](https://jenkins.io/), save it to an [ECR](https://aws.amazon.com/ecr/) repo, and deploy to [Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/) running [Docker](https://www.docker.com/). This is an enterprise-ready, scalable and highly-available architecture and the CI/CD pattern to build and deploy Jenkins.

## Features

The module will create the following AWS resources:
* Elastic Beanstalk Application
* Elastic Beanstalk Environment with Docker stack to run the Jenkins master
* ECR repository to store the Jenkins Docker image
* EFS filesystem to store Jenkins config and jobs (it will be mounted to a directory on the EC2 host, and then to the Docker container)
* AWS Backup stack to automatically backup the EFS
* CodePipeline with CodeBuild to build and deploy Jenkins so even Jenkins itself follows the CI/CD pattern

After all of the AWS resources are created, __CodePipeline__ will:
* Get the specified Jenkins repo from GitHub, _e.g._ https://github.com/cloudposse/jenkins
* Build a Docker image from it
* Save the Docker image to the ECR repo
* Deploy the Docker image from the ECR repo to Elastic Beanstalk running Docker stack
* Monitor the GitHub repo for changes and re-run the steps above if new commits are pushed
key-pair
Terraform module for generating or importing an SSH public key file into AWS.
kinesis-stream
Terraform module to deploy an [Amazon Kinesis Data Stream](https://aws.amazon.com/kinesis/data-streams/) on AWS.
kms-key
Terraform module to provision a [KMS](https://aws.amazon.com/kms/) key with alias. Can be used with [chamber](https://github.com/segmentio/chamber) for managing secrets by storing them in Amazon EC2 Systems Manager Parameter Store. * https://aws.amazon.com/systems-manager/features * https://aws.amazon.com/blogs/mt/the-right-way-to-store-secrets-using-parameter-store
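A hedged usage sketch for a chamber-oriented key follows; the input names (`description`, `deletion_window_in_days`, `enable_key_rotation`, `alias`) and the `alias/parameter_store_key` alias (chamber's conventional default) are assumptions to verify before use.

```hcl
# Hypothetical sketch of a KMS key for chamber-managed SSM Parameter Store secrets.
module "chamber_kms_key" {
  source = "cloudposse/kms-key/aws"

  name                    = "chamber"
  description             = "KMS key for chamber-managed SSM Parameter Store secrets"
  deletion_window_in_days = 10
  enable_key_rotation     = true
  alias                   = "alias/parameter_store_key" # assumed chamber default alias
}
```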
kops-atlantis
Terraform module to provision an IAM role for `atlantis` running in a Kops cluster, and attach an IAM policy to the role with permissions to modify infrastructure.

## Overview

This module assumes you are running [atlantis](https://runatlantis.io) in a Kops cluster. We recommend using it together with [`kiam`](https://github.com/uswitch/kiam) to permit pods to assume roles. It will provision an IAM role with the required permissions and grant the Kops master nodes the permission to assume it. This is useful to provision AWS resources from Kubernetes using a GitOps style workflow. The module uses [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) to lookup resources within a Kops cluster for easier integration with Terraform.
kops-aws-alb-ingress
Terraform module to provision an IAM role for [`aws-alb-ingress-controller`](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) running in a Kops cluster, and attach an IAM policy to the role with permissions to manage Application Load Balancers.

## Overview

This module assumes you are running [aws-alb-ingress-controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) in a Kops cluster. It will provision an IAM role with the required permissions and grant the Kubernetes servers the permission to assume it. This is useful to run on Kubernetes Ingress backed by AWS ALB. The module uses [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) to lookup resources within a Kops cluster for easier integration with Terraform.
kops-chart-repo
Terraform module to provision an S3 bucket for [Helm](https://helm.sh/) chart repository, and an IAM role and policy with permissions for Kops nodes to access the bucket. The module uses [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) to lookup resources within a Kops cluster for easier integration with Terraform.
kops-data-iam
Terraform module to lookup IAM roles within a [Kops](https://github.com/kubernetes/kops) cluster
kops-data-instance-groups
Terraform module to get Auto Scaling Groups that are instance groups created with [Kops](https://github.com/kubernetes/kops)
kops-data-network
Terraform module to lookup network resources within a [Kops](https://github.com/kubernetes/kops) cluster
kops-ecr
Terraform module to provision an ECR repository and grant users and Kubernetes nodes access to it.

## Overview

The module uses [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) to lookup resources within a Kops cluster for easier integration with Terraform.
kops-efs
Terraform module to provision an EFS cluster and IAM role for `efs-provider` running in a Kops cluster, and attach an IAM policy to the role with permissions to mount/modify EFS.

## Overview

This module assumes you are running [efs-provider](https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs) in a Kops cluster. It will provision an EFS cluster and IAM role with the required permissions and grant the Kops pods the permission to assume it. This is useful to mount EFS targets into running Kubernetes pods. The module uses [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) to lookup resources within a Kops cluster for easier integration with Terraform.
kops-external-dns
Terraform module to provision an IAM role for `external-dns` running in a Kops cluster, and attach an IAM policy to the role with permissions to modify Route53 record sets.

## Overview

This module assumes you are running [external-dns](https://github.com/kubernetes-incubator/external-dns) in a Kops cluster. It will provision an IAM role with the required permissions and grant the Kops masters the permission to assume it. This is useful to make Kubernetes services discoverable via AWS DNS services. The module uses [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) to lookup resources within a Kops cluster for easier integration with Terraform.
kops-iam-authenticator-config
Terraform module to create and apply a [`Kubernetes`](https://kubernetes.io/) ConfigMap to map AWS IAM roles to Kubernetes users/groups. This will configure clusters managed by [`kops`](https://github.com/kubernetes/kops) to use [`aws-iam-authenticator`](https://github.com/kubernetes-sigs/aws-iam-authenticator), allowing you to use AWS IAM credentials to authenticate to a Kubernetes cluster.
kops-metadata
Terraform module to lookup resources within a [Kops](https://github.com/kubernetes/kops) cluster
kops-route53
Terraform module to lookup an IAM role associated with `kops` masters, and attach an IAM policy to the role with permissions to modify Route53 record sets. It provides the IAM permissions needed by [route53-kubernetes](https://github.com/cloudposse/route53-kubernetes) for `kops`. This is useful to make Kubernetes services discoverable via AWS DNS services.
kops-state-backend
Terraform module to provision dependencies for `kops` (config S3 bucket & DNS zone). The module supports the following:
1. Forced server-side encryption at rest for the S3 bucket
2. S3 bucket versioning to allow for `kops` state recovery in the case of accidental deletions or human errors
3. Blocking public access at the bucket level by default
kops-vault-backend
Terraform module to provision an S3 bucket for [HashiCorp Vault](https://www.hashicorp.com/products/vault) secrets storage, and an IAM role and policy with permissions for Kops nodes to access the bucket. The module uses [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) to lookup resources within a Kops cluster for easier integration with Terraform.
kops-vpc-peering
Terraform module to create a peering connection between a backing services VPC and a VPC created by [Kops](https://github.com/kubernetes/kops). The module depends on the following [Cloud Posse](https://cloudposse.com) Terraform modules:
- [terraform-aws-kops-metadata](https://github.com/cloudposse/terraform-aws-kops-metadata) - to lookup resources within a Kops cluster
- [terraform-aws-vpc-peering](https://github.com/cloudposse/terraform-aws-vpc-peering) - to create a peering connection between two VPCs
lakeformation
Terraform module to deploy an instance of [Amazon Lake Formation](https://aws.amazon.com/lake-formation/) on AWS.
lambda-elasticsearch-cleanup
Terraform module to provision a scheduled Lambda function which will delete old Elasticsearch indexes using [SigV4Auth](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) authentication. The lambda function can optionally send output to an SNS topic if the topic ARN is given. This module was largely inspired by [aws-lambda-es-cleanup](https://github.com/cloudreach/aws-lambda-es-cleanup)
lambda-function
This module deploys an AWS Lambda function from a Zip file or from a Docker image. Additionally, it creates an IAM role for the Lambda function and optionally attaches policies to allow for CloudWatch Logs, CloudWatch Insights, VPC access, and X-Ray tracing.
lb-s3-bucket
Terraform module to provision an S3 bucket with built in IAM policy to allow [AWS Load Balancers](https://aws.amazon.com/documentation/elastic-load-balancing/) to ship [access logs](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html).
macie
Terraform module to provision [Amazon Macie](https://aws.amazon.com/macie/) - a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS
mq-broker
Terraform module to provision Amazon MQ resources on AWS.
msk-apache-kafka-cluster
Terraform module to provision [Amazon Managed Streaming](https://aws.amazon.com/msk/) for [Apache Kafka](https://aws.amazon.com/msk/what-is-kafka/). __Note:__ this module is intended for use with an existing VPC. To create a new VPC, use the [terraform-aws-vpc](https://github.com/cloudposse/terraform-aws-vpc) module. **NOTE**: Release `0.8.0` contains breaking changes that will result in the destruction of your existing MSK cluster. To preserve the original cluster, follow the instructions in the [0.7.x to 0.8.x+ migration path](https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/tree/main/docs/migration-0.7.x-0.8.x+.md).
multi-az-subnets
Terraform module for multi-AZ [`subnets`](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) provisioning. The module creates private and public subnets in the provided Availability Zones. The public subnets are routed to the Internet Gateway specified by `var.igw_id`. `nat_gateway_enabled` flag controls the creation of NAT Gateways in the public subnets. The private subnets are routed to the NAT Gateways provided in the `var.az_ngw_ids` map. If you are creating subnets inside a VPC, consider using [cloudposse/terraform-aws-dynamic-subnets](https://github.com/cloudposse/terraform-aws-dynamic-subnets) instead.
mwaa
Terraform module to provision Amazon Managed Workflows for Apache Airflow
named-subnets
Terraform module for named [`subnets`](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) provisioning.
network-firewall
Terraform module to provision AWS Network Firewall resources.
nlb
Terraform module to create an NLB and a default NLB target and related security groups.
organization-access-group
Terraform module to create an IAM Group and Policy to grant permissions to delegated IAM users in the Organization's master account to access a member account https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html
organization-access-role
Terraform module to create an IAM Role to grant permissions to delegated IAM users in the master account to access an invited member account https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html
rds
Terraform module to provision AWS [`RDS`](https://aws.amazon.com/rds/) instances
rds-cloudwatch-sns-alarms
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic. It creates a set of sane RDS CloudWatch alerts for monitoring the health of an RDS instance.
rds-cluster
Terraform module to provision an [`RDS Aurora`](https://aws.amazon.com/rds/aurora) cluster for MySQL or Postgres. Supports [Amazon Aurora Serverless](https://aws.amazon.com/rds/aurora/serverless/).
rds-cluster-instance-group
Terraform module to provision an [`RDS Aurora`](https://aws.amazon.com/rds/aurora) instance group for MySQL or Postgres along with a dedicated endpoint. Use this module together with our [`terraform-aws-rds-cluster`](https://github.com/cloudposse/terraform-aws-rds-cluster) to provision pools of RDS instances. This is useful for creating reporting clusters that don't impact the production databases. Supports [Amazon Aurora Serverless](https://aws.amazon.com/rds/aurora/serverless/).
rds-db-proxy
Terraform module to provision an Amazon [RDS Proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) for MySQL or Postgres.
rds-replica
Terraform module to provision AWS [`RDS`](https://aws.amazon.com/rds/) replica instances. These are best suited for reporting purposes. **IMPORTANT** It is not possible to create a read replica for a DB Instance that belongs to an Aurora DB Cluster.
redshift-cluster
This `terraform-example-module` project provides all the scaffolding for a typical well-built Cloud Posse module. It's a template repository you can use when creating new repositories.
route53-alias
Terraform module that implements "vanity" host names (e.g. `brand.com`) as `ALIAS` records to another Route53 DNS resource record (e.g. ELB/ALB, S3 Bucket Endpoint or CloudFront Distribution). Unlike `CNAME` records, the synthetic `ALIAS` record works with zone apexes.
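A hedged sketch of pointing a vanity apex hostname at an ALB; the input names (`aliases`, `parent_zone_id`, `target_dns_name`, `target_zone_id`) are assumptions, and the IDs shown are placeholders.

```hcl
# Hypothetical sketch -- input names and IDs are assumptions/placeholders.
module "vanity_alias" {
  source = "cloudposse/route53-alias/aws"

  aliases         = ["brand.com"]    # zone-apex hostname
  parent_zone_id  = "Z2ABCDEFGHIJKL" # hosted zone that owns brand.com
  target_dns_name = "my-alb-1234567890.us-east-1.elb.amazonaws.com"
  target_zone_id  = "Z35SXDOTRQ7X7K" # the ALB's canonical hosted zone ID
}
```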
route53-cluster-hostname
Terraform module to define a consistent AWS Route53 hostname
route53-cluster-zone
Terraform module to easily define consistent cluster domains on `Route53`.
route53-resolver-dns-firewall
Terraform module to provision Route 53 Resolver DNS Firewall, domain lists, firewall rules, rule groups, and logging configurations.
s3-bucket
This module creates an S3 bucket with support for versioning, lifecycles, object locks, replication, encryption, ACL, bucket object policies, and static website hosting. For backward compatibility, it sets the S3 bucket ACL to `private` and the `s3_object_ownership` to `ObjectWriter`. Moving forward, setting `s3_object_ownership` to `BucketOwnerEnforced` is recommended, and doing so automatically disables the ACL.

This module blocks public access to the bucket by default. See `block_public_acls`, `block_public_policy`, `ignore_public_acls`, and `restrict_public_buckets` to change the settings. See the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html) for more details.

This module can optionally create an IAM User with access to the S3 bucket. This is inherently insecure in that to enable anyone to become the User, access keys must be generated, and anything generated by Terraform is stored unencrypted in the Terraform state. See the [Terraform documentation](https://www.terraform.io/docs/state/sensitive-data.html) for more details.

The best way to grant access to the bucket is to grant one or more IAM Roles access to the bucket via `privileged_principal_arns`. This IAM Role can be assumed by EC2 instances via their Instance Profile, or Kubernetes (EKS) services using [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). Entities outside of AWS can assume the Role via [OIDC](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). (See [this example of connecting GitHub](https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/) to enable GitHub Actions to assume AWS IAM roles, or use [this Cloud Posse component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/github-oidc-provider) if you are already using the Cloud Posse reference architecture.)

If neither of those approaches works, then as a last resort you can set `user_enabled = true` and this module will provision a basic IAM user with permissions to access the bucket. We do not recommend creating IAM users this way for any other purpose. If an IAM user is created, the IAM user name is constructed using [terraform-null-label](https://github.com/cloudposse/terraform-null-label) and some input is required. The simplest input is `name`. By default the name will be converted to lower case and all non-alphanumeric characters except for hyphen will be removed. See the documentation for `terraform-null-label` to learn how to override these defaults if desired.

If an AWS Access Key is created, it is stored either in SSM Parameter Store or is provided as a module output, but not both. Using SSM Parameter Store is recommended because that will keep the secret from being easily accessible via Terraform remote state lookup, but the key will still be stored unencrypted in the Terraform state in any case.
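A hedged sketch reflecting the recommendations above (enforced bucket-owner object ownership, role-based access instead of an IAM user); the `privileged_principal_arns` shape and the other input names are assumptions to confirm against the module's README.

```hcl
# Hypothetical sketch -- input names and the privileged_principal_arns shape are assumptions.
module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws"

  name                = "artifacts"
  versioning_enabled  = true
  s3_object_ownership = "BucketOwnerEnforced" # disables ACLs, per the note above

  # Grant an IAM role access to a prefix instead of creating an IAM user.
  privileged_principal_actions = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
  privileged_principal_arns = [
    { "arn:aws:iam::111111111111:role/app-role" = ["builds/"] }
  ]
}
```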
s3-log-storage
This module creates an S3 bucket suitable for receiving logs from other `AWS` services such as `S3`, `CloudFront`, and `CloudTrail`. This module implements a configurable log retention policy, which allows you to efficiently manage logs across different storage classes (_e.g._ `Glacier`) and ultimately expire the data altogether. It enables [default server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html). It [blocks public access to the bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html) by default. As of March 2022, this module is primarily a wrapper around our [s3-bucket](https://github.com/cloudposse/terraform-aws-s3-bucket) module, with some options preconfigured and SQS notifications added. If it does not exactly suit your needs, you may want to use the `s3-bucket` module directly. As of version 1.0 of this module, most of the inputs are marked `nullable = false`, meaning you can pass in `null` and get the default value rather than having the input be actually set to `null`. This is technically a breaking change from previous versions, but since `null` was not a valid value for most of these variables, we are not considering it a truly breaking change. However, be mindful that the behavior of inputs set to `null` may change in the future, so we recommend setting them to the desired value explicitly.
s3-website
## Deprecated

**As of July 2023, this module is deprecated.** `terraform-aws-s3-website` offers little value beyond [the `terraform-aws-s3-bucket` module](https://github.com/cloudposse/terraform-aws-s3-bucket), so Cloud Posse is phasing out support for this project. Users are advised to migrate to [terraform-aws-s3-bucket](https://github.com/cloudposse/terraform-aws-s3-bucket) to manage the S3 bucket (including logging) and [terraform-aws-route53-alias](https://github.com/cloudposse/terraform-aws-route53-alias) to register the website hostname in Route53. Feature requests should be directed to those modules.

Terraform module to provision S3-backed websites. **IMPORTANT:** This module provisions a globally accessible S3 bucket for unauthenticated users because it is designed for hosting public static websites. Normally, AWS recommends that S3 buckets not be publicly accessible in order to protect S3 data from unauthorized users.
security-group
Terraform module to create AWS Security Group and rules.
security-hub
Terraform module to deploy [AWS Security Hub](https://aws.amazon.com/security-hub/).
service-control-policies
Terraform module to provision Service Control Policies (SCP) for AWS Organizations, Organizational Units, and AWS accounts.
service-quotas
Terraform module to manage [AWS Service Quotas](https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html).
ses
Terraform module to provision Simple Email Service on AWS.
ses-lambda-forwarder
This is a Terraform module that creates an email forwarder using a combination of AWS SES and Lambda running the [aws-lambda-ses-forwarder](https://www.npmjs.com/package/aws-lambda-ses-forwarder) NPM module.
sns-cloudwatch-sns-alarms
Terraform module to provision CloudWatch alarms for SNS
sns-lambda-notify-slack
Terraform module to provision a lambda function that subscribes to SNS and notifies to Slack.
sns-topic
Terraform module to provision SNS topic
ssm-iam-role
Terraform module to provision an IAM role with configurable permissions to access [SSM Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html).
ssm-parameter-chamber-reader
Terraform module to read SSM parameters managed with Chamber.
ssm-parameter-store
Terraform module for providing read and write access to the AWS SSM Parameter Store.
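A hedged sketch of writing and reading parameters; the `parameter_write` and `parameter_read` input shapes are assumptions based on the module's typical interface, so check its README.

```hcl
# Hypothetical sketch -- input shapes are assumptions.
module "ssm_parameters" {
  source = "cloudposse/ssm-parameter-store/aws"

  parameter_write = [
    {
      name        = "/app/prod/db_host"
      value       = "db.internal.example.com"
      type        = "String"
      overwrite   = "true"
      description = "Production database hostname"
    }
  ]

  parameter_read = ["/app/prod/api_key"]
}
```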
ssm-parameter-store-policy-documents
This module generates JSON documents for restricted permission sets for AWS SSM Parameter Store access. Helpful when combined with [terraform-aws-ssm-parameter-store](https://github.com/cloudposse/terraform-aws-ssm-parameter-store)
ssm-patch-manager
This module provisions AWS SSM Patch Manager maintenance window tasks, targets, patch baselines, patch groups, and an S3 bucket for storing patch task logs.
ssm-tls-self-signed-cert
This module creates a self-signed certificate and writes it, along with its key, to SSM Parameter Store (or alternatively AWS Secrets Manager).
ssm-tls-ssh-key-pair
Terraform module that provisions an SSH TLS key pair and writes it to SSM Parameter Store. This is useful for bot accounts (e.g. for GitHub). Easily rotate SSH secrets by simply tainting the module resource and reapplying.
sso
1 item
step-functions
Terraform module to provision [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html).
teleport-storage
This Terraform module provisions:
* An S3 bucket for session logs in Gravitational [Teleport](https://gravitational.com/teleport)
* 2 DynamoDB tables to use as the storage backend in Teleport

## Features

Using DynamoDB as a storage backend allows highly available deployments of Teleport Auth services. Using S3 for Teleport session storage has many advantages:
* Encryption at rest
* Versioned objects
* Lifecycle support to expunge old sessions (e.g. after 2 years)
* Extreme availability & durability
* Zero maintenance
* Glacier
* Cross Region Replication
* The S3 bucket can be owned by a tamper-proof AWS Audit Account
* Easily prevent deletions
* Audit trails and access logs via CloudTrail
test-module
This `terraform-example-module` project provides all the scaffolding for a typical well-built Cloud Posse module. It's a template repository you can use when creating new repositories.
tfstate-backend
Terraform module to provision an S3 bucket to store the `terraform.tfstate` file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. The module supports the following:
1. Forced server-side encryption at rest for the S3 bucket
2. S3 bucket versioning to allow for Terraform state recovery in the case of accidental deletions and human errors
3. State locking and consistency checking via DynamoDB table to prevent concurrent operations
4. DynamoDB server-side encryption

See https://www.terraform.io/docs/backends/types/s3.html for details on the S3 backend.

__NOTE:__ The operators of the module (IAM Users) must have permissions to create S3 buckets and DynamoDB tables when performing `terraform plan` and `terraform apply`.

__NOTE:__ This module cannot be used to apply changes to the `mfa_delete` feature of the bucket. Changes regarding `mfa_delete` can only be made manually using the root credentials with MFA of the AWS Account where the bucket resides. Please see https://github.com/terraform-providers/terraform-provider-aws/issues/62.
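Once the bucket and lock table exist, a standard `backend "s3"` configuration consumes them; the bucket and table names below are placeholders for whatever this module creates in your account.

```hcl
# Placeholder names -- substitute the bucket and DynamoDB table provisioned by this module.
terraform {
  backend "s3" {
    bucket         = "acme-prod-terraform-state"      # S3 bucket holding terraform.tfstate
    key            = "global/terraform.tfstate"       # path to the state object
    region         = "us-east-1"
    dynamodb_table = "acme-prod-terraform-state-lock" # lock table to prevent concurrent runs
    encrypt        = true                             # server-side encryption of the state object
  }
}
```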
transfer-sftp
This `terraform-aws-transfer-sftp` project provides all the scaffolding for a typical well-built Cloud Posse module. It's a template repository you can use when creating new repositories.
transit-gateway
Terraform module to provision:
- [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/)
- [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) Resource Share to share the Transit Gateway with the Organization or another AWS Account (configurable via the variables `ram_resource_share_enabled` and `ram_principal`)
- [Transit Gateway route table](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html)
- [Transit Gateway VPC attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpc-attachments.html) to connect multiple VPCs via the Transit Gateway
- Transit Gateway route table propagations to create propagated routes and allow traffic from the Transit Gateway to the VPC attachments
- Transit Gateway route table associations to allow traffic from the VPC attachments to the Transit Gateway
- Transit Gateway static routes (static routes have a higher precedence than propagated routes)
- Subnet routes to route traffic from the subnets in each VPC to the other Transit Gateway VPC attachments
utils
This `terraform-aws-utils` module provides some simple utilities to use when working in AWS.
vpc
1 item
vpc-flow-logs-s3-bucket
Terraform module to create AWS [`VPC Flow logs`](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) backed by S3.
vpc-peering
Terraform module to create a peering connection between two VPCs
vpc-peering-multi-account
Terraform module to create a peering connection between any two VPCs existing in different AWS accounts. This module supports performing this action from a 3rd account (e.g. a "root" account) by specifying the roles to assume for each member account. **IMPORTANT:** AWS allows a multi-account VPC Peering Connection to be deleted from either the requester's or accepter's side. However, Terraform only allows the VPC Peering Connection to be deleted from the requester's side by removing the corresponding `aws_vpc_peering_connection` resource from your configuration. [Read more about this](https://www.terraform.io/docs/providers/aws/r/vpc_peering_accepter.html) on Terraform's documentation portal.
vpn-connection
Terraform module to provision a [site-to-site](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) [VPN connection](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html) between a VPC and an on-premises network. The module does the following:
- Creates a Virtual Private Gateway (VPG) and attaches it to the VPC
- Creates a Customer Gateway (CGW) pointing to the provided IP address of the Internet-routable external interface on the on-premises network
- Creates a Site-to-Site Virtual Private Network (VPN) connection and assigns it to the VPG and CGW
- Requests automatic route propagation between the VPG and the provided route tables in the VPC
- If the VPN connection is configured to use static routes, provisions a static route between the VPN connection and the CGW
waf
Terraform module to create and manage AWS WAFv2 rules.