# The Cloud Posse Reference Architecture
> The turnkey architecture for AWS, Datadog & GitHub Actions to get up and running quickly using the Atmos open source framework.
This file contains all documentation content in a single document following the llmstxt.org standard.
## Best Practices
>
> Physics is the law, everything else is a recommendation.
> Anyone can break laws created by people, but I have yet to see anyone break the laws of physics.
> — **Elon Musk**
>
---
## Developer Best Practices
import DocCardList from '@theme/DocCardList';
import Intro from '@site/src/components/Intro';
### Recommendations
---
## Editor Config Best Practices
import Intro from '@site/src/components/Intro';
The EditorConfig enables developers to define and maintain consistent coding styles between different editors and IDEs. It consists of a simple file format (`.editorconfig`) for defining coding styles such as tabs vs spaces. Most text editors support the format and adhere to defined styles. The config files are easily readable and they work nicely with version control systems.
## Example
Place this file in the root of your git repository.
### `.editorconfig`
```ini
# top-most EditorConfig file
root = true
# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
# Matches multiple files with brace expansion notation
# Set default charset
[*.{js,py}]
charset = utf-8
# 4 space indentation
[*.py]
indent_style = space
indent_size = 4
# Override for Makefile
[{Makefile,makefile,GNUmakefile}]
indent_style = tab
indent_size = 4
[Makefile.*]
indent_style = tab
indent_size = 4
# Indentation override for all JS under lib directory
[lib/**.js]
indent_style = space
indent_size = 2
# Matches the exact files either package.json or .travis.yml
[{package.json,.travis.yml}]
indent_style = space
indent_size = 2
# Shell scripts
[*.sh]
indent_style = tab
indent_size = 4
```
## Editor Plugins
Find all plugins here: http://editorconfig.org/#download
- [Vim](https://github.com/editorconfig/editorconfig-vim#readme)
- [Visual Studio](https://marketplace.visualstudio.com/items?itemName=EditorConfigTeam.EditorConfig)
## References
- http://editorconfig.org/
---
## Sign Your GitHub Commits with SSH
If you are already using SSH to authenticate to GitHub, it is very easy to sign all your commits as well, as long as you have already installed Git 2.34.0 or later. (Note, there may be problems with OpenSSH 8.7. Use an earlier or later version. I have this working with OpenSSH 8.1p1.)
### Configure git to sign all your commits with an SSH key
```bash
git config --global gpg.format ssh
git config --global commit.gpgsign true
git config --global tag.gpgsign true
```
### Configure git with the public key to use when signing
Set `KEY_FILE` to the file containing your SSH public key
```bash
KEY_FILE=~/.ssh/id_ed25519.pub
git config --global user.signingKey "$(head -1 "$KEY_FILE")"
```
Add your SSH public key to GitHub as a signing key, much the same way you added it as an authentication key, but choose "Signing Key" instead of "Authentication Key" under "Key type", even if you already have it uploaded as an authentication key. Detailed instructions are available [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account#adding-a-new-ssh-key-to-your-account).
We suggest using the same key you use to authenticate with, so that signing is the same as pulling and pushing, but you can use a different key if you want to be prompted for a password with every commit.
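To also verify signatures locally (e.g. with `git log --show-signature`), git needs a list of keys it trusts. This is a sketch; the `allowed_signers` path and email address are placeholders, not requirements:

```bash
# Tell git which file lists the SSH keys you trust for verification
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers

# Record your own key as a trusted signer
echo "you@example.com $(cat ~/.ssh/id_ed25519.pub)" >> ~/.ssh/allowed_signers

# Show signature verification status for the latest commit
git log --show-signature -1
```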
---
## Makefile Best Practices
import Intro from '@site/src/components/Intro';
GNU Makefiles are a convenient way for last-mile automation across multiple tool sets. We used to rely more heavily on Makefiles, but have since transitioned our usage predominantly into Atmos itself. That said, here is a collection of some of the best practices we’ve amassed over the years from extensively leveraging Makefiles.
## Avoid using Evals
The use of `$(eval ...)` leads to very confusing execution paths, due to the way `make` evaluates a target. When `make` executes a target, it first processes all `$(....)` interpolations and renders the template. Only then does it execute each command in the target, line by line.
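As a contrived sketch of the confusion (target and variable names are hypothetical): the author below intends the `$(eval ...)` to run only when `VERSION` is empty, but `make` processes it while expanding the recipe, before the shell ever evaluates the `if`, so it runs unconditionally.

```
release:
	@if [ -z "$(VERSION)" ]; then \
		$(eval VERSION := 0.1.0) \
		echo "Defaulted VERSION to $(VERSION)"; \
	fi
```

Moving this logic into a shell script, or setting `VERSION ?= 0.1.0` at the top of the `Makefile`, avoids mixing make-time and shell-time evaluation entirely.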
## Namespace targets
Over time, the number of targets in a `Makefile` will grow. We recommend namespacing all targets.
For example:
```
docker/build:
	docker build -t example/test .
```
## Use `/` as a target namespace delimiter
When naming targets, we recommend using `/` as the delimiter rather than `:` or `-`. Furthermore, we recommend sticking all targets within a namespace into a separate file, e.g. `Makefile.docker` for all targets that begin with `docker/`.
For example, stick this in `Makefile.docker`
```
docker/build:
	docker build -t example/test .
```
## Avoid using `:` in target names
While it's possible to use `:` as the delimiter in target names, there is a big gotcha: it breaks target dependencies.
For example:
```
docker\:deps:
	docker pull example/base-image

docker\:build: docker:deps
	docker build -t example/test .
```
In this example, `make` will silently ignore calling the target dependency of `docker:deps`. Escaping the target dependency (e.g. `docker\:deps`) has no effect.
## Use `include`
Avoid sticking every target in the same `Makefile` for the same reason we don't stick all code in the same source file. We typically recommend adding something like this to the top of our `Makefile`:
```
-include tasks/Makefile.*
```
:::info
The leading `-` tells `make` not to error if no files match (e.g. when the `tasks/` folder is empty).
:::
## Define sane defaults for environment variables
No one likes to pass 20 arguments to `make`. Set sane defaults for all variables using the `?=` operator.
For example:
```
DOCKER_TAG ?= latest
```
## Pass Environment Variables like Function Arguments
The nice thing about `make` is that it automatically exports all arguments in `key=value` notation as environment variables. This lets us call `make` targets like functions.
e.g.
```
make docker/build DOCKER_TAG=dev
```
## Write small targets
Make is an excellent language for gluing together various tools in your toolchain. It's an easy trap to stick an entire `bash` script inside of a target. From experience, these targets become error prone and difficult to maintain for anyone but a seasoned `make` programmer.
Instead, stick complex logic inside of shell scripts and call those shell scripts from a target.
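For example (the script path is hypothetical):

```
rollback:
	@scripts/rollback.sh "$(ENVIRONMENT)"
```

The target stays a thin entry point, while `scripts/rollback.sh` can be linted, tested, and edited like any other shell script.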
## Use target dependencies
A target can have dependencies that are called automatically prior to executing the target. If any one of the dependencies fails, execution aborts and the target will not be called.
For example:
```
deps:
	@which docker

build: deps
	@docker build -t example/test .
```
## Use standard target names in root `Makefile`
The entry-level `Makefile` should define these standard targets across all projects. This makes it very easy for anyone to get started who is familiar with `make`.
* `deps`
* `build`
* `install`
* `default`
* `all`
*IMPORTANT:* All leading whitespace in recipe lines must be a literal tab character, not spaces.
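A minimal sketch of a root `Makefile` wiring these together (the docker commands are placeholders):

```
default: all

deps:
	@which docker

build: deps
	@docker build -t example/test .

install: build
	@docker push example/test

all: install
```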
## Help Target
Our standard `help` target. This will automatically generate well-formatted output for any target that has a `##` comment preceding it.

Simply add this code snippet to your `Makefile` and you'll get this functionality.
```
## This help screen
help:
	@printf "Available targets:\n\n"
	@awk '/^[a-zA-Z\-_0-9%:\\\/]+/ { \
		helpMessage = match(lastLine, /^## (.*)/); \
		if (helpMessage) { \
			helpCommand = $$1; \
			helpMessage = substr(lastLine, RSTART + 3, RLENGTH); \
			gsub("\\\\", "", helpCommand); \
			gsub(":+$$", "", helpCommand); \
			printf "  \x1b[32;01m%-35s\x1b[0m %s\n", helpCommand, helpMessage; \
		} \
	} \
	{ lastLine = $$0 }' $(MAKEFILE_LIST) | sort -u
	@printf "\n"
```
## Default Target
Add this to the top of your `Makefile` to automatically call `help` when no target is passed.
```
default: help
```
---
## Markdown Best Practices
import Intro from '@site/src/components/Intro';
Most of our documentation is provided in Markdown format. Here are some of the conventions and best practices we follow when writing Markdown. Please note that we use the term Markdown loosely to refer to GitHub-flavored Markdown, and we also use quite a bit of MDX, which is what all of our documentation in Docusaurus uses.
## Code Blocks
Use code blocks for anything more than 1 line. Use `code` for inline code, filenames, commands, etc.
### Code Block
~~~~markdown
```
# This is a code block
```
~~~~
### Table of Options
Use tables to communicate lists of options.
Here's an example:
##### Table of Options
```markdown
| Name | Default | Description | Required |
|:-----------|:-------:|:-------------------------------------------|:--------:|
| namespace | | Namespace (e.g. `cp` or `cloudposse`) | Yes |
| stage | | Stage (e.g. `prod`, `dev`, `staging`) | Yes |
| name | | Name (e.g. `bastion` or `db`) | Yes |
| attributes | [] | Additional attributes (e.g. `policy`) | No |
| tags | {} | Additional tags (e.g. `map("Foo","XYZ")`) | No |
```
* `:--------:` should be used for “Default” and “Required” values
* `:---------` should be used for all other columns
* Use `code` formatting for all values
Which will render to a nicely formatted table.
## Feature List Formatting
Use this format to describe features and benefits.
### Feature List Example
```markdown
1. **Feature 1** - Explanation of benefits
2. **Feature 2** - Explanation of benefits
```
## Use Block Quotes
Reference copyrighted text, quotes, and other unoriginal copy using `>`
### Block Quote Example
```markdown
> Amazon Simple Storage Service (Amazon S3) makes it simple and practical to collect, store, and analyze data - regardless of format – all at massive scale.
```
---
## Password Management
We strongly advise all companies to use "1Password for Teams" as their password management solution.
## Features
- Shared MFA - useful for root accounts like AWS
- MFA Integration with Duo
- Groups
- Slack Integration
- Cloud Storage
- Cross-platform support (OSX, Windows, Linux, & Web)
## Alternatives
- LastPass
---
## Semantic Versioning
We practice [Semantic Versioning](https://semver.org/) for all projects (e.g. GitHub Tags/Releases, Helm Charts, Terraform Modules, Docker Images). Using this versioning standard helps to reduce the entropy related to [Dependency Hell](https://en.wikipedia.org/wiki/Dependency_hell).
Image credit: [Gopher Academy](https://blog.gopheracademy.com/advent-2015/semver/)
## Semantics
Generally, all of our versions follow this convention: `X.Y.Z` (e.g. `1.2.3`). Sometimes, we'll use this format: `X.Y.Z-branch` when we need to disambiguate between versions existing in multiple branches.
### Major Releases
These are releases when `X` changes. These releases will typically have major changes in the interface. These releases may not be backward-compatible.
### Minor Releases
These are releases when `Y` changes. These releases bundle new features, but the interface should be largely the same. Minor releases should be backward-compatible.
### Patch Releases
These are releases when `Z` changes. These releases are typically bug fixes which do not introduce new features. These releases are backward-compatible.
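As a sketch of the bump rules above (an illustrative helper, not part of our tooling):

```shell
# Bump an X.Y.Z version according to the type of change
bump() {
  ver=$1
  major=${ver%%.*}
  rest=${ver#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  case $2 in
    major) echo "$((major + 1)).0.0" ;;               # breaking interface change
    minor) echo "${major}.$((minor + 1)).0" ;;        # backward-compatible feature
    patch) echo "${major}.${minor}.$((patch + 1))" ;; # bug fix
  esac
}

bump 1.2.3 major   # -> 2.0.0
bump 1.2.3 minor   # -> 1.3.0
bump 1.2.3 patch   # -> 1.2.4
```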
We use GitHub tags & releases for all versioning. All docker images follow the same convention.
## Versioning
### 0.X.Y
We always start projects off at `0.1.0`. This is our first release of any project. While we try to keep our interfaces stable, as long as `X=0`, it indicates that our code does not yet have a stable API and may vary radically between minor releases.
### 1.X.Y+
As soon as our code reaches `1.X.Y`, the interface should be relatively stable - that is not changing much between minor releases.
## Implementation
Managing semantic versions should be automated just like everything else in our infrastructure. The [`build-harness`](/learn/toolchain/#build-harness) is used by our CI/CD process to automatically generate versions based on git history.
---
## Docker Best Practices
import Intro from '@site/src/components/Intro';
Docker best practices that we follow are listed here. Note that this is not an exhaustive list, but rather some of the ones that have stood out for us as practical ways of leveraging Docker together with the Cloud Posse reference architecture.
## Inheritance
Inheritance is when you use `FROM some-image:1.2.3` (vs `FROM scratch`) in a `Dockerfile`. We recommend leveraging lean base images (e.g. `alpine` or `busybox`).
Try to leverage the same base image in as many of your images as possible for faster `docker pulls`.
:::info
- https://docs.docker.com/engine/reference/builder/#from
:::
## Multi-stage Builds
There are two ways to leverage multi-stage builds:
1. *Build-time Environments* The most common application of multi-stage builds is for using a build-time environment for compiling apps, and then a minimal image (E.g. `alpine` or `scratch`) for distributing the resultant artifacts (e.g. statically-linked `go` binaries).
2. *Multiple-Inheritance* We like to think of "multi-stage builds" as a mechanism for "multiple inheritance" as it relates to docker images. While not technically the same thing, using multi-stage images makes it possible to `COPY --from=other-image` to keep things very DRY.
:::info
- https://docs.docker.com/develop/develop-images/multistage-build/
- https://blog.alexellis.io/mutli-stage-docker-builds/
:::
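A sketch of the build-time environment pattern for a statically-linked Go binary (image tags and paths are illustrative):

```dockerfile
# Build stage: full Go toolchain
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY . .
# Statically link so the binary can run on an empty filesystem
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: ship only the artifact
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```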
## Use Scratch Base Image
One often overlooked, ultimately lean base-image is the `scratch` image. This is an empty filesystem which allows one to copy/distribute the minimal set of artifacts. For languages that can compile statically linked binaries, using the `scratch` base image (e.g. `FROM scratch`) is the most secure way as there will be no other exploitable packages bundled in the image.
We use this pattern for our [`terraform-root-modules`](https://github.com/cloudposse/terraform-root-modules) distribution of terraform reference architectures.
## Configure Cache Storage Backends
When using BuildKit, you should configure a [cache storage backend](https://docs.docker.com/build/cache/backends/) that is suitable for your build environment. Layer caching significantly speeds up builds by reusing layers from previous builds, and is enabled by default as BuildKit has a dedicated local cache. However, in a CI/CD build environment such as GitHub Actions, an external cache storage backend is essential as there is little to no persistence between builds.
Fortunately, Cloud Posse's [cloudposse/github-action-docker-build-push](https://github.com/cloudposse/github-action-docker-build-push) action uses `gha` (the [GitHub Actions Cache](https://docs.github.com/en/rest/actions/cache)) by default. Thus, even without any additional configuration, the action will automatically cache layers between builds.
When using self-hosted GitHub Actions Runners in an AWS environment, however, we recommend using [ECR as a remote cache storage backend](https://aws.amazon.com/blogs/containers/announcing-remote-cache-support-in-amazon-ecr-for-buildkit-clients/). Using ECR as the remote cache backend—especially in conjunction with a [VPC endpoint for ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html)—results in reduced NAT Gateway costs and faster layered cache imports when compared to the GitHub Actions Cache.
The following example demonstrates how to configure the [cloudposse/github-action-docker-build-push](https://github.com/cloudposse/github-action-docker-build-push) action to use ECR as the remote cache storage backend:
```diff
  - name: Build
    id: build
    uses: cloudposse/github-action-docker-build-push@main
    with:
      registry: registry.hub.docker.com
      organization: "${{ github.event.repository.owner.login }}"
      repository: "${{ github.event.repository.name }}"
+     cache-from: "type=registry,ref=registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:cache"
+     cache-to: "mode=max,image-manifest=true,oci-mediatypes=true,type=registry,ref=registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:cache"
```
For more information about the `cache-from` and `cache-to` options, please refer to the [docker buildx documentation](https://docs.docker.com/reference/cli/docker/buildx/build/#options).
---
## GitHub Feature Branches
import Intro from '@site/src/components/Intro'
The Feature Branch Workflow is a requirement for CI/CD. It's a process by which all feature development takes place in a dedicated branch instead of the `master` branch. This makes it easy for multiple developers to collaborate on a particular feature while at the same time ensuring that the master branch remains stable.
## Problem
When you're working on a project, there will be a bunch of different features or ideas in progress at any given time, not all of which are ready for prime time. Furthermore, as business priorities change, you might need to prioritize certain features and put others on the back burner.
At the same time, business requirements mandate that you have a stable version that can be deployed at any given time. We know code can never be entirely bug-free. Furthermore, once deployed there can be unintended consequences. Other times, managers simply change their mind and decide that a certain feature was premature or unnecessary. To mitigate the impact of these events, we need the ability to rollback a feature or cut-bait.
**TL;DR:** If everyone works on the same branch (such as `master`), the commit history becomes polluted, making it all but impossible to tell which commits belong to which features, and rollbacks become impractical.
## Solution
To solve this problem, the standard workflow called _branching_ should be used religiously. Any time a new feature is developed it must be worked on in a separate branch. When you create a branch, you're creating an environment where you can freely test out new ideas without impacting others because changes made on a branch don't affect the `master` branch (or any other one).
Furthermore, no two developers should ever commit to or work on the same branch at the same time (unless they have permission from the branch stakeholder). Instead, they should create a new branch.
The next thing that needs to happen is that the `master` branch is treated as the Holy Grail. Every effort is made to ensure it's stable and can be deployed to production at any time.
Once a feature is considered ready, the developer submits a Pull Request (or PR) and assigns it to a Subject Matter Expert (SME) or peer for review.
On the surface, this is what a well-formatted Pull Request looks like: 
A _Pull Request_ allows many things to happen:
- **Title**: A "human readable" title that represents the feature! 
- **Description**: A long description that details **_What_** was changed, **_Why_** it was deemed necessary, and any other **_References_** that might be useful (E.g. Jira ticket)
- **Comments**: let anyone provide arbitrary feedback viewable by everyone.
- **Diffs**: show what changed between this feature and the current master branch
- **Formal Code Review Process:** let multiple people contribute to the code review process by submitting comments on a line-by-line basis. Having these code reviews formally documented serves as an excellent teaching tool. Over time, the reviews become faster and faster as developers learn what is expected. 
- **Merging**: Once the PR is approved, the developer can squash and merge their code into the master branch. Squashing allows the master branch to have a very clean commit history where every commit corresponds to a PR. 
- **Clean Commit History**: means that every change to the master branch is documented and justified. No one is sneaking in changes. 
- **History of Features** and when they were added 
- **Reverting**: If a feature needs to be removed, with the click of a single button it can be removed from the `master` branch 
## Technical Details
### Create a Branch
Whenever you begin work on a new feature or bugfix, it's important that you create a new branch. Not only is it proper git workflow, but it also keeps your changes organized and separated from the master branch so that you can easily submit and manage multiple pull requests for every task you complete.
To create a new branch and start working on it:
```shell
# Checkout the master branch - you want your new branch to come from master
git checkout master
# Pull down the latest changes
git pull origin master
# Create a new branch, with a descriptive name (e.g. implement-xyz-widget)
git checkout -b newfeature
```
Now, go to town hacking away. When you're ready, push the changes up to the origin.
```shell
git push origin newfeature
```
Now check out how to create [Pull Requests](/best-practices/github/github-pull-requests)!
---
## GitHub Pull Requests
## Submitting a Pull Request
Prior to submitting your pull request, you might want to do a few things to clean up your branch and make it as simple as possible for the original repo's maintainer to test, accept, and merge your work. If any commits have been made to the upstream master branch, you should rebase your development branch so that merging it will be a simple fast-forward that won't require any conflict resolution work.
```
# Fetch the latest master and rebase your development branch on top of it
git pull origin master --rebase
```
Follow the prompts to correct any code conflicts. Any file that is conflicted needs to be manually reviewed. After you fix the problems run:
```
git add filename
git rebase --continue
```
Once that is happy, push the rebased changes back to the origin.
```
git push origin newfeature -f
```
Then follow these instructions once you're ready: https://help.github.com/articles/creating-a-pull-request/
## Pull Request Template
Use the following markdown template to describe the Pull Request.
```
## what
* ...high-level explanation of what this PR accomplishes...
## why
* ...business justifications for making the changes...
## references
* ...related pull requests, issues, documents, or research...
```
**Pro Tip:** Use a `.github/pull_request_template.md` file to automatically populate this template when creating new Pull Requests.
:::info
https://help.github.com/articles/creating-a-pull-request-template-for-your-repository/
:::
---
## GitHub Best Practices
## Use `.gitignore`
Use a `.gitignore` file in the root of every repo to exclude files that should never be committed.
Here's an example of the [`.gitignore`](https://github.com/cloudposse/docs/blob/master/.gitignore) from our documentation repository.
```txt title=".gitignore example"
.DS_Store
.envrc
.env
deploy.toml
deploy.yaml
test
test.toml
test.yaml
.htmltest.*.yaml
node_modules
.build-harness
build-harness/
public/*
algolia/*
tmp/*
.gitkeep
*.swp
.idea
*.iml
package-lock.json
static/components/*
static/styleguide/*
themes/cloudposse/static/css/*
themes/cloudposse/static/js/*
static/webfonts/*
static/css/*
static/js/*
```
---
## Terraform Best Practices
These are the *opinionated* best-practices we follow at Cloud Posse. They are inspired by years of experience writing terraform
and borrow on the many other helpful resources like those by [HashiCorp](https://www.terraform.io/docs/cloud/guides/recommended-practices/index.html).
See our general [Best Practices](/best-practices/) which also apply to Terraform.
## Variables
### Use upstream module or provider variable names where applicable
When writing a module that accepts `variable` inputs, make sure to use the same names as the upstream to avoid confusion and ambiguity.
### Use all lower-case with underscores as separators
Avoid introducing any other syntaxes commonly found in other languages such as CamelCase or pascalCase. For consistency we want all variables
to look uniform. This is also in line with the [HashiCorp naming conventions](https://www.terraform.io/docs/extend/best-practices/naming.html).
### Use positive variable names to avoid double negatives
All `variable` inputs that enable/disable a setting should be formatted `...._enabled` (e.g. `encryption_enabled`). It is acceptable for
default values to be either `false` or `true`.
### Use description field for all inputs
All `variable` inputs need a `description` field. When the field is provided by an upstream provider (e.g. `terraform-aws-provider`), use the same wording as the upstream docs.
### Use sane defaults where applicable
Modules should be as turnkey as possible. The `default` value should ensure the most secure configuration (E.g. with encryption enabled).
### Use `nullable = false` where appropriate
When passing an argument to a resource, passing `null` means to use the default
value. Prior to Terraform version 1.1.0, passing `null` to a module input
set that value to `null` rather than to the default value.
Starting with Terraform version 1.1.0, variables can be declared as
`nullable = false` which:
1. Prevents the variable from being set to `null`.
2. Causes the variable to be set to the default value if `null` is passed in.
You should always use `nullable = false` for all variables which should never be set to `null`. This is particularly important for lists, maps, and objects, which, if not required, should default to empty values (i.e. `{}` or `[]`) rather than `null`. It can also be useful to set strings to default to `""` rather than `null` and set `nullable = false`. This will simplify the code since it can count on the variable having a non-null value.
The default `nullable = true` never needs to be explicitly set. Leave
variables with the default `nullable = true` if a null value is acceptable.
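For example (an illustrative variable), declaring an empty-default list with `nullable = false` guarantees module code never sees `null`:

```hcl
variable "attributes" {
  type        = list(string)
  description = "Additional attributes to add to the ID"
  default     = []
  nullable    = false # a null input is converted to the default, []
}
```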
### Use feature flags, list, or map inputs for optional functionality
All Cloud Posse modules should respect the [null-label](https://github.com/cloudposse/terraform-null-label)
`enabled` feature flag, and when `enabled` is `false`, create no resources
and generate null outputs (or, in the case of output lists, maps, and objects,
empty values may be acceptable to avoid having other modules consuming the
outputs fail due to having a null rather than empty value).
Optional functionality should be toggled in one of two ways:
1. Use of a feature flag. Specifically, an input variable of type
`bool` with a name ending in `_enabled`. Use this mechanism if the
option requires no further configuration, e.g. `iam_role_enabled` or
`s3_bucket_enabled`. Feature flags should always be `nullable = false`,
but the default value can be `true` or `false` as desired.
2. If an optional feature requires further configuration, use a `list` or `map`
input variable, with an empty input disabling the option and non-empty
input providing configuration. In this case, only use a separate feature
flag if the list or map input may still cause problems due to relying on
computed values during the plan phase. See [Count vs. For Each](/learn/component-development/terraform-in-depth/terraform-count-vs-for-each)
and [Terraform Errors When Planning](/learn/component-development/terraform-in-depth/terraform-unknown-at-plan-time)
for more information.
3. It is never acceptable for an optional feature of the _module_ to be
   toggled by the value of a string or number input variable, due to the
   issues explained in [Terraform Errors When Planning](/learn/component-development/terraform-in-depth/terraform-unknown-at-plan-time).
   However, an optional feature of a _resource_ may be toggled by such an
   input if that is the behavior of the resource and the input has the same
   name as the resource argument.
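A sketch of both mechanisms (resource and variable names are illustrative):

```hcl
# 1. Feature flag for an option needing no further configuration
variable "iam_role_enabled" {
  type        = bool
  description = "Set true to create an IAM role for the service"
  default     = false
  nullable    = false
}

# 2. List input where an empty value disables the feature
variable "log_destinations" {
  type        = list(string)
  description = "Destinations to ship logs to; an empty list disables log shipping"
  default     = []
  nullable    = false
}

resource "aws_iam_role" "default" {
  count = var.iam_role_enabled ? 1 : 0
  # ...
}
```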
### Use objects with optional fields for complex inputs
When a module requires a complex input, use an object with optional fields.
This provides documentation and plan-time validation while avoiding type
conversion errors, and allows for future expansion without breaking changes.
Make as many fields as possible optional, provide defaults at every level of
nesting, and use `nullable = false` if possible.
:::caution Extra (or Misspelled) Fields in Object Inputs Will Be Silently Ignored
If you use an object with defaults as an input, Terraform will not give any
indication if the user provides extra fields in the input object. This is
particularly a problem if they misspelled an optional field name, because
the misspelled field will be silently ignored, and the default value the
user intended to override will silently be used. This is
[a limitation of Terraform](https://github.com/hashicorp/terraform/issues/29204#issuecomment-1989579801).
Furthermore, there is no way to add any checks for this situation, because
the input will have already been transformed (unexpected fields removed) by
the time any validation code runs. This makes using an object a trade-off
versus using separate inputs, which do not have this problem, or `type = any`
which allows you to write validation code to catch this problem and
additional code to supply defaults for missing fields.
:::
Reserve `type = any` for exceptional cases where the input is highly
variable and/or complex, and the module is designed to handle it. For
example, the configuration of a [Datadog synthetic test](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/reference/synthetics_test)
is both highly complex and [the Cloud Posse module](https://github.com/cloudposse/terraform-datadog-platform/tree/main/modules/synthetics)
accepts both an object derived from the `synthetics_test` resource schema or
an object derived from the JSON output of the Datadog API. In this rare case,
attempting to maintain a type definition would not only be overly complex,
it would slow the adoption of future additions to the interface, and so
`type = any` is appropriate.
### Prefer a single object over multiple simple inputs for related configuration
When reviewing Cloud Posse modules as examples, you may notice that they
often use a large number of input variables of simple types. This is because
in the early development of Terraform, there was no good way to define
complex objects with defaults. However, now that Terraform supports complex
objects with field-level defaults, we recommend using a single object input
variable with such defaults to group related configuration, taking into consideration
the trade-offs listed in the [above caution](#use-objects-with-optional-fields-for-complex-inputs).
This makes the interface easier to understand and use.
For example, prefer:
```hcl
variable "eip_timeouts" {
  type = object({
    create = optional(string)
    update = optional(string)
    delete = optional(string, "30m")
  })
  default  = {}
  nullable = false
}
```
rather than:
```hcl
variable "eip_create_timeout" {
  type    = string
  default = null
}

variable "eip_update_timeout" {
  type    = string
  default = null
}

variable "eip_delete_timeout" {
  type    = string
  default = "30m"
}
```
However, using an object with defaults versus multiple simple inputs is not
without trade-offs, as explained in the [above caution](#use-objects-with-optional-fields-for-complex-inputs).
There are a few ways to mitigate this problem besides using separate inputs:
- If all the defaults are null or empty, you can use a `map(string)` input
variable and use the `keys` function to check for unexpected fields. This
catches errors, but has the drawback that it does not provide
documentation of what fields are expected.
- You can use `type = any` for inputs, but then you have to write the extra
code to validate the input and supply defaults for missing fields. You
should also document the expected fields in the input description.
- If all you are worried about is misspelled field names, you can make the
correctly spelled field names required, ensuring they are supplied.
Alternatively, if the misspelling is predictable, such as you have a field
named `minsize` but people are likely to try to supply `min_size`, you can
make the misspelled field name optional with a sentinel value and then
check for that value in the validation code.
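A sketch of the `map(string)` approach (names illustrative); `setsubtract` yields any supplied keys outside the allowed set:

```hcl
variable "timeouts" {
  type        = map(string)
  description = "Timeouts by operation; allowed keys: create, update, delete"
  default     = {}
  nullable    = false

  validation {
    condition     = length(setsubtract(keys(var.timeouts), ["create", "update", "delete"])) == 0
    error_message = "Unexpected key(s) in timeouts; allowed keys are create, update, delete."
  }
}
```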
### Use custom validators to enforce custom constraints
Use the `validation` block to enforce custom constraints on input variables.
A custom constraint is one that, if violated, would **_not_** otherwise
cause an error, but would cause the module to behave in an unexpected way.
For example, if the module takes an optional IPv6 CIDR block, you might
receive that in a variable of `list(string)` for reasons explained
[here](#use-feature-flags-list-or-map-inputs-for-optional-functionality).
Use a custom validator to ensure that the list has at most one element, because
if it has more than one, the module will ignore the extra elements, while a
reasonable person might expect them to be used in some way. At the same time, it is not necessary to use a custom validator to enforce
that the input is a valid IPv6 CIDR block, because Terraform will already
do that for you.
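A minimal sketch of such a validator, assuming the optional IPv6 CIDR block arrives as a `list(string)` (the variable name is illustrative):
```hcl
variable "ipv6_cidr_blocks" {
  type        = list(string)
  default     = []
  description = "Optional IPv6 CIDR block. Supply at most one element."

  validation {
    # More than one element would be silently ignored, so fail loudly instead
    condition     = length(var.ipv6_cidr_blocks) <= 1
    error_message = "Supply at most one IPv6 CIDR block; extra elements would be ignored."
  }
}
```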
This is perhaps better illustrated by an example of a pseudo-enumeration.
Terraform does not support real enumerations, so they are typically
implemented as strings with a limited number of acceptable values. For example,
say you have a resource that takes a `frequency` input that is a string that
must be either `DAILY` or `WEEKLY`. Even though the value of the string is very
restricted, you should not use a custom validator to enforce that, for two reasons:
1. Terraform (technically, the Terraform resource provider) will already
enforce that the string is one of those two values and produce an
informative error message if it is not.
2. If you use a custom validator to enforce that the string is one of those
two values, and a later version of the resource adds a new option to the
enumeration, such as `HOURLY`, the custom validator will prevent the
module from using the new value, even though the module would function
perfectly were it not for the validator. This adds work and delay to the
adoption of underlying enhancements to the resource, without providing
enough benefit to be worth the extra effort.
### Use variables for all secrets with no `default` value and mark them "sensitive"
All `variable` inputs for secrets must never define a `default` value. This ensures that `terraform` is able to validate user input.
The exception to this is if the secret is optional and will be generated for the user automatically when left `null` or `""` (empty).
Use `sensitive = true` to mark all secret variables as sensitive. This ensures that the value is not printed to the console.
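For example, a required secret input might look like this (the variable name is illustrative):
```hcl
variable "api_token" {
  type        = string
  description = "API token used to authenticate with the service. Required; no default is defined."
  sensitive   = true  # value is redacted from console and plan output
  nullable    = false # caller must supply a non-null value
}
```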
## Outputs
### Use description field for all outputs
All outputs must have a `description` set. The `description` should be based on (or adapted from) the upstream terraform provider where applicable.
Avoid simply repeating the variable name as the output `description`.
### Use well-formatted snake case output names
Avoid introducing any other syntaxes commonly found in other languages, such as CamelCase or pascalCase. For consistency, we want all outputs
to look uniform. It also makes code more consistent when using outputs together with terraform [`remote_state`](https://www.terraform.io/docs/providers/terraform/d/remote_state.html) to access those settings from across modules.
### Never output secrets
Secrets should never be outputs of modules. Rather, they should be written to secure storage such as AWS Secrets Manager, AWS SSM Parameter Store with KMS encryption, or S3 with KMS encryption at rest. Our preferred mechanism on AWS is using SSM Parameter Store. Values written to SSM are easily
retrieved by other terraform modules, or even on the command-line using tools like [chamber](https://github.com/segmentio/chamber) by Segment.io.
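For instance, another terraform module can read such a value back with the `aws_ssm_parameter` data source (the parameter path is illustrative):
```hcl
data "aws_ssm_parameter" "master_password" {
  # with_decryption defaults to true, so KMS-encrypted SecureString values are decrypted
  name = "/rds/master_password"
}
```
Note that reading the value into Terraform this way still places it in the state file, as the warning below explains.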
We are very strict about this in our components (a.k.a. root modules), the
top-most modules, because these sensitive outputs are easily leaked in CI/CD
pipelines (see [`tfmask`](https://github.com/cloudposse/tfmask) for masking
secrets in output, only as a last resort). We are less strict about this in
modules that are typically nested inside of other modules.
Rather than outputting a secret, you may output plain text indicating where
the secret is stored, for example `RDS master password is in SSM parameter
/rds/master_password`. You may also want to have another output just for the
key for the secret in the secret store, so the key is available to other
programs which may be able to retrieve the value given the key.
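A sketch of both outputs, assuming a hypothetical `aws_ssm_parameter` resource named `master_password`:
```hcl
# Human-readable hint about where the secret is stored (not the secret itself)
output "master_password_location" {
  value       = "RDS master password is in SSM parameter ${aws_ssm_parameter.master_password.name}"
  description = "Where the RDS master password is stored"
}

# Just the key, so other programs can retrieve the value themselves
output "master_password_ssm_key" {
  value       = aws_ssm_parameter.master_password.name
  description = "SSM Parameter Store key under which the RDS master password is stored"
}
```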
:::warning
Regardless of whether the secret is output or not, the fact that a secret is
known to Terraform means that its value is stored in plaintext in the Terraform
state file. Storing values in SSM Parameter Store or other places does not
solve this problem. Even if you store the value in a secure place using some
method other than Terraform, if you read it into Terraform, it will be
stored in plaintext in the state file.
To keep the secret out of the state file, you must both store and retrieve
the secret outside of Terraform. This is a limitation of Terraform that [has
been discussed](https://github.com/hashicorp/terraform/issues/516)
practically since Terraform's inception, so a near-term solution is unlikely.
:::
### Use symmetrical names
We prefer to keep terraform outputs symmetrical, as much as possible, with the upstream resource or module, with the exception of prefixes. This reduces the amount of entropy in the code and possible ambiguity, while increasing consistency. For example, suppose the other IAM user outputs in an upstream module are prefixed with `user_`, and the upstream output for the secret key is named `secret_access_key`. Then the expected output name is `user_secret_access_key`: borrow the upstream's output name and apply the consistent prefix, rather than inventing a new name.

### Export all Attributes of a Resource
While it's important to explicitly output the most relevant attributes from a resource, there are cases where exposing the entire resource object is beneficial. By outputting the full resource, you provide downstream modules and users with greater flexibility and easier interoperability—especially when composing infrastructure across multiple modules.
This approach also future-proofs your module. As providers evolve and introduce new attributes, consumers can immediately access those attributes without requiring the module to be updated with new outputs.
```hcl
output "<resource>_<name>" {
  value       = <resource>.<name>
  description = "All attributes of [`<resource>.<name>`](link-to-aws-provider-resource-attribute-docs)"
}
```
For example, if the module contained the resource address `aws_instance.default`, then the output would be defined like this:
```hcl
output "aws_instance_default" {
  value       = aws_instance.default
  description = "All attributes of [`aws_instance.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#attribute-reference)"
}
```
## Language
### Use indented `HEREDOC` syntax
Using `<<-EOT` (as opposed to `<<EOT` without the leading hyphen) ensures that heredoc content can be indented to match the surrounding code, which keeps modules readable.
### Pin the Terraform core version with `>=`
Do not pin the Terraform core version to an exact release or a range such as `> 0.12.26` or `>= 0.13, < 0.15`. Always use `>=` and enforce Terraform versions in your environment by controlling which CLI you use.
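In practice, pinning with `>=` looks like the following in a `terraform` block (the minimum version shown is illustrative):
```hcl
terraform {
  # Floor only; the actual version used is controlled by which CLI you install
  required_version = ">= 1.3.0"
}
```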
### Use encrypted S3 bucket with versioning, encryption and strict IAM policies
We recommend not commingling state in the same bucket as other data. This could
cause the state to get overridden or compromised. Note, the state contains
cached values of all outputs. Consider isolating sensitive areas like
production configuration and audit trails (separate buckets, separate
organizations).
**Pro Tip:** Use the [`terraform-aws-tfstate-backend`](https://github.com/cloudposse/terraform-aws-tfstate-backend) module to easily provision buckets for each stage.
### Use Versioning on State Bucket
### Use Encryption at Rest on State Bucket
### Use `.gitignore` to exclude terraform state files, state directory backups and core dumps
```
.terraform
.terraform.tfstate.lock.info
*.tfstate
*.tfstate.backup
```
### Use `.dockerignore` to exclude terraform state files from builds
Example:
```
**/.terraform*
```
### Use a programmatically consistent naming convention
All resource names (e.g. things provisioned on AWS) must follow a consistent convention. The reason this is so important is
that modules are frequently composed inside of other modules. Enforcing consistency increases the likelihood that modules can
invoke other modules without colliding on resource names.
To enforce consistency, we require that all modules use the [`terraform-null-label`](https://github.com/cloudposse/terraform-null-label) module.
With this module, users have the ability to change the way resource names are generated such as by changing the order of parameters or the delimiter.
While the module is opinionated on the parameters, it's proved invaluable as a mechanism for generating consistent resource names.
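A sketch of the pattern, using illustrative values for the label parameters:
```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"   # e.g. an abbreviation of your organization
  stage     = "prod"
  name      = "app"
}

resource "aws_s3_bucket" "default" {
  bucket = module.label.id   # "eg-prod-app" with the default order and "-" delimiter
  tags   = module.label.tags # consistent tags generated from the same inputs
}
```
Because every module derives its names from the same label inputs, nesting one module inside another produces predictable, collision-free resource names.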
## Module Design
### Small Opinionated Modules
We believe that modules should do one thing very well. But in order to do that, it requires being opinionated on the design.
Simply wrapping terraform resources for the purpose of modularizing code is not that helpful. Implementing a specific use-case
of those resources is more helpful.
### Composable Modules
Write all modules to be easily composable into other modules. This is how we're able to achieve economies of scale and stop
re-inventing the same patterns over and over again.
## Module Usage
### Use Terraform registry format with exact version numbers
There are many ways to express a module's source. Our convention is to use Terraform registry syntax with an explicit version.
```
source = "cloudposse/label/null"
version = "0.25.0"
```
The reason to pin to an explicit version rather than a range like `>= 0.25.0` is that any update is capable of breaking something. Any changes to your infrastructure should be implemented and reviewed under your control, not applied blindly and automatically based on when you happened to deploy.
:::info
Prior to Terraform v0.13, our convention was to use the pure git url:
```hcl
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
```
Note that the `ref` always points explicitly to a tag pinned to a specific version. Dropping the `tags/` qualifier means it could be a branch or a tag; we prefer to be explicit.
:::
---
## Code of Conduct
import Intro from '@site/src/components/Intro';
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at hello@cloudposse.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/
---
## Need help? Join our community!
import DocCardList from '@theme/DocCardList'
import Intro from '@site/src/components/Intro'
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import Note from '@site/src/components/Note'
import PrimaryCTA from '@site/src/components/PrimaryCTA'
Cloud Posse has a great community of active users who are more than willing to help each other. So, join us!
## Get Started today
## Sign up for our SweetOps Slack
Join our free SweetOps Slack community, hosted by Cloud Posse with more than 9,500 members worldwide. While we primarily address questions in channels like `#atmos`, `#aws`, `#cloudposse`, `#geodesic`, `#github-actions`, and `#terraform`, feel free to engage in any channel.
Sign up here: [https://slack.sweetops.com](https://slack.sweetops.com).
Archives are searchable at [archive.sweetops.com](https://archive.sweetops.com).
## Join our GitHub Discussions
[Cloud Posse GitHub Discussions](https://github.com/orgs/cloudposse/discussions) is a great place to ask questions, share ideas, and get feedback from the community. We have a dedicated section for support and questions related to the reference architecture.
Please search through existing discussions before creating a new one. If you do not find an answer, please feel free to create a new discussion.
Essential Support guarantees priority responses, so you always have a direct line to Cloud Posse’s expertise. Learn more about our support offerings at [https://cloudposse.com/support](https://cloudposse.com/support).
## Attend Weekly Office Hours
Our free public "Office Hours" are held every Wednesday at 11:30am PT (2:30pm ET) via Zoom; [sign up here](https://cloudposse.com/office-hours/). Past recordings are available on our [YouTube channel](https://youtube.com/cloudposse).
These public calls are hosted weekly by Cloud Posse. They are a good way to keep up with the latest developments and trends in our DevOps community.
## Are you more into email? Try our Newsletter
Sign up for [Cloud Posse's Weekly Newsletter](https://newsletter.cloudposse.com) to get the latest news about things happening in our community and other news about building Open Source infrastructure—straight into your inbox.
## Found a Bug or Issue?
Please report it in [our issue tracker](https://github.com/cloudposse/docs/issues)
---
## Contact Us
import Intro from '@site/src/components/Intro';
We'd love to hear from you! Reach out using any of the options below in the medium most convenient for you.
Here's how to get in touch with us:
- **Email:** hello@cloudposse.com
- **Website:** cloudposse.com
- **GitHub:** github.com/cloudposse
- **Schedule Time:** cloudposse.com/meet
- **Slack**
- **Newsletter:** cloudposse.com/newsletter
- **LinkedIn:** linkedin.com/company/cloudposse
## Partnership Opportunities
Cloud Posse welcomes all partnership inquiries, including partnerships with other DevOps practitioners, freelancers and consultancies who want to leverage our methodologies with their customers.
Please drop us a line at [hello@cloudposse.com](mailto:hello@cloudposse.com).
---
## Terraform Automated Testing
import Intro from '@site/src/components/Intro';
import Step from '@site/src/components/Step';
import Steps from '@site/src/components/Steps';
import StepNumber from '@site/src/components/StepNumber';
Cloud Posse's Terraform modules use a comprehensive automated testing strategy that combines static code analysis and integration tests. Our testing approach ensures code quality through automated linting and formatting checks, while integration tests validate that modules work correctly in real-world scenarios. Tests can be run locally during development and are automatically triggered through our CI/CD pipeline.
All of our Terraform modules have automated tests. We have two sets of checks:
### Static Code Analysis
The first set of checks is executed through the feature-branch workflow, which can be found [here](https://github.com/cloudposse/github-actions-workflows-terraform-module/blob/main/.github/workflows/feature-branch.yml)
This workflow generates some documentation and performs basic sanity checks, including linting and formatting. These checks are lightweight and can be executed without requiring any special permissions. Consequently, they *are automatically run* on every commit.
Before committing and pushing your changes, you can and should run this set of checks locally by executing the following command on your host:
```
pre-commit run --all-files
```
Running these checks locally applies all the required changes that would otherwise block your PR.
### Integration Tests
The second set of checks consists of Terraform integration tests that validate the functionality and integration of the module. These tests are performed using the [`terratest`](https://github.com/gruntwork-io/terratest) library, specifically designed for infrastructure testing, and do more in-depth integration tests of module functionality.
Unlike the first set of checks, these integration tests are *executed only on request*, and only by authorized contributors. We use ChatOps to trigger this workflow.
## Philosophy of Terraform Integration Testing
At a minimum, we ensure that all of our modules cleanly `plan`, `apply`, and `destroy`. This catches 80% of the problems with only 20% of the effort. We also test that when the `enabled` input is set to `false`, no resources are created.
Ideally we would like to test that the resources are properly created, but often this is difficult to verify programmatically, in which case we settle for spot checking that the dynamic outputs match our expectations. At the same time, we do not want to waste effort retesting what has already been tested by HashiCorp and their providers. For example, we have our [`terraform-aws-s3-bucket`](https://github.com/cloudposse/terraform-aws-s3-bucket) module that creates an S3 bucket. We don't need to test that a bucket is created; we assume that would be caught by the upstream terraform tests. But we do want to [test that the bucket name](https://github.com/cloudposse/terraform-aws-s3-bucket/blob/master/test/src/examples_complete_test.go#L38) is what we expect it to be, since this is something under our control.
## Using ChatOps To Trigger Integration Tests
In addition to automatic triggers, tests can be run on demand via "ChatOps". (You will need to have at least `triage` level of access to a repo to use ChatOps commands.) Typically, tests are run by a Cloud Posse contributor or team member as part of a PR review.
Tests are initiated by posting GitHub comments on the PR. Currently supported commands are the following:
| Command | Description |
| ------------ | --------------------------------------------------- |
| `/terratest` | Run the `terratest` integration tests in `test/src` |
Terraform tests run against our [testing infrastructure](https://github.com/cloudposse/testing.cloudposse.co) that we host in an isolated account on AWS, strictly for the purposes of testing infrastructure.
ChatOps is powered by [GitHub Actions](https://github.com/features/actions) and the [slash-dispatch-command](https://github.com/peter-evans/slash-command-dispatch).
The terratest workflow is defined in the [`cloudposse/actions`](https://github.com/cloudposse/actions/blob/master/.github/workflows/terratest-command.yml) repository. The benefit of this is that we have one place to control the testing
workflow for all of our hundreds of terraform modules. The downside of dispatched workflows, however, is that the _workflows_ always run from the `main` branch.
## Manually triggering a shared workflow
Here's a list of workflows you might want to trigger manually should things go wrong on GitHub's side or with our configuration.
- `feature-branch` can be triggered anytime by labeling/unlabeling PR with any label.
- `release-branch` is equivalent to creating a GitHub release manually. We have created a complementary workflow, `release-published`, for this case: it will fill in the missing parts once you create a release manually. Note that in this case you are skipping the tests that normally run before a release.
- `scheduled` can be triggered anytime from the GitHub UI; it has a *workflow_dispatch* trigger for this purpose.
## Running Terraform Tests locally
We use [Atmos](https://atmos.tools) to streamline how Terraform tests are run. It centralizes configuration and wraps common test workflows with easy-to-use commands.
All tests are located in the `test/` folder.
Under the hood, tests are powered by Terratest together with our internal [Test Helpers](https://github.com/cloudposse/test-helpers) library, providing robust infrastructure validation.
Setup dependencies:
- Install Atmos ([installation guide](https://atmos.tools/install/))
- Install [Go 1.24 or newer](https://go.dev/doc/install)
- Install Terraform or OpenTofu
To run tests:
- Run all tests:
```sh
atmos test run
```
- Clean up test artifacts:
```sh
atmos test clean
```
- Explore additional test options:
```sh
atmos test --help
```
The configuration for test commands is centrally managed. To review what's being imported, see the [`atmos.yaml`](https://raw.githubusercontent.com/cloudposse/.github/refs/heads/main/.github/atmos/terraform-module.yaml) file.
Learn more about implementing [custom commands](https://atmos.tools/core-concepts/custom-commands/) with atmos.
## ChatOps Configuration
If you're a contributor who wants to initialize one of our terraform modules, this is the process. Note, if a repo has already been configured for ChatOps, there's no need to proceed with these steps.
To initialize one of our modules with ChatOps, run the following commands:
1. Install Atmos ([installation guide](https://atmos.tools/install/))
1. `git clone` the terraform module repository
1. `cd $repo` to enter the repository directory
1. `git add *` to add the changes
1. Add the build badge to the `README.yaml` under the `badges` section.
1. `atmos docs generate readme` to rebuild the `README.md` (remember, never edit the `README.md` manually since it's generated from the `README.yaml`)
1. Open up a Pull Request with the changes. Here is a [good example](https://github.com/cloudposse/atmos/pull/555).
1. Request a Code Review in the [`#pr-reviews`](https://slack.cloudposse.com) Slack channel (and *big* thanks for your contribution!)
---
## Code Review Guidelines
import Steps from '@site/src/components/Steps';
Here are some of our tips for conducting *Code Reviews* the SweetOps way. If you haven't already, become familiar with our [Best Practices](/best-practices) and [Terraform Best Practices](/best-practices/terraform).
1. Use the ["Suggest"](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/incorporating-feedback-in-your-pull-request) feature as much as possible. This makes it quick and easy for the contributor to accept or dismiss the recommendations.
1. Use proper markdown in suggestions (e.g. code blocks)
1. Always be polite and appreciative of the contributions!
1. Use emoticons to up-vote other comments (rather than `+1` comments)
1. Use ChatOps command `/terratest` to run integration tests
1. Recommend changes to better conform to our best-practices
1. Quote the comments you're replying to, to make your responses more clear
### Specifics for Terraform Modules
We use automated testing to enforce certain standards for our Terraform modules. Currently these are run via GitHub Actions, and you can look at the logs of failing tests by clicking the `Details` link in the PR status list. Here is a partial list of rules that are enforced:
- All modules referenced must be pinned to an exact, numbered version. Cannot be `master` or a range like `>= 0.9.0`
- All providers must have version pinning of the form `>=` (can be `>= x.x` or `>= x.x.x`). More restrictive pinning is not allowed.
- All modules that no longer support Terraform versions older than 0.12.26 must be upgraded to refer to providers using Terraform Registry format (explicit `source` field).
- All modules must have their `README` updated to the current standard. **Note:** `README.md` is generated by tooling from `README.yaml`. Anything you want to update in the `README` must be updated in `README.yaml` or it will simply be overwritten. Updating the `README` usually requires nothing more than regenerating it.
- All modules must comply exactly with Terraform formatting standards used by `terraform fmt`
We have tooling to help with some of this. Before opening a PR, but after making all your changes, run
```
make pr/auto-format
```
in the root directory of the repository. That will format your Terraform code and rebuild the README. (If you have done that and the tests still complain about a bad `README`, it is possible you have cached an old version of the builder Docker image. Try updating it with `make init && make builder/pull` and run `make pr/auto-format` again.)
---
## Component Testing
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import CodeBlock from '@theme/CodeBlock';
import CollapsibleText from '@site/src/components/CollapsibleText';
import PillBox from '@site/src/components/PillBox';
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import Step3VendorYaml from '@site/examples/component-testing/step-3/vendor.yaml';
import Step3VpcYaml from '@site/examples/component-testing/step-3/catalog-vpc.yaml';
import Step3Stack from '@site/examples/component-testing/step-3/stack.yaml';
import Step4UsecaseYaml from '@site/examples/component-testing/step-4/usecase.yaml';
import Step4Stack from '@site/examples/component-testing/step-4/stack.yaml';
import Step7UsecaseYaml from '@site/examples/component-testing/step-7/usecase.yaml';
import Step7Stack from '@site/examples/component-testing/step-7/stack.yaml';
import Intro from '@site/src/components/Intro';
This documentation will guide you through our comprehensive strategy for testing Terraform components, provide step-by-step instructions and practical examples to help you validate your component configurations effectively. Whether you're setting up initial tests, adding dependencies, or verifying output assertions, you'll find the resources you need to ensure robust and reliable deployments.
## Context
Our component testing strategy is a direct outcome of [our migration to a dedicated GitHub Organization for components](/components/#terraform-component-github-repository-has-moved). This separation allows each component to live in its own repository, enabling independent versioning and testing. It not only improves the reliability of each component but also empowers the community to contribute via pull requests confidently. With integrated testing for every PR, we can ensure high quality and build trust in each contribution.
For more information on building and maintaining components, please refer to our [Component Development Guide](/learn/component-development/), which provides detailed insights into best practices, design principles, and the overall process of component development.
## Prerequisites
1. Install Terraform / Tofu
- Ensure you have [Terraform](https://www.terraform.io/downloads.html) or [OpenTofu](https://opentofu.org/docs/intro/install/) installed on your machine.
1. Install Atmos
- [Atmos](https://atmos.tools/install/) is a tool for managing Terraform environments.
1. Install Golang
- Go is a programming language that you'll need to run the tests.
- Download and install Go from the [official Go website](https://golang.org/dl/).
- Make sure to set up your Go environment correctly by following the [Getting Started with Go](https://golang.org/doc/install/source) guide.
1. Authenticate on AWS
- Ensure you have the necessary AWS credentials configured on your machine. You can do this by setting up the AWS CLI and running `aws configure`, where you'll input your AWS Access Key, Secret Key, region, and output format.
- Refer to the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) for more details.
## Test Framework
The component testing framework assumes that each component's repository follows the convention where all Terraform source code
is stored in the `src` directory and everything related to tests is placed in the `test` directory.
Tests consist of two coupled parts: Atmos configuration fixtures and tests written in Go.
The repository structure should be similar to this one:
```console
component-root/
├── src/ # Component source directory
│ └── main.tf
└── test/ # Tests directory
├── fixtures/ # Atmos configurations
├── component_test.go # Tests
├── go.mod
└── go.sum
```
### Atmos configuration fixtures
Atmos configuration fixtures provide the minimal settings needed to deploy the component and its dependencies to a test account during a test run.
The differences from a regular Atmos configuration are:
1. All components are deployed to one stack, `default-test`, in the `us-east-2` region.
2. A single AWS account is used for all test resources. If the component assumes cross-region or cross-account interaction, the configuration still deploys everything to the same actual AWS account.
3. The `account-map` component is mocked to skip role assumption and always use the current AWS credentials provided via environment variables.
4. Terraform state files are stored in a local directory at a path provided by the test framework via the `COMPONENT_HELPER_STATE_DIR` environment variable.
This configuration is common to all components and can be copied from the [template repo](https://github.com/cloudposse-terraform-components/template/tree/main/test).
The fixtures directory structure looks like this:
```console
fixtures/
├── stacks/
| ├── catalog/
| | ├── usecase/
| | | ├── basic.yaml
| | | └── disabled.yaml
| | └── account-map.yaml
│ └── orgs/default/test/
| ├── _defaults.yaml
| └── tests.yaml
├── atmos.yaml
└── vendor.yaml
```
For most components, avoid changing these files:
1. `atmos.yaml` - Shared atmos config common to all test cases
2. `stacks/catalog/account-map.yaml` - Mock `account-map` configuration that backs any environment/stack/tenant with the single AWS test account
3. `stacks/orgs/default/test/_defaults.yaml` - Configures the Terraform state backend to a local directory and defines shared variables for `default-test`
These files and directories contain custom configuration specific to the component under test:
1. `vendor.yaml` - Vendor configuration for all component dependencies
2. `stacks/catalog/` - Stores the configuration files for all dependencies
3. `stacks/catalog/usecase/` - Stores the configurations for the tested component's use cases
4. `stacks/catalog/usecase/basic.yaml` - Predefined file for the basic use-case configuration of the tested component
5. `stacks/catalog/usecase/disabled.yaml` - Predefined file for the `disabled` use case (when the variable `enabled: false` is set)
6. `stacks/orgs/default/test/tests.yaml` - Imports all dependency and use-case configurations so they are deployed to the `default-test` stack
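For illustration, a minimal use-case file such as `stacks/catalog/usecase/basic.yaml` for a hypothetical `example` component might look like this (the component name and variables are placeholders, not part of the template):

```yaml
components:
  terraform:
    example/basic:
      metadata:
        # Points at the vendored Terraform component in components/terraform/
        component: example
      vars:
        enabled: true
        name: example
```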
### Tests (Golang)
Component tests are written in Go, the de facto standard language for cloud infrastructure engineering.
Under the hood, the tests use several libraries with helper functions:
1. `github.com/cloudposse/test-helpers/pkg/atmos/component-helper` - The component testing framework
2. `github.com/cloudposse/test-helpers/pkg/atmos` - Atmos API
3. `github.com/cloudposse/test-helpers/pkg/aws` - Test helpers for interacting with AWS
4. `github.com/gruntwork-io/terratest/modules/aws` - Test helpers provided by Gruntwork
5. `github.com/aws/aws-sdk-go-v2` - AWS API
You can add any other dependency library by running `go get {library name}`.
The test framework extends `github.com/stretchr/testify/suite` to organize test suites.
A typical test file follows this example:
```go title="test/component_test.go"
package test

import (
	"strings"
	"testing"

	"github.com/cloudposse/test-helpers/pkg/atmos"
	helper "github.com/cloudposse/test-helpers/pkg/atmos/component-helper"
	"github.com/stretchr/testify/assert"
)

type ComponentSuite struct {
	helper.TestSuite
}

// Functions with the `Test` prefix are entrypoints for `go test`
func TestRunSuite(t *testing.T) {
	// Define the test suite instance
	suite := new(ComponentSuite)
	// Add a dependency to the dependencies queue
	suite.AddDependency(t, "vpc", "default-test", nil)
	// Run the test suite
	helper.Run(t, suite)
}

// Test suite methods prefixed with `Test` are tests

// Test the basic use case
func (s *ComponentSuite) TestBasic() {
	const component = "example/basic"
	const stack = "default-test"
	const awsRegion = "us-east-2"

	// Destroy the test component on exit
	defer s.DestroyAtmosComponent(s.T(), component, stack, nil)
	// Deploy the test component
	options, _ := s.DeployAtmosComponent(s.T(), component, stack, nil)
	assert.NotNil(s.T(), options)

	// Get a test component output
	id := atmos.Output(s.T(), options, "eks_cluster_id")
	assert.True(s.T(), strings.HasPrefix(id, "eg-default-ue2-test-"))

	// Test the component for drift
	s.DriftTest(component, stack, nil)
}

// Test the disabled use case
func (s *ComponentSuite) TestEnabledFlag() {
	const component = "example/disabled"
	const stack = "default-test"
	// Verify no resources are created when `enabled: false`
	s.VerifyEnabledFlag(component, stack, nil)
}
```
### CLI Flags Cheat Sheet
A test suite run consists of the following phases all of which can be controlled by passing flags:
| Phase | Description | Skip flag |
|------------|-------------------------------------------|----------------------------|
| Setup | Setup test suite and deploy dependencies |`--skip-setup` |
| Test | Deploy the component |`--only-deploy-dependencies`|
| Teardown | Destroy all dependencies |`--skip-teardown` |
It is possible to enable/disable individual steps within each phase more precisely:
| Phase | Description | Skip flag |
|------------|--------------------------------------------------------|-----------------------------|
| Setup | Vendor dependencies |`--skip-vendor` |
| Setup | Deploy component dependencies |`--skip-deploy-dependencies` |
| Test | Deploy the component |`--skip-deploy-component` |
| Test | Perform assertions | |
| Test | Destroy the deployed component (on defer) |`--skip-destroy-component` |
| Teardown | Destroy all dependencies |`--skip-destroy-dependencies`|
Here are some useful combinations of flags:
| Command | Description |
|-----------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
|`go test -timeout 1h --only-deploy-dependencies --skip-destroy-dependencies` | Deploy dependencies only. |
|`go test -timeout 1h --skip-deploy-dependencies --skip-destroy-dependencies --skip-destroy-component` | Deploy the component under test, reusing previously deployed dependencies, and destroy nothing. Useful while iterating on a use-case deployment |
|`go test -timeout 1h --skip-deploy-dependencies --skip-destroy-dependencies --skip-deploy-component --skip-destroy-component`| Deploy and destroy nothing. Useful while iterating on test assertions |
|`go test -timeout 1h --skip-deploy-dependencies --skip-deploy-component` | Destroy the component and its dependencies. Useful for cleaning up all resources once your tests are done |
[Read more about the test helpers framework](https://github.com/cloudposse/test-helpers/blob/main/pkg/atmos/component-helper/README.md)
## Write Tests
Writing tests for your Terraform components is essential for building trust in the component's reliability and enabling safe acceptance of community contributions. By implementing comprehensive tests, we can confidently review and merge pull requests while ensuring the component continues to function as expected.
### Copy the test scaffold files
If you don't already have the test scaffold files, copy the contents from [this GitHub repository](https://github.com/cloudposse-terraform-components/template/tree/main/test) into your component repository.
This will provide you with the necessary structure and example tests to get started.
The repo structure should look like the following:
```console
├── src/
│   └── main.tf
└── test/
    ├── fixtures/
    │   ├── stacks/
    │   │   ├── catalog/
    │   │   │   ├── usecase/
    │   │   │   │   ├── basic.yaml
    │   │   │   │   └── disabled.yaml
    │   │   │   └── account-map.yaml
    │   │   └── orgs/default/test/
    │   │       ├── _defaults.yaml
    │   │       └── tests.yaml
    │   ├── atmos.yaml
    │   └── vendor.yaml
    ├── component_test.go
    ├── go.mod
    └── go.sum
```
### Run Initial Tests
Navigate to the `test` directory and run the tests in your terminal:
```console
cd test
go test -v -timeout 1h --only-deploy-dependencies
```
```console
➜ test git:(main) go test -v -timeout 1h --only-deploy-dependencies
=== RUN TestRunSuite
2025/03/07 14:13:34 INFO TestRunSuite: setup → started
2025/03/07 14:13:34 INFO TestRunSuite: tests will be run in temp directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212
2025/03/07 14:13:34 INFO TestRunSuite: terraform state for tests will be saved in state directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212/state
2025/03/07 14:13:34 INFO TestRunSuite: setup/bootstrap temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → started
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: vendor dependencies → started
TestRunSuite 2025-03-07T14:13:35+01:00 retry.go:91: atmos [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Running command atmos with args [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Processing vendor config file 'vendor.yaml'
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Pulling sources for the component 'account-map' from 'github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref=1.520.0' into 'components/terraform/account-map'
2025/03/07 14:13:42 INFO TestRunSuite: vendor dependencies → completed
2025/03/07 14:13:42 INFO TestRunSuite: deploy dependencies → started
2025/03/07 14:13:42 INFO no dependencies to deploy
2025/03/07 14:13:42 INFO TestRunSuite: deploy dependencies → completed
2025/03/07 14:13:42 INFO TestRunSuite: setup → completed
2025/03/07 14:13:42 WARN TestRunSuite: teardown → skipped
--- PASS: TestRunSuite (8.28s)
PASS
ok test 9.142s
```
### Add Dependencies
Identify any additional dependencies your component requires. Skip this step if the component doesn't have any dependencies.
1. Add dependency to the vendor file
{Step3VendorYaml}
1. Add atmos component configurations
{Step3VpcYaml}
1. Import the dependent component for `default-test` stack
{Step3Stack}
1. Add the dependent component to the test suite in Go code
- By default, the test suite will add a unique random value to the `attributes` terraform variable.
- This is to avoid resource naming collisions with other tests that are using the same component.
- But in some cases, you may need to pass a unique value to a specific input of the component.
Check out the advanced example for the most common use-case with the `dns-delegated` domain name.
```go title="test/component_test.go"
package test

import (
	"testing"

	helper "github.com/cloudposse/test-helpers/pkg/atmos/component-helper"
)

type ComponentSuite struct {
	helper.TestSuite
}

func (s *ComponentSuite) TestBasic() {
	// Empty test: suite setup is not executed without at least one test
}

func TestRunSuite(t *testing.T) {
	suite := new(ComponentSuite)
	// Deploy the dependent vpc component
	suite.AddDependency(t, "vpc", "default-test", nil)
	helper.Run(t, suite)
}
```
```go title="test/component_test.go"
package test

import (
	"strings"
	"testing"

	helper "github.com/cloudposse/test-helpers/pkg/atmos/component-helper"
	"github.com/gruntwork-io/terratest/modules/random"
)

type ComponentSuite struct {
	helper.TestSuite
}

func TestRunSuite(t *testing.T) {
	suite := new(ComponentSuite)
	subdomain := strings.ToLower(random.UniqueId())
	inputs := map[string]interface{}{
		"zone_config": []map[string]interface{}{
			{
				"subdomain": subdomain,
				"zone_name": "components.cptest.test-automation.app",
			},
		},
	}
	suite.AddDependency(t, "dns-delegated", "default-test", &inputs)
	helper.Run(t, suite)
}
```
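The `random.UniqueId()` helper from terratest returns a short random identifier, and lower-casing it keeps the subdomain valid as a DNS label. A stdlib-only sketch of the same idea (the 6-character length and lowercase alphanumeric charset are assumptions for illustration, not terratest's exact behavior):

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// uniqueSuffix builds a short random identifier from a lowercase
// alphanumeric charset, suitable for DNS labels and bucket names,
// to avoid naming collisions between concurrent test runs.
func uniqueSuffix(n int) string {
	const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
	var b strings.Builder
	for i := 0; i < n; i++ {
		b.WriteByte(charset[rand.Intn(len(charset))])
	}
	return b.String()
}

func main() {
	// Prints a random 6-character suffix, e.g. for a test subdomain
	fmt.Println(uniqueSuffix(6))
}
```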
1. Deploy dependencies
```console
go test -v -timeout 1h --only-deploy-dependencies --skip-destroy-dependencies
```
```console
=== RUN TestRunSuite
2025/03/07 14:13:34 INFO TestRunSuite: setup → started
2025/03/07 14:13:34 INFO TestRunSuite: tests will be run in temp directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212
2025/03/07 14:13:34 INFO TestRunSuite: terraform state for tests will be saved in state directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212/state
2025/03/07 14:13:34 INFO TestRunSuite: setup/bootstrap temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → started
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: vendor dependencies → started
TestRunSuite 2025-03-07T14:13:35+01:00 retry.go:91: atmos [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Running command atmos with args [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Processing vendor config file 'vendor.yaml'
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Pulling sources for the component 'account-map' from 'github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref=1.520.0' into 'components/terraform/account-map'
2025/03/07 14:13:42 INFO TestRunSuite: vendor dependencies → completed
2025/03/07 17:38:24 INFO TestRunSuite: deploy dependencies → started
2025/03/07 17:38:24 INFO deploying dependency component=vpc stack=default-test
TestRunSuite 2025-03-07T17:38:24+01:00 retry.go:91: atmos [terraform apply vpc -s default-test -input=false -auto-approve -var attributes=["rydpt4"] -no-color -lock=false]
TestRunSuite 2025-03-07T17:38:24+01:00 logger.go:67: Running command atmos with args [terraform apply vpc -s default-test -input=false -auto-approve -var attributes=["rydpt4"] -no-color -lock=false]
...
2025/03/07 17:43:27 INFO TestRunSuite: deploy dependencies → completed
2025/03/07 17:43:27 INFO TestRunSuite: setup → completed
2025/03/07 17:43:27 WARN TestRunSuite: teardown → skipped
--- PASS: TestRunSuite (322.74s)
PASS
ok test 324.052s
```
### Add Test Use-Cases
1. Add atmos configuration for the component use case
{Step4UsecaseYaml}
1. Import the use case for `default-test` stack
{Step4Stack}
1. Write tests
```go title="test/component_test.go"
package test

import (
	"testing"

	helper "github.com/cloudposse/test-helpers/pkg/atmos/component-helper"
	"github.com/stretchr/testify/assert"
)

type ComponentSuite struct {
	helper.TestSuite
}

func TestRunSuite(t *testing.T) {
	suite := new(ComponentSuite)
	suite.AddDependency(t, "vpc", "default-test", nil)
	helper.Run(t, suite)
}

func (s *ComponentSuite) TestBasic() {
	const component = "example-component/basic"
	const stack = "default-test"
	const awsRegion = "us-east-2"

	// How to read outputs from the dependent component:
	// vpcOptions, err := s.GetAtmosOptions("vpc", stack, nil)
	// id := atmos.Output(s.T(), vpcOptions, "id")

	inputs := map[string]interface{}{
		// Add any inputs required for the use case
	}

	defer s.DestroyAtmosComponent(s.T(), component, stack, &inputs)
	options, _ := s.DeployAtmosComponent(s.T(), component, stack, &inputs)
	assert.NotNil(s.T(), options)
}
```
1. Deploy test component
```console
go test -v -timeout 1h --skip-deploy-dependencies --skip-destroy-dependencies --skip-destroy-component --skip-teardown
```
```console
=== RUN TestRunSuite
2025/03/07 14:13:34 INFO TestRunSuite: setup → started
2025/03/07 14:13:34 INFO TestRunSuite: tests will be run in temp directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212
2025/03/07 14:13:34 INFO TestRunSuite: terraform state for tests will be saved in state directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212/state
2025/03/07 14:13:34 INFO TestRunSuite: setup/bootstrap temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → started
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: vendor dependencies → started
TestRunSuite 2025-03-07T14:13:35+01:00 retry.go:91: atmos [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Running command atmos with args [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Processing vendor config file 'vendor.yaml'
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Pulling sources for the component 'account-map' from 'github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref=1.520.0' into 'components/terraform/account-map'
2025/03/07 14:13:42 INFO TestRunSuite: vendor dependencies → completed
2025/03/07 17:38:24 INFO TestRunSuite: deploy dependencies → skipped
2025/03/07 17:43:27 INFO TestRunSuite: setup → completed
...
2025/03/07 17:43:27 WARN TestRunSuite: teardown → skipped
--- PASS: TestRunSuite (322.74s)
--- PASS: TestRunSuite/TestBasic (3.19s)
PASS
ok test 324.052s
```
### Add Assertions
1. Include assertions
Within your test, include assertions to validate the expected outcomes. Use Go's testing package to assert conditions that must
be true for the test to pass. This will help ensure that your component behaves as expected.
```go title="test/component_test.go"
package test

import (
	"testing"

	"github.com/cloudposse/test-helpers/pkg/atmos"
	helper "github.com/cloudposse/test-helpers/pkg/atmos/component-helper"
	"github.com/stretchr/testify/assert"
)

type ComponentSuite struct {
	helper.TestSuite
}

func TestRunSuite(t *testing.T) {
	suite := new(ComponentSuite)
	suite.AddDependency(t, "vpc", "default-test", nil)
	helper.Run(t, suite)
}

func (s *ComponentSuite) TestBasic() {
	const component = "example-component/basic"
	const stack = "default-test"
	const awsRegion = "us-east-2"

	// How to read outputs from the dependent component:
	// vpcOptions, err := s.GetAtmosOptions("vpc", stack, nil)
	// id := atmos.Output(s.T(), vpcOptions, "id")

	inputs := map[string]interface{}{
		// Add any inputs required for the use case
	}

	defer s.DestroyAtmosComponent(s.T(), component, stack, &inputs)
	options, _ := s.DeployAtmosComponent(s.T(), component, stack, &inputs)
	assert.NotNil(s.T(), options)

	// How to read a string output from the component
	output1 := atmos.Output(s.T(), options, "output_name_1")
	assert.Equal(s.T(), "expected_value_1", output1)

	// How to read a list-of-strings output from the component
	output2 := atmos.OutputList(s.T(), options, "output_name_2")
	assert.Equal(s.T(), "expected_value_2", output2[0])
	assert.ElementsMatch(s.T(), []string{"expected_value_2"}, output2)

	// How to read a map-of-objects output from the component
	output3 := atmos.OutputMapOfObjects(s.T(), options, "output_name_3")
	assert.Equal(s.T(), "expected_value_3", output3["key"])

	// How to read a struct output from the component
	// (the field must be exported so it can be populated via the json tag)
	type outputStruct struct {
		KeyName string `json:"key"`
	}
	output4 := outputStruct{}
	atmos.OutputStruct(s.T(), options, "output_name_4", &output4)
	assert.Equal(s.T(), "expected_value_4", output4.KeyName)
}
```
1. Run test
```console
go test -v -timeout 1h --skip-deploy-dependencies --skip-destroy-dependencies --skip-destroy-component --skip-teardown
```
```console
=== RUN TestRunSuite
2025/03/07 14:13:34 INFO TestRunSuite: setup → started
2025/03/07 14:13:34 INFO TestRunSuite: tests will be run in temp directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212
2025/03/07 14:13:34 INFO TestRunSuite: terraform state for tests will be saved in state directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212/state
2025/03/07 14:13:34 INFO TestRunSuite: setup/bootstrap temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → started
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: vendor dependencies → started
TestRunSuite 2025-03-07T14:13:35+01:00 retry.go:91: atmos [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Running command atmos with args [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Processing vendor config file 'vendor.yaml'
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Pulling sources for the component 'account-map' from 'github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref=1.520.0' into 'components/terraform/account-map'
2025/03/07 14:13:42 INFO TestRunSuite: vendor dependencies → completed
2025/03/07 17:38:24 INFO TestRunSuite: deploy dependencies → skipped
2025/03/07 17:43:27 INFO TestRunSuite: setup → completed
...
2025/03/07 17:43:27 WARN TestRunSuite: teardown → skipped
--- PASS: TestRunSuite (322.74s)
--- PASS: TestRunSuite/TestBasic (3.19s)
PASS
ok test 324.052s
```
### Add Drift Detection Test
The drift test ensures that the component does not change any resources when re-applied with the same inputs.
1. Add a "drifting test" check
```go title="test/component_test.go"
func (s *ComponentSuite) TestBasic() {
	const component = "example-component/basic"
	const stack = "default-test"
	const awsRegion = "us-east-2"

	inputs := map[string]interface{}{}

	defer s.DestroyAtmosComponent(s.T(), component, stack, &inputs)
	options, _ := s.DeployAtmosComponent(s.T(), component, stack, &inputs)
	assert.NotNil(s.T(), options)
	// ...
	// Add this line to check for drift
	s.DriftTest(component, stack, &inputs)
}
```
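Conceptually, a drift test re-plans the component after a successful apply and asserts an empty plan. With plain Terraform you could approximate this with `terraform plan -detailed-exitcode`, whose documented exit codes are 0 (no changes), 1 (error), and 2 (changes present). A sketch of interpreting those codes — an illustration of the concept, not how `DriftTest` is actually implemented:

```go
package main

import "fmt"

// driftDetected interprets the exit code of
// `terraform plan -detailed-exitcode`:
//   0 = no changes (no drift), 2 = changes present (drift),
//   anything else = the plan itself failed.
func driftDetected(exitCode int) (bool, error) {
	switch exitCode {
	case 0:
		return false, nil
	case 2:
		return true, nil
	default:
		return false, fmt.Errorf("terraform plan failed with exit code %d", exitCode)
	}
}

func main() {
	fmt.Println(driftDetected(0)) // false <nil>
	fmt.Println(driftDetected(2)) // true <nil>
}
```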
1. Run test
```console
go test -v -timeout 1h --skip-deploy-dependencies --skip-destroy-dependencies --skip-destroy-component --skip-teardown
```
```console
=== RUN TestRunSuite
2025/03/07 14:13:34 INFO TestRunSuite: setup → started
2025/03/07 14:13:34 INFO TestRunSuite: tests will be run in temp directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212
2025/03/07 14:13:34 INFO TestRunSuite: terraform state for tests will be saved in state directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212/state
2025/03/07 14:13:34 INFO TestRunSuite: setup/bootstrap temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → started
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: vendor dependencies → started
TestRunSuite 2025-03-07T14:13:35+01:00 retry.go:91: atmos [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Running command atmos with args [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Processing vendor config file 'vendor.yaml'
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Pulling sources for the component 'account-map' from 'github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref=1.520.0' into 'components/terraform/account-map'
2025/03/07 14:13:42 INFO TestRunSuite: vendor dependencies → completed
2025/03/07 17:38:24 INFO TestRunSuite: deploy dependencies → skipped
2025/03/07 17:43:27 INFO TestRunSuite: setup → completed
...
2025/03/07 17:43:27 WARN TestRunSuite: teardown → skipped
--- PASS: TestRunSuite (322.74s)
--- PASS: TestRunSuite/TestBasic (3.19s)
PASS
ok test 324.052s
```
### Test `disabled` Use-case
All components should avoid creating any resources if the `enabled` input is set to `false`.
1. Add atmos configuration for the component use case
{Step7UsecaseYaml}
1. Import the use case for `default-test` stack
{Step7Stack}
1. Add a "disabled" use case test
```go title="test/component_test.go"
// ...
func (s *ComponentSuite) TestEnabledFlag() {
	const component = "example-component/disabled"
	const stack = "default-test"
	s.VerifyEnabledFlag(component, stack, nil)
}
```
1. Run test
```console
go test -v -timeout 1h --skip-deploy-dependencies --skip-destroy-dependencies --skip-destroy-component --skip-teardown
```
```console
=== RUN TestRunSuite
2025/03/07 14:13:34 INFO TestRunSuite: setup → started
2025/03/07 14:13:34 INFO TestRunSuite: tests will be run in temp directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212
2025/03/07 14:13:34 INFO TestRunSuite: terraform state for tests will be saved in state directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212/state
2025/03/07 14:13:34 INFO TestRunSuite: setup/bootstrap temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → started
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: vendor dependencies → started
TestRunSuite 2025-03-07T14:13:35+01:00 retry.go:91: atmos [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Running command atmos with args [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Processing vendor config file 'vendor.yaml'
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Pulling sources for the component 'account-map' from 'github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref=1.520.0' into 'components/terraform/account-map'
2025/03/07 14:13:42 INFO TestRunSuite: vendor dependencies → completed
2025/03/07 17:38:24 INFO TestRunSuite: deploy dependencies → skipped
2025/03/07 17:43:27 INFO TestRunSuite: setup → completed
...
2025/03/07 17:43:27 WARN TestRunSuite: teardown → skipped
--- PASS: TestRunSuite (322.74s)
--- PASS: TestRunSuite/TestBasic (3.19s)
--- PASS: TestRunSuite/TestEnabledFlag (1.02s)
PASS
ok test 324.052s
```
### Tear Down Resources
Tear down the test environment
```console
go test -v -timeout 1h --skip-deploy-dependencies
```
```console
=== RUN TestRunSuite
2025/03/07 14:13:34 INFO TestRunSuite: setup → started
2025/03/07 14:13:34 INFO TestRunSuite: tests will be run in temp directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212
2025/03/07 14:13:34 INFO TestRunSuite: terraform state for tests will be saved in state directory path=/var/folders/1l/hcm6nfms6g58mdrpwcxklsvh0000gn/T/atmos-test-helper3047340212/state
2025/03/07 14:13:34 INFO TestRunSuite: setup/bootstrap temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → started
2025/03/07 14:13:34 INFO TestRunSuite: setup/copy component to temp dir → completed
2025/03/07 14:13:34 INFO TestRunSuite: vendor dependencies → started
TestRunSuite 2025-03-07T14:13:35+01:00 retry.go:91: atmos [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Running command atmos with args [vendor pull]
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Processing vendor config file 'vendor.yaml'
TestRunSuite 2025-03-07T14:13:35+01:00 logger.go:67: Pulling sources for the component 'account-map' from 'github.com/cloudposse/terraform-aws-components.git//modules/account-map?ref=1.520.0' into 'components/terraform/account-map'
2025/03/07 14:13:42 INFO TestRunSuite: vendor dependencies → completed
2025/03/07 17:38:24 INFO TestRunSuite: deploy dependencies → completed
2025/03/07 17:43:27 INFO TestRunSuite: setup → completed
...
2025/03/07 17:43:27 WARN TestRunSuite: teardown → completed
--- PASS: TestRunSuite (322.74s)
--- PASS: TestRunSuite/TestBasic (3.19s)
--- PASS: TestRunSuite/TestEnabledFlag (1.02s)
PASS
ok test 324.052s
```
## FAQ
### Why do my tests fail when looking up remote state for components?
If you encounter an error like:
```
Error: Attempt to get attribute from null value
...
│ module.s3_bucket.outputs is null
...
This value is null, so it does not have any attributes.
```
This typically occurs when using an older version of the remote-state module. The solution is to upgrade to version `1.8.0` or higher of the `cloudposse/stack-config/yaml//modules/remote-state` module. For example:
```hcl
module "s3_bucket" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.8.0"
component = var.destination_bucket_component_name
context = module.this.context
}
```
### How do I handle dependencies in my tests?
When testing components that depend on other infrastructure (like EKS clusters, VPCs, or other foundational components), you need to configure and deploy these dependencies in your test suite. This is done by adding dependencies to the stack test fixtures and deploying before running the tests. For example:
```go
func TestRunSuite(t *testing.T) {
	suite := new(ComponentSuite)
	// Add dependencies
	suite.AddDependency(t, "s3-bucket/cloudwatch", "default-test", nil)
	helper.Run(t, suite)
}
```
---
## GitHub Contributors
## About
Cloud Posse maintains 300+ projects under our GitHub organization. All of our projects have stemmed from past
consulting engagements with our customers. Everything we do is open sourced under the permissive APACHE2 license. With so many projects, however, it wouldn't be possible to maintain them all without the
support of our community and some tools to make life easier.
## Our Tools
Here are some of the tools we depend on for running our Open Source organization.
| Tool | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `geodesic` | [Cloud Automation Shell](https://github.com/cloudposse/geodesic) that we use as the base Docker image for many of our projects |
| `build-harness` | Our collection of GNU-style `Makefiles` for building stuff |
| `README.yaml` | Our specification for generating beautiful READMEs |
| `packages` | Our toolchain that we rely on throughout our projects |
| GitHub Actions | Our CI/CD Platform |
## How to Become a Contributor
Becoming a contributor is easy. Just start opening Pull Requests with enhancements, bug fixes, or other improvements.
Once we take notice, we'll reach out to you. We recommend that you start by participating in the
[`#pr-reviews`](https://slack.cloudposse.com/) channel. This way we'll work with you directly.
## Responsibilities
* Participate in Code Reviews. Help us out by reviewing pull requests from our community.
* Report issues and provide feedback for how we can improve processes.
* Help answer questions from our community of tens of thousands of users from around the world.
* Cut releases when Pull Requests are merged to master.
## Current Contributors
We're grateful to our volunteers helping to review community pull requests.
| Avatar | GitHub Username | Name |
| -------------------------------------------------------------------------------- | -------------------------------------------- | ------------- |
|  | [@osterman](https://github.com/osterman) | Erik Osterman |
|  | [@aknysh](https://github.com/aknysh) | Andriy Knysh |
|  | [@goruha](https://github.com/goruha) | Igor Rodionov |
|  | [@nuru](https://github.com/nuru) | Jeremy |
|  | [@jamengual](https://github.com/jamengual) | PePe Amengual |
|  | [@adamcrews](https://github.com/adamcrews) | Adam Crews |
|  | [@nitrocode](https://github.com/nitrocode) | Ronak |
|  | [@RothAndrew](https://github.com/RothAndrew) | Andrew Roth |
|  | [@Gowiem](https://github.com/Gowiem) | Matt Gowie |
---
## Contributor Tips & Tricks
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Note from '@site/src/components/Note';
This document is intended to describe common and not-so-common processes that the contributor team executes as part of maintaining the 300+ open source projects within the Cloud Posse Organization.
## Tips & Tricks
### Update Multiple Repos at Once
To update many of the open source repos with a common change such as updating Terraform `required_version` pinning, adding GitHub actions, or updating pinned providers, the contributor team has adopted using [microplane](https://github.com/Clever/microplane). This tool allows us to execute automated changes across dozens or even hundreds of our open source repos, which saves many hours of contributor time.
Here is a standard usage pattern that contributors can adapt to specific changes as they see fit:
1. [Download the microplane binary from their releases page](https://github.com/Clever/microplane/releases)
1. Open your terminal, rename and add the downloaded binary into your $PATH, and add execution privileges to the binary:
1. `mv ~/Downloads/mp-0.0.21-darwin-amd64 /usr/local/bin/mp && chmod 755 /usr/local/bin/mp`
1. Add a [GH Personal Access Token](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token) to your shell's environment variables under the variable `GITHUB_API_TOKEN`:
1. `export GITHUB_API_TOKEN=$YOUR_TOKEN`
1. Change to an empty directory that you can use as a scratch workspace for your Many Repos change:
1. `mkdir ~/mp_workspace && cd ~/mp_workspace`
1. Initialize microplane:
1. `mp init --all-repos "cloudposse"`
1. Initializing creates an `mp/` folder in your current directory, finds all of the Cloud Posse public projects, and then creates an `mp/init.json` file describing them.
1. NOTE: microplane supposedly has the ability to search against an organization and narrow the returned repos that end up in `init.json`, but that functionality appears to be buggy. We do our repo filtering manually in the next step.
1. Manually edit `mp/init.json` to only include the repos which you want to make changes against.
1. Your editor's multi-select and edit capabilities are your friend here! (or maybe some [`jq`](https://stedolan.github.io/jq/) if that's your thing)
1. Duplicate the original `init.json` so you don't need to re-run `mp init` in the case that you want to start fresh.
1. Run microplane's clone command to pull all of the repos specified in `init.json` down to your local machine for mass changes:
1. `mp clone`
1. Create a bash script to facilitate the changes that you're attempting to make against the many repos you've specified.
1. Use the microplane `-r` or `--repo` flag to operate on a single repo for testing your script prior to making the changes across all repos.
1. Go through the full microplane process (complete the following steps) for this single repo test-run and get it signed off by the other Cloud Posse `#contributors` to ensure everyone agrees with the change prior to making it.
1. Once you've got a script that is working like you expect, you can run the microplane plan command. This step executes the given script across all the repos specified in `init.json` and then commits the result. You should format your `mp plan` command as follows:
```bash
mp plan -b $YOUR_BRANCH_NAME -m "[AUTOMATED] $YOUR_PR_TITLE
## What
1. $INFO_ON_YOUR_CHANGE_NUMBER_ONE
1. $INFO_ON_YOUR_CHANGE_NUMBER_TWO
## Why
1. $INFO_ON_WHY_YOU_MADE_YOUR_CHANGE_NUMBER_ONE
1. $INFO_ON_WHY_YOU_MADE_YOUR_CHANGE_NUMBER_TWO" \
-- \
sh -c $PWD/$YOUR_SCRIPT_NAME.sh
```
Note the quotes around the message: they allow the multi-line markdown above to be passed as a single argument to `-m` (message).
1. Verify one of the repos that you're updating exemplifies the changes you're trying to make:
1. `cd mp/terraform-aws-tfstate-backend/plan/planned/`
1. `git show`
1. Confirm the changes and commit message that you see are what you want to push.
1. If everything looks legit, ship it:
1. `mp push -a $YOUR_GITHUB_USERNAME`
That should cycle through all of the repos in `init.json`, pushing them to the branch you specified, and creating a PR against the `master` branch. Now go forth and run `/test all` against all of those PRs and ask some kind soul to help you get them merged 😎.
---
## GitHub Contributors FAQ
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
Welcome to the Cloud Posse Contributors FAQ! This guide answers common questions about contributing to our open source projects. Whether you're a first-time contributor or a seasoned maintainer, you'll find helpful information about our processes, best practices, and community guidelines. If you don't find what you're looking for here, feel free to reach out through our community channels.
## How do I ask questions?
There are several ways to get help and ask questions:
1. **GitHub Discussions**
- Visit our [GitHub Discussions](https://github.com/orgs/cloudposse/discussions)
- Search existing discussions to see if your question has already been answered
- Create a new discussion if you can't find an answer
2. **SweetOps Slack**
- Join our [SweetOps Slack workspace](https://slack.sweetops.com/)
- Recommended channels:
- `#terraform` - For Terraform-related questions
- `#aws` - For AWS-specific questions
- `#cloudposse` - For general Cloud Posse questions
- `#atmos` - For Atmos-related questions
- Please search the channel history before asking questions
3. **GitHub Issues**
- For bug reports or feature requests, create an issue in the relevant repository
- Make sure to follow the issue template and provide all requested information
4. **Documentation**
- Check our [documentation](/learn/) for answers to common questions
- Many questions can be answered by reading the relevant documentation
## How do I see all open Pull Requests?
You can find all open Pull Requests by going to [GitHub and searching for open PRs](https://github.com/pulls?q=is%3Apr+is%3Aopen+org%3Acloudposse).
## What if we approve and merge a Pull Request with a problem?
We encourage everyone who uses our modules to practice version pinning. So while we try to ensure `master` is always stable,
we're not concerned if we occasionally break things. Also, we believe in a blameless culture: we would rather figure out and fix
why something happened than blame or chastise our volunteers.
## What best practices should we follow?
See our [Terraform Best Practices](/best-practices/terraform) and [Best Practices](/best-practices/). These are just some guidelines to follow and we're open to your feedback!
## What benefits do I receive as a contributor?
As a contributor, you'll be able to expedite the reviews of Pull Requests for your organization by having a direct
line of communication with our community of volunteers.
## Are contributors paid?
All of our contributors are volunteers. Granted, some of our "volunteers" happen to work for Cloud Posse. They get paid! =)
## How do contributors collaborate?
Contributors participate in a private Slack channel on the [SweetOps Slack team](https://slack.sweetops.com/) and via GitHub on issues and pull requests.
## When do we cut new releases?
We cut a release every single merge to `master`.
## What is our versioning strategy?
We practice [`semver`](https://semver.org).
Our versioning strategy allows us to systematically and consistently increase patch, minor and major releases. When in doubt, bump the minor release.
Following this strategy allows us to move quickly, release often while enabling our community to version pin for stability, and still convey the *semantics* of the kind of change that happened.
1. **Patch Releases** We bump the patch release for bug fixes of *existing* functionality or small updates to documentation
2. **Minor Releases** For projects that are `< 1.x`, every merge to `master` that is not a patch is a minor release. This is the proper [semver convention](https://semver.org/#spec-item-4) for `0.x.y` releases.
- While we always try to ensure the interfaces won't change radically, we cannot promise that will remain the case, especially when the tool itself (e.g. `terraform`) is not yet `1.0`.
- Once the interface is more or less guaranteed to be stable we will release a 1.0.
3. **Major Releases** The major version is milestone-driven (e.g. `> 1.x`). The first milestone is always stability. A major release will correspond to the previous minor release that closes out that milestone.
- The 1.0 milestone doesn’t happen until we have had a very long burn-in period where it is stable and the interface works. For comparison, the `terraform` language has been `0.x` since July 28, 2014.
- **After 1.0** all major releases are driven by achieving a particular feature set
A common strategy practiced by other organizations is to bump the major release when there's a “known breaking change,” usually bundling many changes all at once. This is typically practiced post-1.0, and it's still somewhat arbitrary and difficult to verify. Philosophically speaking, every change is breaking for somebody.
For example, if a project has a bug, chances are that someone has implemented a workaround for that bug. If we release a bug fix as a patch release, that could very well be a breaking change for anyone who had a workaround. By releasing frequently on every commit to `master`,
we allow the greatest number of users to benefit from the work we do. If we break something, no big deal. Users should always practice strict version pinning - never using `master` directly. That way, users can just pin to the previous release of a module. As a small organization managing *hundreds* of projects, attempting a formal release schedule for each project is not feasible.
## How do we create a new release?
As a member of the `@cloudposse/contributors` team, to create a new release, use the [built-in GitHub release functionality](https://help.github.com/en/enterprise/2.13/user/articles/creating-releases). Please do not create releases manually by creating tags and pushing them, as this lacks all the metadata associated with a release, which can have a rich markdown description. All GitHub releases also have tags, but not all tags have a GitHub release.
:::caution
Versions must follow the [`semver`](https://semver.org) convention. Do not prefix releases with `v` (e.g. a *good* version is `0.1.0` and a *bad* version is `v0.1.0`).
:::
## Why are releases not always in sequential order?
Some of our `terraform` modules support backwards compatibility with HCLv1 (pre terraform 0.12). You'll notice these projects usually have a branch named `0.11/master`. When we accept a bugfix for one of these projects and merge to `master`, we will cut a patch release against the last minor release version for terraform 0.11.
:::info
We're not accepting new features for pre-terraform-0.12 modules.
:::
## Why is my Terraform Pull Request not yet reviewed or merged?
If your Pull Request is to upgrade a Terraform module from HCLv1 to HCLv2, then chances are we haven't approved it because it does not have `terratest` integration tests. As a general policy, we're only upgrading modules to HCLv2 that have `terratest` integration tests. Attempting to maintain stability
with hundreds of modules is only possible with integration testing.
:::info
All Terraform Modules updated to HCL2 **must** have `terratest` integration tests.
:::
## Do we have to update integration tests?
We do not expect contributors to be experts at integration testing or writing Golang. For that reason, we do not require that Open Source community contributors update integration tests. However, if existing tests break due to changes in a Pull Request, we will not accept the contributions until the tests pass.
## How are Pull Requests merged? Can I merge my own Pull Requests?
Once a Pull Request is approved and tests pass, it may be merged by anyone with merge permissions. Note that if any changes are pushed to the branch, the approval is automatically dismissed; this is why we let you merge your own PRs. Approvers are free to leave a Pull Request open so that the originator of the PR has the opportunity to change things if something comes up.
While we try to keep `master` stable, it's just a best-effort. If something goes wrong, it's better that we address what broke down procedurally (e.g. improving tests, communication, etc.), than micro-managing the merging process.
We recommend users version pin to releases for stability and never pin to master.
After merging a Pull Request to `master`, then cut a release. We cut a release for every merge to master. If it's a bug fix, bump the patch release (e.g. `0.0.x`). If it's a new feature, bump the minor (e.g. `0.x.0`). It's that easy! Review the rest of this FAQ for more details on our `semver` strategy.
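The bump rule is mechanical enough to sketch in a few lines of shell. This is a hypothetical helper for illustration, not part of any Cloud Posse tooling:

```shell
# next_version VERSION KIND -> prints the next semver;
# "fix" bumps the patch, anything else bumps the minor and resets the patch.
next_version() {
  ver=$1 kind=$2
  major=${ver%%.*}
  rest=${ver#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  if [ "$kind" = "fix" ]; then
    echo "$major.$minor.$((patch + 1))"
  else
    echo "$major.$((minor + 1)).0"
  fi
}

next_version 0.4.2 fix      # bug fix     -> 0.4.3
next_version 0.4.2 feature  # new feature -> 0.5.0
```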
## What are the merge constraints?
All of our GitHub repositories implement the following convention with branch protections:
1. At least (1) approver determined by the [`CODEOWNER`](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners) file
2. Required tests passing
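For illustration, a minimal `CODEOWNERS` file satisfying constraint (1) might look like this (the team name here is an example; each repository defines its own owners):

```
# .github/CODEOWNERS -- every file requires review from this team
* @cloudposse/contributors
```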
---
## Our GitHub
import Steps from '@site/src/components/Steps';
import Note from '@site/src/components/Note';
## Our Commitment
We commit to always provide free and public access to our Open Source repositories. If you see a repository on our GitHub today, then it will be there tomorrow and thereafter in perpetuity.
From time to time, we might decide we can no longer maintain a repository. If that happens, we will mark it as "archived" on GitHub. This will ensure you will continue to have access to the code.
## Getting Involved
The best way to get involved is to check out our "[Choose Your Path](/intro/path/)" guide.
Then join us in our Slack [`#community`](https://cloudposse.com/slack/) channel to get support or talk with others in the community.
## Contributing
Cloud Posse accepts contributions from the community. In the interest of fostering an open and welcoming environment, we have a strict [code of conduct](/community/code-of-conduct) to ensure participation in our projects and our community is a harassment-free experience for everyone.
If you want to make some big changes or don't know where to begin, it's best to first get in touch. You can discuss the change you wish to make via GitHub issues, [email](mailto:hello@cloudposse.com), or join our [`#community`](https://cloudposse.com/slack/) channel.
In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
1. Fork the repo on GitHub
2. Clone the project to your own machine
3. Commit changes to your own branch
4. Push your work back up to your fork
5. Submit a Pull request so that we can review your changes
Be sure to merge the latest from "upstream" before making a pull request!
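The five steps translate into git commands roughly as follows. The demonstration below uses throwaway local repositories so it can be run anywhere; in practice, `origin` would be your fork's GitHub URL:

```shell
# Stand-in for GitHub: a local "upstream" repo that we then clone as a "fork"
git init -q -b master upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "initial"
git clone -q upstream fork        # steps 1-2: fork, then clone your fork
cd fork
git checkout -q -b my-feature     # step 3: commit changes to your own branch
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "my change"
git fetch -q origin               # merge the latest from upstream first
git rebase -q origin/master
git push -q origin my-feature     # step 4: push your work back up
# step 5: open the Pull Request in the GitHub UI or with the gh CLI
```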
## Maintenance
Cloud Posse actively maintains all projects on [our GitHub](https://github.com/cloudposse/).
## Deprecation and Archival Process
From time to time, we may need to deprecate and archive repositories that are no longer actively maintained. We follow a structured process to ensure transparency and give the community adequate notice.
This process applies to all Cloud Posse repositories, including:
- Terraform modules in the [cloudposse](https://github.com/cloudposse/) organization
- Terraform components in the [cloudposse-terraform-components](https://github.com/cloudposse-terraform-components) organization
- GitHub Actions and other tooling
### Step 1: Create and Pin a GitHub Issue
Create a comprehensive GitHub issue that includes:
- Detailed explanation of the deprecation
- Timeline and key dates
- Migration path or alternatives
- Answers to common questions
- Contact information for support
Pin this issue to the repository so it appears at the top of the Issues tab.
### Step 2: Add Deprecation Notice to README
Add a deprecation warning at the top of the `README.md` using a GitHub-style warning admonition:
```markdown
> [!WARNING]
> **Deprecated**: This repository is deprecated and will be archived on [DATE].
> Please see [Issue #XXX](link-to-pinned-issue) for more information.
```
### Step 3: Update README.yaml
If the project uses `README.yaml` for generating documentation, add the `deprecated` field:
```yaml
deprecated:
  notice: |-
    This module is deprecated and will be archived on [DATE].
    Please see the [pinned issue](link-to-pinned-issue) for details and migration guidance.
    Consider using [alternative-module](link) as a replacement.
```
After updating `README.yaml`, regenerate the `README.md`:
```bash
atmos docs generate readme
```
### Step 4: Publish Blog Post Announcement
Create a blog post announcing the deprecation. This post should:
- Link to the pinned GitHub issue
- Explain the reason for deprecation
- Provide the timeline and deprecation date
- Offer migration guidance or alternatives
- Direct readers where to ask questions
This ensures the broader community is aware of the deprecation, even if they're not actively monitoring the repository.
### Step 5: Submit Pull Request
Create a pull request with the changes from Steps 2-3. This PR provides visibility to those monitoring repository activity. The PR should:
- Clearly state the reason for deprecation
- Specify the planned deprecation date
- Provide migration guidance or alternative solutions (if applicable)
- Reference the GitHub issue created in Step 1
- Tag relevant stakeholders for visibility
### Step 6: Wait Until Deprecation Date
Allow sufficient time (typically 90+ days) for the community to:
- Migrate away from the deprecated component
- Ask questions and get support
- Complete any in-flight work
### Step 7: Archive the Repository
Once the deprecation date has passed:
1. Update the `.github/settings.yml` file in the repository:
```yaml
repository:
  archived: true
```
2. Commit and merge this change
3. The repository will be automatically archived by GitHub settings automation
Once archived, the repository becomes read-only but remains publicly accessible for historical reference.
### Step 8: Update Blog Post
Update the blog post from Step 4 to reflect that the repository has been archived:
- Add a note that the deprecation period has ended
- Confirm the repository is now archived
- Remind readers that the code remains publicly accessible for historical reference
## GitHub Projects
There's a lot going on in our GitHub. With over [200 Open Source repositories](https://github.com/cloudposse/), keeping track of all the [Open Issues](https://github.com/search?q=org%3Acloudposse+type%3Aissues+is%3Aopen), Feature Requests, and Pull Requests is a full-time job.
We use a Kanban board to manage our [Open Source projects](https://github.com/orgs/cloudposse/projects/3).
(**Help wanted!**)
## Something Missing?
[Get in touch](/community/contact-us) with us.
---
## Office Hours Registration
import CloudPosseOfficeHoursEmbed from '@site/src/components/CloudPosseOfficeHoursEmbed';
import { YouTubePlaylist } from '@codesweetly/react-youtube-playlist';
## Past Recordings
---
## #refarch
import CloudPosseSlackEmbed from '@site/src/components/CloudPosseSlackEmbed';
## Join our Slack Community!
Cloud Posse has a great community of active users who are more than willing to help each other. So, join us!
---
## Community Support
import Intro from '@site/src/components/Intro';
Cloud Posse is an **open source company**. Everything we develop is open source and free to use under permissive licenses, but **Support is not included**.
:::tip
If your team depends on our work, the best way to support Cloud Posse—and to ensure your team gets the help it needs—is by subscribing to one of our [premium support options](/support).
:::
## Getting Help
We have a [Slack community](https://cloudposse.com/slack) and a [GitHub Discussions forum](https://github.com/orgs/cloudposse/discussions) for questions and collaboration relating to our open source projects.
### Recommended etiquette:
- **Ask one question at a time** — split complex topics into separate posts.
- **Keep it high-level and clear** — we prioritize general questions that help the broader community.
- **Deeper technical or commercial-related questions** may be answered from time to time, **solely at our discretion**.
## Free Tier Support
**Free Tier Support does not include** support for:
- [Quickstart](/quickstart), [Jumpstart](/jumpstart), or other Commercial Reference Architectures
- [Cloud Posse Components](/components)
- [GitHub Actions and workflows](/github-actions)
- **Deliverables from paid implementations or services**
Our projects are open source and free to use — **but support is not included**.
We occasionally answer complex or commercial-related questions here, but this is voluntary and unscheduled. If your team depends on our work, we strongly encourage you to [subscribe to premium support](https://cloudposse.com/support).
**That’s what makes open source work.**
---
## Terraform Components
import Intro from "@site/src/components/Intro";
import DocCardList from "@theme/DocCardList";
This is a library of reusable Terraform "root module" components.
:::info
## Terraform Component GitHub Repository Has Moved!
The GitHub repository for Cloud Posse's Terraform components has migrated to a [dedicated GitHub organization](https://github.com/cloudposse-terraform-components). All documentation remains here, but all future updates, contributions, and issue tracking for the source code should now be directed to the respective repositories in the new organization.
[Learn more](/learn/maintenance/tutorials/how-to-update-components-yaml-to-new-organization/) about updating your references to point to the new repositories.
:::
---
## access-analyzer
This component is responsible for configuring AWS Identity and Access Management Access Analyzer within an AWS
Organization.
IAM Access Analyzer helps identify resources in your organization and accounts that are shared with external entities,
as well as unused access permissions. This enables you to identify unintended access to your resources and data, which
is a critical security risk. Access Analyzer uses logic-based reasoning to analyze resource-based policies in your AWS
environment and generates findings for each instance of a resource shared outside your account.
## Key Features
- **External Access Analysis**: Identifies resources shared with external principals outside your organization
- **Unused Access Analysis**: Detects unused IAM roles, users, and permissions to implement least privilege
- **Policy Validation**: Validates IAM policies against policy grammar and AWS best practices
- **Custom Policy Checks**: Validates IAM policies against your specified security standards
- **Policy Generation**: Generates least-privilege IAM policies based on CloudTrail access activity
## Analyzer Types
This component creates two types of organization-wide analyzers:
| Analyzer Type | Purpose | Findings |
|---------------|---------|----------|
| `ORGANIZATION` | External access analysis | Public access, cross-account access, cross-organization access |
| `ORGANIZATION_UNUSED_ACCESS` | Unused access analysis | Unused roles, users, permissions (configurable threshold) |
## Supported Resources
External access analyzer monitors the following resource types:
- Amazon S3 buckets and access points
- IAM roles and policies
- AWS KMS keys
- AWS Lambda functions and layers
- Amazon SQS queues
- AWS Secrets Manager secrets
- Amazon SNS topics
- Amazon EBS volume snapshots
- Amazon RDS DB snapshots and cluster snapshots
- Amazon ECR repositories
- Amazon EFS file systems
## Regional Deployment
IAM Access Analyzer is a regional service. You must deploy analyzers to each region where you have resources that need
monitoring. The delegation from the management account only needs to happen once (globally), but analyzers must be
created in each region.
## Deployment Workflow
> **Important**: Step 1 must be completed successfully before Step 2 can run. The delegation and service-linked role
> created in Step 1 are prerequisites for creating organization-level analyzers in Step 2.
**Step 1 - Delegate Access Analyzer (Management Account)**: From the Organization management (root) account, delegate
administration to the security account. This step also creates the required service-linked role.
**Step 2 - Create Analyzers (Delegated Administrator)**: Deploy the external access and unused access analyzers in the
delegated administrator account for each region.
## Service-Linked Role
AWS Access Analyzer requires a service-linked role (`AWSServiceRoleForAccessAnalyzer`) in the organization management
account before organization-level analyzers can be created from the delegated administrator. This component
automatically creates this role when deploying to the root account with `organizations_delegated_administrator_enabled: true`.
The service-linked role creation can be controlled with the `service_linked_role_enabled` variable:
- `true` (default): Creates the service-linked role when delegating administration
- `false`: Skips creation (use if the role already exists or was created manually/by another process)
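Under the hood, this amounts to roughly the following two Terraform resources, sketched here for orientation (resource and variable names are illustrative, not the component's actual source):

```hcl
# Created in the management account when service_linked_role_enabled = true
resource "aws_iam_service_linked_role" "access_analyzer" {
  aws_service_name = "access-analyzer.amazonaws.com"
}

# Delegates Access Analyzer administration to the security account
resource "aws_organizations_delegated_administrator" "access_analyzer" {
  account_id        = var.delegated_administrator_account_id # hypothetical variable
  service_principal = "access-analyzer.amazonaws.com"

  # The service-linked role must exist before delegation succeeds
  depends_on = [aws_iam_service_linked_role.access_analyzer]
}
```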
## Configuration
> **Note**: The examples below use Cloud Posse naming conventions (e.g., `core-security` for the security account,
> `plat-gbl-root` for stack names). Adjust these values to match your organization's account and stack naming conventions.
### Defaults (Abstract Component)
```yaml
components:
  terraform:
    access-analyzer/defaults:
      metadata:
        component: access-analyzer
        type: abstract
      vars:
        enabled: true
        global_environment: gbl
        account_map_tenant: core
        root_account_stage: root
        # The account name of your delegated administrator (typically your security account)
        # Adjust to match your organization's account naming convention
        delegated_administrator_account_name: core-security
        accessanalyzer_service_principal: "access-analyzer.amazonaws.com"
        accessanalyzer_organization_enabled: false
        accessanalyzer_organization_unused_access_enabled: false
        organizations_delegated_administrator_enabled: false
        service_linked_role_enabled: true
```
### Root Account Configuration (Step 1)
```yaml
import:
  - catalog/access-analyzer/defaults

components:
  terraform:
    # Step 1: Deploy to root account to delegate administration and create service-linked role
    access-analyzer/root:
      metadata:
        component: access-analyzer
        inherits:
          - access-analyzer/defaults
      vars:
        organizations_delegated_administrator_enabled: true
        # Set to false if the service-linked role already exists
        service_linked_role_enabled: true
```
### Delegated Administrator Configuration (Step 2)
```yaml
import:
  - catalog/access-analyzer/defaults

components:
  terraform:
    # Step 2: Deploy to delegated administrator (security) account to create analyzers
    access-analyzer/delegated-administrator:
      metadata:
        component: access-analyzer
        inherits:
          - access-analyzer/defaults
      vars:
        accessanalyzer_organization_enabled: true
        accessanalyzer_organization_unused_access_enabled: true
        # Number of days without use before generating unused access findings (default: 30)
        unused_access_age: 30
```
## Provisioning
> **Note**: Replace the stack names below (e.g., `plat-gbl-root`, `plat-use1-security`) with your actual stack names
> based on your Atmos stack configuration.
**Step 1:** Delegate Access Analyzer to the security account (run once from root/management account):
```bash
# Replace with your root account stack name
atmos terraform apply access-analyzer/root -s plat-gbl-root
```
This step:
- Creates the service-linked role for Access Analyzer (if `service_linked_role_enabled: true`)
- Delegates Access Analyzer administration to the security account
**Step 2:** Create analyzers in the delegated administrator (security) account for each region:
```bash
# Replace with your security account stack names for each region
atmos terraform apply access-analyzer/delegated-administrator -s plat-use1-security
atmos terraform apply access-analyzer/delegated-administrator -s plat-usw2-security
```
This step creates the organization-wide analyzers:
- External access analyzer (type: `ORGANIZATION`)
- Unused access analyzer (type: `ORGANIZATION_UNUSED_ACCESS`)
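In raw Terraform, these two analyzers correspond roughly to the following `aws_accessanalyzer_analyzer` resources (the resource and analyzer names are illustrative):

```hcl
resource "aws_accessanalyzer_analyzer" "external_access" {
  analyzer_name = "org-external-access" # illustrative name
  type          = "ORGANIZATION"
}

resource "aws_accessanalyzer_analyzer" "unused_access" {
  analyzer_name = "org-unused-access" # illustrative name
  type          = "ORGANIZATION_UNUSED_ACCESS"

  configuration {
    unused_access {
      unused_access_age = 30 # days without use before a finding is generated
    }
  }
}
```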
## Cost Considerations
- **External Access Analyzer**: No additional charge (included with AWS account)
- **Unused Access Analyzer**: Charged per IAM role or user analyzed per month
- See [IAM Access Analyzer pricing](https://aws.amazon.com/iam/access-analyzer/pricing/) for current rates
## References
### AWS Documentation
- [What is IAM Access Analyzer?](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html)
- [Getting Started with Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html)
- [Access Analyzer Findings](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-findings.html)
- [Unused Access Analysis](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-unused-access.html)
- [Service-Linked Role for Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-permissions)
- [Delegated Administrator for Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-settings.html#access-analyzer-delegated-administrator)
### Terraform Resources
- [aws_accessanalyzer_analyzer](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/accessanalyzer_analyzer)
- [aws_iam_service_linked_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_service_linked_role)
- [aws_organizations_delegated_administrator](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_delegated_administrator)
### Additional Resources
- [IAM Access Analyzer Product Page](https://aws.amazon.com/iam/access-analyzer/)
- [IAM Access Analyzer Pricing](https://aws.amazon.com/iam/access-analyzer/pricing/)
- [Setting up Access Analyzer for Organization](https://repost.aws/knowledge-center/iam-access-analyzer-organization)
## Variables
### Required Variables
### Optional Variables
`accessanalyzer_service_principal` (`string`) optional
The Access Analyzer service principal for which you want to make the member account a delegated administrator
**Default value:** `"access-analyzer.amazonaws.com"`
`global_environment` (`string`) optional
Global environment name
**Default value:** `"gbl"`
`root_account_stage` (`string`) optional
The stage name for the Organization root (management) account. This is used to lookup account IDs from account names
using the `account-map` component.
**Default value:** `"root"`
`service_linked_role_enabled` (`bool`) optional
Create the service-linked role `access-analyzer.amazonaws.com` in the management account
**Default value:** `true`
`unused_access_age` (`number`) optional
The specified access age in days for which to generate findings for unused access
**Default value:** `30`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
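For example, a hypothetical `stack` descriptor could be declared like this:

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}
# With tenant = "core", environment = "use1", stage = "auto", the
# "descriptors" output would contain stack = "core-use1-auto"
```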
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_accessanalyzer_analyzer.organization`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/accessanalyzer_analyzer) (resource)
- [`aws_accessanalyzer_analyzer.organization_unused_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/accessanalyzer_analyzer) (resource)
- [`aws_iam_service_linked_role.access_analyzer`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_service_linked_role) (resource)
- [`aws_organizations_delegated_administrator.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_delegated_administrator) (resource)
## Data Sources
The following data sources are used by this module:
---
## account
This component is responsible for creating or importing a single AWS Account within an AWS Organization.
Unlike the monolithic `account` component, which manages the entire organization hierarchy,
this component follows the single-resource pattern: it manages exactly one AWS account.
:::note
This component should be deployed from the **management/root account** as it creates accounts
within AWS Organizations.
:::
## Key Features
- **Single-resource pattern**: Manages exactly one AWS account per component instance
- **Conditional import blocks** (OpenTofu/Terraform 1.7+): Easily import existing accounts into Terraform state
- **Independent lifecycle**: Each account can be managed independently without affecting others
- **Simple configuration**: Minimal variables required for account creation
## Usage
**Stack Level**: Global (deployed in the management/root account)
This component creates or imports a single AWS account. For managing the entire organization hierarchy,
see the companion components: `aws-organization`, `aws-organizational-unit`, `aws-account-settings`, and `aws-scp`.
### Basic Usage
```yaml
components:
terraform:
aws-account/core-analytics:
metadata:
component: aws-account
vars:
name: core-analytics
account_email: "aws+myorg-core-analytics@example.com"
parent_id: "ou-xxxx-xxxxxxxx"
```
### Using Remote State for Parent ID
Reference the parent OU dynamically using Atmos remote state:
```yaml
components:
terraform:
aws-account/core-analytics:
metadata:
component: aws-account
vars:
name: core-analytics
account_email: "aws+myorg-core-analytics@example.com"
parent_id: !terraform.output aws-organizational-unit/core organizational_unit_id
```
### Importing an Existing Account
To import an existing AWS account into Terraform state:
```yaml
components:
terraform:
aws-account/core-analytics:
metadata:
component: aws-account
vars:
name: core-analytics
account_email: "aws+myorg-core-analytics@example.com"
parent_id: "ou-xxxx-xxxxxxxx"
import_account_id: "123456789012"
```
After the import succeeds, you can remove the `import_account_id` variable.
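Under the hood, this relies on the conditional import blocks mentioned above (OpenTofu/Terraform 1.7+, which added `for_each` support to `import` blocks). As a hypothetical sketch only, the wiring inside the component might look roughly like this, targeting the `aws_organizations_account.this` resource listed under Resources (the exact expression is an assumption, not the component's verbatim source):

```hcl
# Hypothetical sketch: import the existing account only when
# `import_account_id` is set (requires OpenTofu/Terraform >= 1.7).
import {
  for_each = var.import_account_id != null ? toset([var.import_account_id]) : toset([])
  to       = aws_organizations_account.this
  id       = each.value
}
```

Because the set is empty when `import_account_id` is `null`, removing the variable after a successful import simply makes the import block a no-op on subsequent plans.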
### Using Catalog Defaults
Create a defaults file for consistent configuration:
```yaml
# stacks/catalog/aws-account/defaults.yaml
components:
terraform:
aws-account/defaults:
metadata:
component: aws-account
type: abstract
vars:
enabled: true
iam_user_access_to_billing: DENY
close_on_deletion: false
```
Then inherit from defaults:
```yaml
# stacks/orgs/myorg/core/root/global-region.yaml
import:
- catalog/aws-account/defaults
components:
terraform:
aws-account/core-analytics:
metadata:
component: aws-account
inherits:
- aws-account/defaults
vars:
name: core-analytics
account_email: "aws+myorg-core-analytics@example.com"
parent_id: !terraform.output aws-organizational-unit/core organizational_unit_id
```
### Complete Example with Multiple Accounts
```yaml
components:
terraform:
# Core OU Accounts
aws-account/core-analytics:
metadata:
component: aws-account
inherits:
- aws-account/defaults
vars:
name: core-analytics
account_email: "aws+myorg-core-analytics@example.com"
parent_id: !terraform.output aws-organizational-unit/core organizational_unit_id
import_account_id: "111111111111"
aws-account/core-security:
metadata:
component: aws-account
inherits:
- aws-account/defaults
vars:
name: core-security
account_email: "aws+myorg-core-security@example.com"
parent_id: !terraform.output aws-organizational-unit/core organizational_unit_id
import_account_id: "222222222222"
# Platform OU Accounts
aws-account/plat-dev:
metadata:
component: aws-account
inherits:
- aws-account/defaults
vars:
name: plat-dev
account_email: "aws+myorg-plat-dev@example.com"
parent_id: !terraform.output aws-organizational-unit/plat organizational_unit_id
import_account_id: "333333333333"
aws-account/plat-prod:
metadata:
component: aws-account
inherits:
- aws-account/defaults
vars:
name: plat-prod
account_email: "aws+myorg-plat-prod@example.com"
parent_id: !terraform.output aws-organizational-unit/plat organizational_unit_id
import_account_id: "444444444444"
```
## Related Components
This component is part of a suite of single-resource components for AWS Organizations:
| Component | Purpose |
|-----------|---------|
| `aws-organization` | Creates/imports the AWS Organization |
| `aws-organizational-unit` | Creates/imports a single Organizational Unit |
| `aws-account` | Creates/imports a single AWS Account (this component) |
| `aws-account-settings` | Configures account settings (IAM alias, S3 block, EBS encryption) |
| `aws-scp` | Creates/imports Service Control Policies |
## Variables
### Required Variables
`account_email` (`string`) required
The email address for the AWS account
`parent_id` (`string`) required
The ID of the parent Organizational Unit or organization root
`region` (`string`) required
AWS Region
### Optional Variables
`close_on_deletion` (`bool`) optional
Whether to close the account on deletion
**Default value:** `false`
`iam_user_access_to_billing` (`string`) optional
Whether IAM users can access billing. ALLOW or DENY
**Default value:** `"DENY"`
`import_account_id` (`string`) optional
The AWS account ID to import. Set this to import an existing account into Terraform state.
**Default value:** `null`
`role_name` (`string`) optional
The name of the IAM role that Organizations creates in the new member account
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`account_arn`
The ARN of the AWS account
`account_email`
The email of the AWS account
`account_id`
The ID of the AWS account
`account_name`
The name of the AWS account
`parent_id`
The parent ID of the account
## Dependencies
### Requirements
- `terraform`, version: `>= 1.7.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_organizations_account.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_account) (resource)
## Data Sources
The following data sources are used by this module:
---
## account-map
This component is responsible for provisioning information only: it simply populates Terraform state with data (account
ids, groups, and roles) that other root modules need via outputs.
## Pre-requisites
- [account](https://docs.cloudposse.com/components/library/aws/account) must be provisioned before
[account-map](https://docs.cloudposse.com/components/library/aws/account-map) component
## Usage
**Stack Level**: Global
Here is an example snippet for how to use this component. Include this snippet in the stack configuration for the
management account (typically `root`) in the management tenant/OU (usually something like `mgmt` or `core`) in the
global region (`gbl`). You can include the content directly, or create a `stacks/catalog/account-map.yaml` file and
import it from there.
```yaml
components:
terraform:
account-map:
vars:
enabled: true
# Set profiles_enabled to false unless we are using AWS config profiles for Terraform access.
# When profiles_enabled is false, role_arn must be provided instead of profile in each terraform component provider.
# This is automatically handled by the component's `provider.tf` file in conjunction with
# the `account-map/modules/iam-roles` module.
profiles_enabled: false
root_account_aws_name: "aws-root"
root_account_account_name: root
identity_account_account_name: identity
dns_account_account_name: dns
audit_account_account_name: audit
# The following variables contain `format()` strings that take the labels from `null-label`
# as arguments in the standard order. The default values are shown here, assuming
# the `null-label.label_order` is
# ["namespace", "tenant", "environment", "stage", "name", "attributes"]
# Note that you can rearrange the order of the labels in the template by
# using [explicit argument indexes](https://pkg.go.dev/fmt#hdr-Explicit_argument_indexes) just like in `go`.
# `iam_role_arn_template_template` is the template for the template [sic] used to render Role ARNs.
# The template is first used to render a template for the account that takes only the role name.
# Then that rendered template is used to create the final Role ARN for the account.
iam_role_arn_template_template: "arn:%s:iam::%s:role/%s-%s-%s-%s-%%s"
# `profile_template` is the template used to render AWS Profile names.
profile_template: "%s-%s-%s-%s-%s"
```
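To make the two-pass rendering of `iam_role_arn_template_template` concrete, here is a hypothetical illustration (the partition, account ID, and label values are made up). Terraform's `format()` follows Go's `fmt` verbs, so the escaped `%%s` in the first pass survives as a literal `%s` placeholder for the second pass:

```hcl
locals {
  # First pass: fill in the partition, account ID, and null-label values,
  # leaving "%%s" to survive as a "%s" placeholder for the role name.
  arn_template = format(
    "arn:%s:iam::%s:role/%s-%s-%s-%s-%%s",
    "aws", "123456789012", "eg", "core", "gbl", "identity",
  )
  # arn_template == "arn:aws:iam::123456789012:role/eg-core-gbl-identity-%s"

  # Second pass: fill in the role name.
  role_arn = format(local.arn_template, "terraform")
  # role_arn == "arn:aws:iam::123456789012:role/eg-core-gbl-identity-terraform"
}
```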
## Variables
### Required Variables
### Optional Variables
`iam_role_arn_template_template` (`string`) optional
The template for the template used to render Role ARNs.
The template is first used to render a template for the account that takes only the role name.
Then that rendered template is used to create the final Role ARN for the account.
Default is appropriate when using `tenant` and default label order with `null-label`.
Use `"arn:%s:iam::%s:role/%s-%s-%s-%%s"` when not using `tenant`.
Note that if the `null-label` variable `label_order` is truncated or extended with additional labels, this template will
need to be updated to reflect the new number of labels.
**Default value:** `"arn:%s:iam::%s:role/%s-%s-%s-%s-%%s"`
`identity_account_account_name` (`string`) optional
The short name for the account holding primary IAM roles
**Default value:** `"identity"`
`import_organization_accounts` (`bool`) optional
Retrieve accounts from AWS Organizations and import them into the account map.
Set false for brownfield environments where you want to curate the list of
accounts manually via the `account` component with a static backend.
Note that the brownfield `account` component needs to include the `root` account
in the `account_names_account_ids` map, whereas the greenfield `account` component
does not.
**Default value:** `true`
`legacy_terraform_uses_admin` (`bool`) optional
If `true`, the legacy behavior of using the `admin` role rather than the `terraform` role in the
`root` and `identity` accounts will be preserved.
The default is to use the negation of the value of `terraform_dynamic_role_enabled`.
**Default value:** `null`
`profile_template` (`string`) optional
The template used to render AWS Profile names.
Default is appropriate when using `tenant` and default label order with `null-label`.
Use `"%s-%s-%s-%s"` when not using `tenant`.
Note that if the `null-label` variable `label_order` is truncated or extended with additional labels, this template will
need to be updated to reflect the new number of labels.
**Default value:** `"%s-%s-%s-%s-%s"`
`profiles_enabled` (`bool`) optional
Whether or not to enable profiles instead of roles for the backend. If `true`, `profile` must be set. If `false`, `role_arn` must be set.
**Default value:** `false`
`root_account_account_name` (`string`) optional
The short name for the root account
**Default value:** `"root"`
`terraform_role_name_map` (`map(string)`) optional
Mapping of Terraform action (plan or apply) to aws-team-role name to assume for that action
**Default value:**
```hcl
{
"apply": "terraform",
"plan": "planner"
}
```
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`account_info_map`
A map from account name to various information about the account.
See the `account_info_map` output of `account` for more detail.
`all_accounts`
A list of all accounts in the AWS Organization
`artifacts_account_account_name`
The short name for the artifacts account
`audit_account_account_name`
The short name for the audit account
`aws_partition`
The AWS "partition" to use when constructing resource ARNs
`cicd_profiles` OBSOLETE
dummy results returned to avoid breaking code that depends on this output
`cicd_roles` OBSOLETE
dummy results returned to avoid breaking code that depends on this output
`dns_account_account_name`
The short name for the primary DNS account
`eks_accounts`
A list of all accounts in the AWS Organization that contain EKS clusters
`full_account_map`
The map of account name to account ID (number).
`helm_profiles` OBSOLETE
dummy results returned to avoid breaking code that depends on this output
`helm_roles` OBSOLETE
dummy results returned to avoid breaking code that depends on this output
`iam_role_arn_templates`
Map of accounts to corresponding IAM Role ARN templates
`identity_account_account_name`
The short name for the account holding primary IAM roles
`non_eks_accounts`
A list of all accounts in the AWS Organization that do not contain EKS clusters
`org`
The name of the AWS Organization
`profiles_enabled`
Whether or not to enable profiles instead of roles for the backend
`root_account_account_name`
The short name for the root account
`root_account_aws_name`
The name of the root account as reported by AWS
`terraform_access_map`
Mapping of team Role ARN to map of account name to terraform action role ARN to assume
For each team in `aws-teams`, look at every account and see if that team has access to the designated "apply" role.
If so, add an entry `<account-name> = "apply"` to the `terraform_access_map` entry for that team.
If not, see if it has access to the "plan" role, and if so, add a "plan" entry.
Otherwise, no entry is added.
`terraform_dynamic_role_enabled`
True if dynamic role for Terraform is enabled
`terraform_profiles`
A list of all SSO profiles used to run terraform updates
`terraform_role_name_map`
Mapping of Terraform action (plan or apply) to aws-team-role name to assume for that action
`terraform_roles`
A list of all IAM roles used to run terraform updates
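For example, the `terraform_access_map` selection logic described above yields a structure shaped roughly like the following (the team ARN, account names, and role assignments are illustrative, not real values):

```hcl
{
  "arn:aws:iam::111111111111:role/eg-core-gbl-identity-devops" = {
    "plat-dev"  = "apply" # team can assume the designated "apply" role
    "plat-prod" = "plan"  # team only has access to the "plan" role
    # accounts where the team has neither role are omitted entirely
  }
}
```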
## Dependencies
### Requirements
- `terraform`, version: `>= 1.2.0`
- `aws`, version: `>= 4.9.0`
- `local`, version: `>= 1.3`
- `utils`, version: `>= 1.10.0`
### Providers
- `aws`, version: `>= 4.9.0`
- `local`, version: `>= 1.3`
- `utils`, version: `>= 1.10.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`accounts` | 2.0.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/2.0.0) | n/a
`atmos` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`local_file.account_info`](https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_organizations_organization.organization`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/organizations_organization) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`utils_describe_stacks.team_roles`](https://registry.terraform.io/providers/cloudposse/utils/latest/docs/data-sources/describe_stacks) (data source)
- [`utils_describe_stacks.teams`](https://registry.terraform.io/providers/cloudposse/utils/latest/docs/data-sources/describe_stacks) (data source)
---
## iam-roles
# Submodule `iam-roles`
This submodule is used by other modules to determine which IAM Roles or AWS CLI Config Profiles to use for various
tasks, most commonly for applying Terraform plans.
## Special Configuration Needed
In order to avoid having to pass customization information through every module that uses this submodule, if the default
configuration does not suit your needs, you are expected to add `variables_override.tf` to override the variables with
the defaults you want to use in your project. For example, if you are not using "core" as the `tenant` portion of your
"root" account (your Organization Management Account), then you should include the
`variable "overridable_global_tenant_name"` declaration in your `variables_override.tf` so that
`overridable_global_tenant_name` defaults to the value you are using (or the empty string if you are not using `tenant`
at all).
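As a sketch of that override, a hypothetical `variables_override.tf` for a project whose Organization Management Account is not under a "core" tenant might look like this (the default value is the assumption you would adjust):

```hcl
# Hypothetical variables_override.tf for the iam-roles submodule.
# Overrides the built-in default of "core" for organization-wide resources.
variable "overridable_global_tenant_name" {
  type        = string
  description = "The tenant name used for organization-wide resources"
  default     = "" # empty string if not using `tenant` at all
}
```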
## Variables
### Required Variables
### Optional Variables
`bypass` (`bool`) optional
Skip the account-map lookup and return safe defaults. Use when the caller does not need dynamic role resolution (e.g., legacy accounts that authenticate via environment credentials).
**Default value:** `false`
`overridable_global_tenant_name` (`string`) optional
The tenant name used for organization-wide resources
**Default value:** `"core"`
`privileged` (`bool`) optional
True if the Terraform user already has access to the backend
**Default value:** `false`
`profiles_enabled` (`bool`) optional
Whether or not to use profiles instead of roles for Terraform. Default (null) means to use global settings.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`audit_terraform_profile_name`
The AWS config profile name for Terraform to use to provision resources in the "audit" role account, when profiles are in use
`audit_terraform_role_arn`
The AWS Role ARN for Terraform to use to provision resources in the "audit" role account, when Role ARNs are in use
`aws_partition`
The AWS "partition" to use when constructing resource ARNs
`current_account_account_name`
The account name (usually `<tenant>-<stage>`) for the account configured by this module's inputs.
Roughly analogous to `data "aws_caller_identity"`, but returning the name of the caller account as used in our configuration.
`dns_terraform_profile_name`
The AWS config profile name for Terraform to use to provision DNS Zone delegations, when profiles are in use
`dns_terraform_role_arn`
The AWS Role ARN for Terraform to use to provision DNS Zone delegations, when Role ARNs are in use
`global_environment_name`
The `null-label` `environment` value used for regionless (global) resources
`global_stage_name`
The `null-label` `stage` value for the organization management account (where the `account-map` state is stored)
`global_tenant_name`
The `null-label` `tenant` value used for organization-wide resources
`identity_account_account_name`
The account name (usually `<tenant>-<stage>`) for the account holding primary IAM roles
`identity_terraform_profile_name`
The AWS config profile name for Terraform to use to provision resources in the "identity" role account, when profiles are in use
`identity_terraform_role_arn`
The AWS Role ARN for Terraform to use to provision resources in the "identity" role account, when Role ARNs are in use
`org_role_arn`
The AWS Role ARN for Terraform to use when SuperAdmin is provisioning resources in the account
`profiles_enabled`
When true, use AWS config profiles in Terraform AWS provider configurations. When false, use Role ARNs.
`terraform_profile_name`
The AWS config profile name for Terraform to use when provisioning resources in the account, when profiles are in use
`terraform_role_arn`
The AWS Role ARN for Terraform to use when provisioning resources in the account, when Role ARNs are in use
`terraform_role_arns`
All of the Terraform role ARNs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.2.0`
- `awsutils`, version: `>= 0.16.0`
### Providers
- `awsutils`, version: `>= 0.16.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 2.0.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/2.0.0) | n/a
`always` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
This module creates no resources.
## Data Sources
The following data sources are used by this module:
- [`awsutils_caller_identity.current`](https://registry.terraform.io/providers/cloudposse/awsutils/latest/docs/data-sources/caller_identity) (data source)
---
## roles-to-principals
# Submodule `roles-to-principals`
This submodule is used by other modules to map short role names and AWS SSO Permission Set names in accounts designated
by short account names (for example, `terraform` in the `dev` account) to full IAM Role ARNs and other related tasks.
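For illustration, a minimal invocation might look like the following sketch (the relative `source` path is an assumption based on this repository's layout):

```hcl
module "role_map" {
  source = "../account-map/modules/roles-to-principals"

  # Resolve the "terraform" role in the `dev` account to full IAM Role ARNs
  role_map = {
    dev = ["terraform"]
  }

  context = module.this.context
}

# module.role_map.principals then holds the consolidated list of AWS principals
```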
## Special Configuration Needed
As with `iam-roles`, to avoid passing customization information through every module that uses this submodule, you are
expected to add a `variables_override.tf` file that overrides variable defaults whenever the default configuration does
not suit your project. For example, if you are not using "core" as the `tenant` portion of your "root" account (your
Organization Management Account), include a `variable "overridable_global_tenant_name"` declaration in
`variables_override.tf` so that `overridable_global_tenant_name` defaults to the value you are using (or to the empty
string if you are not using `tenant` at all).
## Variables
### Required Variables
### Optional Variables
`account_map_bypass` (`bool`) optional
Set to true to skip looking up the remote state and just return the defaults
**Default value:** `false`
`account_map_defaults` (`any`) optional
Default values if the data source is empty
**Default value:** `null`
When true, any roles (teams or team-roles) in the identity account referenced in `role_map`
will cause corresponding AWS SSO PermissionSets to be included in the `permission_set_arn_like` output.
This has the effect of treating those PermissionSets as if they were teams.
The main reason to set this `false` is if IAM trust policies are exceeding size limits and you are not using AWS SSO.
**Default value:** `true`
`permission_set_map` (`map(list(string))`) optional
Map of account:[PermissionSet, PermissionSet...] specifying AWS SSO PermissionSets when accessed from specified accounts
**Default value:** `{ }`
`privileged` (`bool`) optional
True if the default provider already has access to the backend
**Default value:** `false`
`role_map` (`map(list(string))`) optional
Map of account:[role, role...]. Use `*` as role for entire account
**Default value:** `{ }`
`teams` (`list(string)`) optional
List of team names to translate to AWS SSO PermissionSet names
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`aws_partition`
The AWS "partition" to use when constructing resource ARNs
`full_account_map`
Map of account names to account IDs
`permission_set_arn_like`
List of Role ARN regexes suitable for IAM Condition `ArnLike` corresponding to given input `permission_set_map`
`principals`
Consolidated list of AWS principals corresponding to given input `role_map`
`principals_map`
Map of AWS principals corresponding to given input `role_map`
`team_permission_set_name_map`
Map of team names (from `var.teams` and `role_map["identity"]`) to permission set names
## Dependencies
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 2.0.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/2.0.0) | n/a
`always` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## team-assume-role-policy
# Submodule `team-assume-role-policy`
This submodule generates a JSON-encoded IAM Policy Document suitable for use as an "Assume Role Policy".
You can designate both who is allowed to assume a role and who is explicitly denied permission to assume a role. The
value of this submodule is that it allows for many ways to specify the "who" while at the same time limiting the "who"
to assumed IAM roles:
- All assumed roles in the `dev` account: `allowed_roles = { dev = ["*"] }`
- Only the `admin` role in the dev account: `allowed_roles = { dev = ["admin"] }`
- A specific principal in any account (though it must still be an assumed role):
  `allowed_principal_arns = ["arn:aws:iam::123456789012:role/trusted-role"]`
- A user of a specific AWS SSO Permission Set: `allowed_permission_sets = { dev = ["DeveloperAccess"] }`
## Usage
```hcl
module "assume_role" {
source = "../account-map/modules/team-assume-role-policy"
allowed_roles = { dev = ["admin"] }
context = module.this.context
}
resource "aws_iam_role" "default" {
assume_role_policy = module.assume_role.policy_document
# ...
}
```
## Variables
### Required Variables
### Optional Variables
`account_map_bypass` (`bool`) optional
Set to true to skip looking up the remote state and just return the defaults
**Default value:** `false`
`account_map_defaults` (`any`) optional
Default values if the data source is empty
**Default value:** `null`
`allowed_permission_sets` (`map(list(string))`) optional
Map of account:[PermissionSet, PermissionSet...] specifying AWS SSO PermissionSets allowed to assume the role when coming from specified account
**Default value:** `{ }`
`allowed_principal_arns` (`list(string)`) optional
List of AWS principal ARNs allowed to assume the role.
**Default value:** `[ ]`
`allowed_roles` (`map(list(string))`) optional
Map of account:[role, role...] specifying roles allowed to assume the role.
Roles are symbolic names like `ops` or `terraform`. Use `*` as role for entire account.
**Default value:** `{ }`
`denied_permission_sets` (`map(list(string))`) optional
Map of account:[PermissionSet, PermissionSet...] specifying AWS SSO PermissionSets denied access to the role when coming from specified account
**Default value:** `{ }`
`denied_principal_arns` (`list(string)`) optional
List of AWS principal ARNs explicitly denied access to the role.
**Default value:** `[ ]`
`denied_roles` (`map(list(string))`) optional
Map of account:[role, role...] specifying roles explicitly denied permission to assume the role.
Roles are symbolic names like `ops` or `terraform`. Use `*` as role for entire account.
**Default value:** `{ }`
`global_environment_name` (`string`) optional
Global environment name
**Default value:** `"gbl"`
`iam_users_enabled` (`bool`) optional
True if you would like IAM Users to be able to assume the role.
**Default value:** `false`
`privileged` (`bool`) optional
True if the default provider already has access to the backend
**Default value:** `false`
`trusted_github_org` (`string`) optional
The GitHub organization that unqualified repository names are assumed to belong to. This keeps `*` from matching all orgs and all repos.
**Default value:** `"cloudposse"`
`trusted_github_repos` (`list(string)`) optional
A list of GitHub repositories allowed to access this role.
Format is either "orgName/repoName" or just "repoName",
in which case "cloudposse" will be used for the "orgName".
Wildcard ("*") is allowed for "repoName".
**Default value:** `[ ]`
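For example, a sketch of granting two GitHub repositories access (the org and repo names are hypothetical):

```hcl
module "github_assume_role" {
  source = "../account-map/modules/team-assume-role-policy"

  trusted_github_org = "acme"
  trusted_github_repos = [
    "infrastructure",    # unqualified: treated as "acme/infrastructure"
    "acme/deploy-tools", # fully qualified form
  ]

  context = module.this.context
}
```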
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`github_assume_role_policy`
JSON encoded string representing the "Assume Role" policy configured by the inputs
`policy_document`
JSON encoded string representing the "Assume Role" policy configured by the inputs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.2.0`
- `aws`, version: `>= 4.9.0`
### Providers
- `aws`, version: `>= 4.9.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`allowed_role_map` | latest | `../roles-to-principals` | n/a
`denied_role_map` | latest | `../roles-to-principals` | n/a
`github_oidc_provider` | 2.0.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/2.0.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
This module creates no resources.
## Data Sources
The following data sources are used by this module:
- [`aws_arn.allowed`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/arn) (data source)
- [`aws_arn.denied`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/arn) (data source)
- [`aws_iam_policy_document.assume_role`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.github_oidc_provider_assume`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## account-quotas
This component is responsible for requesting AWS Service Quota increases.
We recommend making requests here rather than in `account-settings` because
`account-settings` is a restricted component that can only be applied by SuperAdmin.
## Usage
**Stack Level**: Global and Regional (depending on quota)
Global resources must be provisioned in `us-east-1`. Put them in the `gbl` stack,
but set `region: us-east-1` in the `vars` section.
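For example, a global quota request might be configured like this (the quota shown and its value are illustrative):

```yaml
# Hypothetical gbl stack configuration for a global quota
components:
  terraform:
    account-quotas:
      vars:
        region: us-east-1   # global quotas must be requested in us-east-1
        quotas:
          route53-hosted-zones:
            service_code: route53
            quota_name: "Hosted zones"
            value: 1000
```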
You can refer to services either by their exact full name (e.g.
`service_name: "Amazon Elastic Compute Cloud (Amazon EC2)"`) or by the service
code (e.g. `service_code: "ec2"`). Similarly, you can refer to quota names either
by their exact full name (e.g. `quota_name: "EC2-VPC Elastic IPs"`) or by the quota
code (e.g. `quota_code: "L-0263D0A3"`).
You can find service codes and full names via the AWS CLI (be sure to use the
correct region):
```bash
aws --region us-east-1 service-quotas list-services
```
You can find quota codes and full names, and also whether the quotas are adjustable
or global, via the AWS CLI, but you will need the service code from the previous step:
```bash
aws --region us-east-1 service-quotas list-service-quotas --service-code ec2
```
If you make a request to raise a quota, the output will show the requested value as
`value` while the request is pending.
### Special Usage Notes
Even though Terraform will submit the support request, you may need to follow up
with AWS Support, via the AWS Console or email, to get the request approved.
#### Resources are destroyed on change
Because the AWS API often returns default values rather than configured or applicable
values for a given quota, we must ignore the value returned by the API or else face
perpetual drift. To allow us to change the value in the future, even though we are
ignoring it, we encode the value in the resource key, so that a change of value will
result in a new resource being created and the old one being destroyed. Destroying the
old resource has no actual effect (it does not even close an open request), so it is
safe to do.
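The pattern can be sketched as follows (an illustration of the technique, not the component's exact code):

```hcl
resource "aws_servicequotas_service_quota" "this" {
  # Encode the requested value in the key: changing `value` changes the key,
  # which replaces the resource rather than updating it in place.
  for_each = { for k, q in var.quotas : format("%s-%v", k, q.value) => q }

  service_code = each.value.service_code
  quota_code   = each.value.quota_code
  value        = each.value.value

  lifecycle {
    # The API often reports defaults rather than the requested value,
    # so ignore it to avoid perpetual drift.
    ignore_changes = [value]
  }
}
```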
### Example
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
account-quotas:
vars:
quotas:
vpcs-per-region:
service_code: vpc
quota_name: "VPCs per Region"
value: 10
vpc-elastic-ips:
service_code: ec2
quota_name: "EC2-VPC Elastic IPs"
value: 10
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`quotas` optional
Map of quotas to set. Map keys are arbitrary and are used to allow Atmos to merge configurations.
Delete an inherited quota by setting its key's value to null.
You only need to provide either the name or the code for each of "service" and "quota".
If you provide both, the code will be used.
**Type:**
```hcl
map(object({
service_name = optional(string)
service_code = optional(string)
quota_name = optional(string)
quota_code = optional(string)
value = number
}))
```
**Default value:** `{ }`
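Because the map keys are arbitrary and Atmos deep-merges stack configuration by key, a more specific stack can adjust or delete quotas inherited from the catalog. A hypothetical override, reusing the key names from the example above:

```yaml
components:
  terraform:
    account-quotas:
      vars:
        quotas:
          # Delete the quota inherited from the catalog
          vpc-elastic-ips: null
          # Raise the inherited value; other fields merge in from the base config
          vpcs-per-region:
            value: 25
```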
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`quotas`
Full report on all service quotas managed by this component.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_servicequotas_service_quota.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/servicequotas_service_quota) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_servicequotas_service.by_name`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/servicequotas_service) (data source)
- [`aws_servicequotas_service_quota.by_name`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/servicequotas_service_quota) (data source)
---
## account-settings
This component is responsible for provisioning account-level settings: AWS Account Alias, EBS encryption, S3 block public access, alternate contacts, SSM session preferences, EBS snapshot block public access, EC2 instance metadata defaults, EC2 AMI block public access, and EMR block public access configuration.
## Introduction
:::warning
The latest version of this component (version 2) assumes you have Atmos Auth set up, and it has a very simple `providers.tf`.
If you are still using `aws-teams` and `team-roles`, update your `component.yaml` to use `providers.depth-1.tf` from
[cloudposse-terraform-components/mixins](https://github.com/cloudposse-terraform-components/mixins/blob/main/src/mixins/providers.depth-1.tf) via:
```yaml
mixins:
# Use upstream mixin for providers.tf without account-map dependency
  - uri: https://raw.githubusercontent.com/cloudposse-terraform-components/mixins/{{ .Version }}/src/mixins/providers.depth-1.tf
version: v0.3.2
filename: providers.tf
```
to overwrite the current one.
:::
## Usage
**Stack Level**: Global
Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts, so
create a file `stacks/catalog/account-settings.yaml` with the following content and then import that file in each
account's global stack (overriding any parameters as needed):
```yaml
components:
terraform:
account-settings:
vars:
enabled: true
account_alias_enabled: true
s3_block_public_access_enabled: true
ebs_default_encryption_enabled: true
ebs_snapshot_block_public_access_enabled: true
ec2_instance_metadata_defaults_enabled: true
ec2_image_block_public_access_enabled: true
emr_block_public_access_enabled: true
billing_contact:
name: "John Doe"
title: "CFO"
email_address: "billing@example.com"
phone_number: "+1-555-123-4567"
operations_contact:
name: "Jane Smith"
title: "DevOps Lead"
email_address: "ops@example.com"
phone_number: "+1-555-234-5678"
security_contact:
name: "Bob Wilson"
title: "CISO"
email_address: "security@example.com"
phone_number: "+1-555-345-6789"
ssm_session_preferences_enabled: true
ssm_session_idle_timeout_minutes: 30
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_alias` (`string`) optional
The IAM account alias. If not set, uses the module ID
**Default value:** `null`
`account_alias_enabled` (`bool`) optional
Whether to create the IAM account alias
**Default value:** `true`
`billing_contact` optional
Billing alternate contact information
**Type:**
```hcl
object({
name = string
title = string
email_address = string
phone_number = string
})
```
**Default value:** `null`
`ebs_snapshot_block_public_access_state` (`string`) optional
The state of EBS snapshot block public access. Valid values are 'block-all-sharing', 'block-new-sharing', and 'unblocked'.
**Default value:** `"block-all-sharing"`
The desired HTTP PUT response hop limit for instance metadata requests. Valid values are between 1 and 64, or -1 for no preference.
**Default value:** `1`
Whether the instance metadata service requires session tokens (IMDSv2). Valid values are 'required', 'optional', and 'no-preference'.
**Default value:** `"required"`
`ec2_instance_metadata_tags` (`string`) optional
Whether to enable access to instance tags from the instance metadata service. Valid values are 'enabled', 'disabled', and 'no-preference'.
**Default value:** `"enabled"`
List of permitted port ranges for public security group rules in EMR. Each object must have min_range and max_range. Default is an empty list (no permitted ranges).
**Type:**
```hcl
list(object({
min_range = number
max_range = number
}))
```
**Default value:** `[ ]`
`import_account_alias` (`string`) optional
Set to the existing IAM account alias to import it into Terraform state. Set to null after successful import.
**Default value:** `null`
`operations_contact` optional
Operations alternate contact information
**Type:**
```hcl
object({
name = string
title = string
email_address = string
phone_number = string
})
```
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`account_alias`
The IAM account alias
`billing_contact_configured`
Whether billing contact was configured
`ebs_encryption_configured`
Whether EBS default encryption was configured
`ebs_snapshot_block_public_access_configured`
Whether EBS snapshot block public access was configured
`ebs_snapshot_block_public_access_state`
The state of EBS snapshot block public access
`ec2_image_block_public_access_configured`
Whether EC2 AMI block public access was configured
`ec2_image_block_public_access_state`
The state of EC2 AMI block public access
`ec2_instance_metadata_defaults_configured`
Whether EC2 instance metadata defaults were configured
`emr_block_public_access_configured`
Whether EMR block public access was configured
`operations_contact_configured`
Whether operations contact was configured
`s3_public_access_block_configured`
Whether S3 public access block was configured
`security_contact_configured`
Whether security contact was configured
`ssm_session_idle_timeout_minutes`
The configured SSM session idle timeout in minutes
`ssm_session_preferences_configured`
Whether SSM Session Manager preferences were configured
## Dependencies
### Requirements
- `terraform`, version: `>= 1.7.0`
- `aws`, version: `>= 6.0.0`
### Providers
- `aws`, version: `>= 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_account_alternate_contact.billing`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/account_alternate_contact) (resource)
- [`aws_account_alternate_contact.operations`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/account_alternate_contact) (resource)
- [`aws_account_alternate_contact.security`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/account_alternate_contact) (resource)
- [`aws_ebs_default_kms_key.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_default_kms_key) (resource)
- [`aws_ebs_encryption_by_default.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_encryption_by_default) (resource)
- [`aws_ebs_snapshot_block_public_access.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_snapshot_block_public_access) (resource)
- [`aws_ec2_image_block_public_access.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_image_block_public_access) (resource)
- [`aws_ec2_instance_metadata_defaults.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_instance_metadata_defaults) (resource)
- [`aws_emr_block_public_access_configuration.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/emr_block_public_access_configuration) (resource)
- [`aws_iam_account_alias.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_account_alias) (resource)
- [`aws_s3_account_public_access_block.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_account_public_access_block) (resource)
- [`aws_ssm_document.session_manager_prefs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_document) (resource)
## Data Sources
The following data sources are used by this module:
---
## acm
This component is responsible for requesting an ACM certificate for a domain and adding a CNAME record to the DNS zone
to complete certificate validation.
The ACM component manages an unlimited number of certificates, predominantly for vanity domains. While the
[dns-primary](https://docs.cloudposse.com/components/library/aws/dns-primary) component has the ability to generate ACM
certificates, it is very opinionated and can only manage one zone. In reality, companies have many branded domains
associated with a load balancer, so we need to be able to generate more complicated certificates.
We have, as a convenience, the ability to create an ACM certificate as part of creating a DNS zone, whether primary or
delegated. That convenience is limited to creating `example.com` and `*.example.com` when creating a zone for
`example.com`. For example, Acme has delegated `acct.acme.com` and in addition to `*.acct.acme.com` needed an ACM
certificate for `*.usw2.acct.acme.com`, so we use the ACM component to provision that, rather than extend the DNS
primary or delegated components to take a list of additional certificates. This separation keeps each component
aligned with the Single Responsibility Principle.
## Usage
**Stack Level**: Global or Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
acm:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
domain_name: acme.com
process_domain_validation_options: false
validation_method: DNS
# NOTE: The following subject alternative name is automatically added by the module.
# Additional entries can be added by providing this input.
# subject_alternative_names:
# - "*.acme.com"
```
ACM using a private CA
```yaml
components:
terraform:
acm:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
domain_name: acme.com
process_domain_validation_options: false
dns_private_zone_enabled: true
certificate_authority_component_name: private-ca-subordinate
certificate_authority_stage_name: pca
certificate_authority_environment_name: use2
certificate_authority_component_key: subordinate
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`certificate_authority_component_key` (`string`) optional
Use this component key e.g. `root` or `mgmt` to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
`certificate_authority_component_name` (`string`) optional
Use this component name to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
`certificate_authority_enabled` (`bool`) optional
Whether to use the certificate authority or not
**Default value:** `false`
`certificate_authority_environment_name` (`string`) optional
Use this environment name to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
`certificate_authority_stage_name` (`string`) optional
Use this stage name to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
`certificate_export` (`bool`) optional
Specifies whether the certificate can be exported
**Default value:** `false`
A list of domain prefixes to use with DNS delegated remote state that should be SANs in the issued certificate
**Default value:** `[ ]`
`validation_method` (`string`) optional
Method to use for validation, DNS or EMAIL
**Default value:** `"DNS"`
`zone_name` (`string`) optional
Name of the zone in which to place the DNS validation records to validate the certificate.
Typically a domain name. Default of `""` actually defaults to `domain_name`.
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
The ARN of the certificate
`domain_name`
Certificate domain name
`domain_validation_options`
CNAME records that are added to the DNS zone to complete certificate validation
`id`
The ID of the certificate
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 6.4.0, < 7.0.0`
### Providers
- `aws`, version: `>= 6.4.0, < 7.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`acm` | 0.18.1 | [`cloudposse/acm-request-certificate/aws`](https://registry.terraform.io/modules/cloudposse/acm-request-certificate/aws/0.18.1) | https://github.com/cloudposse/terraform-aws-acm-request-certificate
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`private_ca` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.acm_arn`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_route53_zone.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/route53_zone) (data source)
---
## alb
This component is responsible for provisioning a generic Application Load Balancer. It depends on the `vpc` and
`dns-delegated` components.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
alb:
vars:
https_ssl_policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
health_check_path: /api/healthz
```
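A fuller (illustrative) configuration might combine several of the component's optional variables; the values here are examples only and should be adjusted for your stacks:

```yaml
components:
  terraform:
    alb:
      vars:
        internal: true
        access_logs_enabled: true
        deletion_protection_enabled: true
        target_group_port: 8080
        target_group_target_type: "ip"
        health_check_path: /api/healthz
        https_ssl_policy: ELBSecurityPolicy-TLS13-1-2-2021-06
```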
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`access_logs_enabled` (`bool`) optional
A boolean flag to enable/disable access_logs
**Default value:** `true`
`access_logs_prefix` (`string`) optional
The S3 log bucket prefix
**Default value:** `""`
`access_logs_s3_bucket_id` (`string`) optional
An external S3 Bucket name to store access logs in. If specified, no logging bucket will be created.
**Default value:** `null`
`alb_access_logs_s3_bucket_force_destroy` (`bool`) optional
A boolean that indicates all objects should be deleted from the ALB access logs S3 bucket so that the bucket can be destroyed without error
**Default value:** `false`
`cross_zone_load_balancing_enabled` (`bool`) optional
A boolean flag to enable/disable cross zone load balancing
**Default value:** `true`
`deletion_protection_enabled` (`bool`) optional
A boolean flag to enable/disable deletion protection for ALB
**Default value:** `false`
`deregistration_delay` (`number`) optional
The amount of time to wait in seconds before changing the state of a deregistering target to unused
**Default value:** `15`
`dns_acm_enabled` (`bool`) optional
If `true`, use the ACM ARN created by the given `dns-delegated` component. Otherwise, use the ACM ARN created by the given `acm` component.
**Default value:** `false`
`dns_delegated_environment_name` (`string`) optional
`dns-delegated` component environment name
**Default value:** `null`
`drop_invalid_header_fields` (`bool`) optional
Indicates whether HTTP headers with header fields that are not valid are removed by the load balancer (true) or routed to targets (false).
**Default value:** `false`
`https_ingress_prefix_list_ids` (`list(string)`) optional
List of prefix list IDs for allowing access to HTTPS ingress security group
**Default value:** `[ ]`
`https_port` (`number`) optional
The port for the HTTPS listener
**Default value:** `443`
`https_ssl_policy` (`string`) optional
The name of the SSL Policy for the listener
**Default value:** `"ELBSecurityPolicy-TLS13-1-2-2021-06"`
`idle_timeout` (`number`) optional
The time in seconds that the connection is allowed to be idle
**Default value:** `60`
`internal` (`bool`) optional
A boolean flag to determine whether the ALB should be internal
**Default value:** `false`
`ip_address_type` (`string`) optional
The type of IP addresses used by the subnets for your load balancer. The possible values are `ipv4` and `dualstack`.
**Default value:** `"ipv4"`
`lifecycle_rule_enabled` (`bool`) optional
A boolean that indicates whether the S3 log bucket lifecycle rule should be enabled.
**Default value:** `true`
`stickiness` optional
Target group sticky configuration
**Type:**
```hcl
object({
cookie_duration = number
enabled = bool
})
```
**Default value:** `null`
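As a sketch, stickiness could be enabled in the stack configuration like this (the cookie duration is illustrative):

```yaml
components:
  terraform:
    alb:
      vars:
        stickiness:
          enabled: true
          cookie_duration: 86400  # seconds; one day
```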
`target_group_name` (`string`) optional
The name for the default target group, uses a module label name if left empty
**Default value:** `""`
`target_group_port` (`number`) optional
The port for the default target group
**Default value:** `80`
`target_group_protocol` (`string`) optional
The protocol for the default target group (HTTP or HTTPS)
**Default value:** `"HTTP"`
`target_group_target_type` (`string`) optional
The type (`instance`, `ip` or `lambda`) of targets that can be registered with the target group
**Default value:** `"ip"`
`vpc_component_name` (`string`) optional
Atmos `vpc` component name
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
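As an illustrative sketch (the descriptor name `stack` and its format are hypothetical), a descriptor producing an ID like `use1-dev` could be defined as:

```yaml
vars:
  descriptor_formats:
    stack:
      format: "%v-%v"
      labels: ["environment", "stage"]
```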
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`access_logs_bucket_id`
The S3 bucket ID for access logs
`alb_arn`
The ARN of the ALB
`alb_arn_suffix`
The ARN suffix of the ALB
`alb_dns_name`
DNS name of ALB
`alb_name`
The name of the ALB
`alb_zone_id`
The ID of the zone in which the ALB is provisioned
`certificate_arn`
SSL certificate ARN to use with the ALB
`default_target_group_arn`
The default target group ARN
`http_listener_arn`
The ARN of the HTTP forwarding listener
`http_redirect_listener_arn`
The ARN of the HTTP to HTTPS redirect listener
`https_listener_arn`
The ARN of the HTTPS listener
`listener_arns`
A list of all the listener ARNs
`security_group_id`
The security group ID of the ALB
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `local`, version: `>= 2.1`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`acm` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`alb` | 2.4.0 | [`cloudposse/alb/aws`](https://registry.terraform.io/modules/cloudposse/alb/aws/2.4.0) | n/a
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## amplify
This component is responsible for provisioning AWS Amplify apps, backend environments, branches, domain associations,
and webhooks.
## Usage
**Stack Level**: Regional
Here's an example for how to use this component:
```yaml
# stacks/catalog/amplify/defaults.yaml
components:
terraform:
amplify/defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
# https://docs.aws.amazon.com/amplify/latest/userguide/setting-up-GitHub-access.html
github_personal_access_token_secret_path: "/amplify/github_personal_access_token"
platform: "WEB"
enable_auto_branch_creation: false
enable_basic_auth: false
enable_branch_auto_build: true
enable_branch_auto_deletion: false
iam_service_role_enabled: false
environment_variables: {}
dns_delegated_component_name: "dns-delegated"
dns_delegated_environment_name: "gbl"
```
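The GitHub personal access token must already exist at the SSM parameter path configured above. As a one-time setup sketch (the token value is a placeholder), it could be stored like this:

```shell
# Store the GitHub PAT as a SecureString so the component can read it at plan/apply time
aws ssm put-parameter \
  --name "/amplify/github_personal_access_token" \
  --type SecureString \
  --value "<your-github-pat>"
```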
```yaml
# stacks/catalog/amplify/example.yaml
import:
- catalog/amplify/defaults
components:
terraform:
amplify/example:
metadata:
# Point to the Terraform component
component: amplify
inherits:
# Inherit the default settings
- amplify/defaults
vars:
name: "example"
description: "example Amplify App"
repository: "https://github.com/cloudposse/amplify-test2"
platform: "WEB_COMPUTE"
enable_auto_branch_creation: false
enable_basic_auth: false
enable_branch_auto_build: true
enable_branch_auto_deletion: false
iam_service_role_enabled: true
# https://docs.aws.amazon.com/amplify/latest/userguide/ssr-CloudWatch-logs.html
iam_service_role_actions:
- "logs:CreateLogStream"
- "logs:CreateLogGroup"
- "logs:DescribeLogGroups"
- "logs:PutLogEvents"
custom_rules: []
auto_branch_creation_patterns: []
environment_variables:
NEXT_PRIVATE_STANDALONE: false
NEXT_PUBLIC_TEST: test
_LIVE_UPDATES: '[{"pkg":"node","type":"nvm","version":"16"},{"pkg":"next-version","type":"internal","version":"13.1.1"}]'
environments:
main:
branch_name: "main"
enable_auto_build: true
backend_enabled: false
enable_performance_mode: false
enable_pull_request_preview: false
framework: "Next.js - SSR"
stage: "PRODUCTION"
environment_variables: {}
develop:
branch_name: "develop"
enable_auto_build: true
backend_enabled: false
enable_performance_mode: false
enable_pull_request_preview: false
framework: "Next.js - SSR"
stage: "DEVELOPMENT"
environment_variables: {}
domain_config:
enable_auto_sub_domain: false
wait_for_verification: false
sub_domain:
- branch_name: "main"
prefix: "example-prod"
- branch_name: "develop"
prefix: "example-dev"
subdomains_dns_records_enabled: true
certificate_verification_dns_record_enabled: false
```
The `amplify/example` YAML configuration defines an Amplify app in AWS. The app is set up to use the `Next.js` framework
with SSR (server-side rendering) and is linked to the GitHub repository "https://github.com/cloudposse/amplify-test2".
The app is set up to have two environments: `main` and `develop`. Each environment has different configuration settings,
such as the branch name, framework, and stage. The `main` environment is set up for production, while the `develop`
environment is set up for development.
The app is also configured to have custom subdomains for each environment, with prefixes such as `example-prod` and
`example-dev`. The subdomains are configured to use DNS records, which are enabled through the
`subdomains_dns_records_enabled` variable.
The app also has an IAM service role configured with specific IAM actions, and environment variables set up for each
environment. Additionally, the app is configured to use the Atmos Spacelift workspace, as indicated by the
`workspace_enabled: true` setting.
The `amplify/example` Atmos component extends the `amplify/defaults` component.
The `amplify/example` configuration is imported into the `stacks/mixins/stage/dev.yaml` stack config file to be
provisioned in the `dev` account.
```yaml
# stacks/mixins/stage/dev.yaml
import:
- catalog/amplify/example
```
You can execute the following command to provision the Amplify app using Atmos (replace `<stack>` with the target stack name):
```shell
atmos terraform apply amplify/example -s <stack>
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`auto_branch_creation_patterns` (`list(string)`) optional
The automated branch creation glob patterns for the Amplify app
**Default value:** `[ ]`
`basic_auth_credentials` (`string`) optional
The credentials for basic authorization for the Amplify app
**Default value:** `null`
`build_spec` (`string`) optional
The [build specification](https://docs.aws.amazon.com/amplify/latest/userguide/build-settings.html) (build spec) for the Amplify app.
If not provided, the `amplify.yml` at the root of your project / branch will be used.
**Default value:** `null`
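When relying on the repository's `amplify.yml`, a minimal build spec for a Next.js app might look like the following sketch (commands and paths are illustrative and depend on your project):

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*
```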
`certificate_verification_dns_record_enabled` (`bool`) optional
Whether or not to create DNS records for SSL certificate validation.
If using the DNS zone from `dns-delegated`, the SSL certificate is already validated, and this variable must be set to `false`.
**Default value:** `false`
`custom_rules` optional
The custom rules to apply to the Amplify App
**Type:**
```hcl
list(object({
condition = optional(string)
source = string
status = optional(string)
target = string
}))
```
**Default value:** `[ ]`
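Matching the type above, a single-page-app fallback rule could be configured like this (the rule values are illustrative):

```yaml
vars:
  custom_rules:
    # Rewrite 404s to index.html so client-side routing works
    - source: "/<*>"
      target: "/index.html"
      status: "404-200"
```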
`description` (`string`) optional
The description for the Amplify app
**Default value:** `null`
`iam_service_role_actions` (`list(string)`) optional
List of IAM policy actions for the AWS Identity and Access Management (IAM) service role for the Amplify app.
If not provided, the default set of actions will be used for the role if the variable `iam_service_role_enabled` is set to `true`.
**Default value:** `[ ]`
`iam_service_role_arn` (`list(string)`) optional
The AWS Identity and Access Management (IAM) service role for the Amplify app.
If not provided, a new role will be created if the variable `iam_service_role_enabled` is set to `true`.
**Default value:** `[ ]`
`iam_service_role_enabled` (`bool`) optional
Flag to create the IAM service role for the Amplify app
**Default value:** `false`
`oauth_token` (`string`) optional
The OAuth token for a third-party source control system for the Amplify app.
The OAuth token is used to create a webhook and a read-only deploy key.
The OAuth token is not stored.
**Default value:** `null`
`platform` (`string`) optional
The platform or framework for the Amplify app
**Default value:** `"WEB"`
`repository` (`string`) optional
The repository for the Amplify app
**Default value:** `null`
`subdomains_dns_records_enabled` (`bool`) optional
Whether or not to create DNS records for the Amplify app custom subdomains
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`sub_domains`
DNS records and the verified status for the subdomains
`webhooks`
Created webhooks
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`amplify_app` | 1.2.0 | [`cloudposse/amplify-app/aws`](https://registry.terraform.io/modules/cloudposse/amplify-app/aws/1.2.0) | n/a
`certificate_verification_dns_record` | 0.13.0 | [`cloudposse/route53-cluster-hostname/aws`](https://registry.terraform.io/modules/cloudposse/route53-cluster-hostname/aws/0.13.0) | Create the SSL certificate validation record
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`subdomains_dns_record` | 0.13.0 | [`cloudposse/route53-cluster-hostname/aws`](https://registry.terraform.io/modules/cloudposse/route53-cluster-hostname/aws/0.13.0) | Create DNS records for the subdomains
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.github_pat`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## api-gateway-account-settings
This component is responsible for setting the global, regional settings required to allow API Gateway to write to
CloudWatch logs.
Every AWS region you want to deploy an API Gateway to must be configured with an IAM Role that gives API Gateway
permissions to create and write to CloudWatch logs. Without this configuration, API Gateway will not be able to send
logs to CloudWatch. This configuration is done once per region regardless of the number of API Gateways deployed in that
region. This module creates an IAM role, assigns it the necessary permissions to write logs and sets it as the
"CloudWatch log role ARN" in the API Gateway configuration.
## Usage
**Stack Level**: Regional
The following is a snippet for how to use this component:
```yaml
components:
terraform:
api-gateway-account-settings:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
tags:
Service: api-gateway
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
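For intuition, with the default pattern any character outside `[a-zA-Z0-9-]` is stripped from each ID element. A minimal sketch using the `cloudposse/label/null` module directly (the names shown are illustrative):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"
  name      = "web_app" # underscore is not in the allowed character set

  # Default behavior: regex_replace_chars = "/[^a-zA-Z0-9-]/"
}

# The underscore is removed, so module.label.id yields "eg-webapp"
```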
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`role_arn`
Role ARN of the API Gateway logging role
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`api_gateway_account_settings` | 0.9.0 | [`cloudposse/api-gateway/aws//modules/account-settings`](https://registry.terraform.io/modules/cloudposse/api-gateway/aws/modules/account-settings/0.9.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## api-gateway-rest-api
This component is responsible for deploying an API Gateway REST API.
## Usage
**Stack Level**: Regional
The following is a snippet for how to use this component:
```yaml
components:
terraform:
api-gateway-rest-api:
vars:
enabled: true
name: api
openapi_config:
openapi: 3.0.1
info:
title: Example API Gateway
version: 1.0.0
paths:
"/":
get:
x-amazon-apigateway-integration:
httpMethod: GET
payloadFormatVersion: 1.0
type: HTTP_PROXY
uri: https://api.ipify.org
"/{proxy+}":
get:
x-amazon-apigateway-integration:
httpMethod: GET
payloadFormatVersion: 1.0
type: HTTP_PROXY
uri: https://api.ipify.org
```
## Variables
### Required Variables
Whether data trace logging is enabled for this method, which affects the log entries pushed to Amazon CloudWatch Logs. WARNING: This logs full request/response data to CloudWatch and should not be enabled in production if sensitive data may be present in API payloads.
**Default value:** `false`
`deregistration_delay` (`number`) optional
The amount of time to wait in seconds before changing the state of a deregistering target to unused
**Default value:** `15`
`enable_private_link_nlb` (`bool`) optional
A flag to indicate whether to enable private link.
**Default value:** `false`
A flag to indicate whether to enable private link deletion protection.
**Default value:** `false`
`endpoint_type` (`string`) optional
The type of the endpoint. One of - PUBLIC, PRIVATE, REGIONAL
**Default value:** `"REGIONAL"`
`fully_qualified_domain_name` (`string`) optional
The fully qualified domain name of the API.
**Default value:** `null`
`logging_level` (`string`) optional
The logging level of the API. One of - OFF, INFO, ERROR
**Default value:** `"INFO"`
`metrics_enabled` (`bool`) optional
A flag to indicate whether to enable metrics collection.
**Default value:** `true`
`openapi_config` (`any`) optional
The OpenAPI specification for the API
**Default value:** `{ }`
`rest_api_policy` (`string`) optional
The IAM policy document for the API.
**Default value:** `null`
`stage_name` (`string`) optional
The name of the stage
**Default value:** `""`
`throttling_burst_limit` (`number`) optional
The API request burst limit
**Default value:** `-1`
`throttling_rate_limit` (`number`) optional
The API request rate limit
**Default value:** `-1`
`xray_tracing_enabled` (`bool`) optional
A flag to indicate whether to enable X-Ray tracing.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
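As a concrete sketch of `descriptor_formats`, using the `cloudposse/label/null` module directly (the descriptor name `stack` and all label values are arbitrary choices for illustration):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "eg"
  environment = "uw2"
  stage       = "prod"
  name        = "app"

  descriptor_formats = {
    # "stack" is an arbitrary descriptor name; format/labels follow the shape above
    stack = {
      format = "%v-%v-%v"
      labels = ["namespace", "environment", "stage"]
    }
  }
}

# module.label.descriptors["stack"] yields "eg-uw2-prod"
output "stack_descriptor" {
  value = module.label.descriptors["stack"]
}
```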
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
The ARN of the REST API
`created_date`
The date the REST API was created
`execution_arn`
The execution ARN part to be used in lambda_permission's source_arn when allowing API Gateway to invoke a Lambda
function, e.g., arn:aws:execute-api:eu-west-2:123456789012:z4675bid1j, which can be concatenated with the allowed stage,
method, and resource path.
`id`
The ID of the REST API
`invoke_url`
The URL to invoke the REST API
`root_resource_id`
The resource ID of the REST API's root
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`acm` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`api_gateway_rest_api` | 0.9.0 | [`cloudposse/api-gateway/aws`](https://registry.terraform.io/modules/cloudposse/api-gateway/aws/0.9.0) | n/a
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`nlb` | 0.18.2 | [`cloudposse/nlb/aws`](https://registry.terraform.io/modules/cloudposse/nlb/aws/0.18.2) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_api_gateway_base_path_mapping.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_base_path_mapping) (resource)
- [`aws_api_gateway_domain_name.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_domain_name) (resource)
- [`aws_route53_record.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_acm_certificate.issued`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/acm_certificate) (data source)
- [`aws_route53_zone.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/route53_zone) (data source)
---
## argocd-github-repo
This component is responsible for creating and managing an ArgoCD desired state repository.
## Usage
**Stack Level**: Regional
The following are example snippets of how to use this component:
```yaml
# stacks/argocd/repo/default.yaml
components:
terraform:
argocd-repo:
vars:
enabled: true
github_user: ci-acme
github_user_email: ci@acme.com
github_organization: ACME
github_codeowner_teams:
- "@ACME/cloud-admins"
- "@ACME/cloud-posse"
# the team must be present in the org where the repository lives
# team_slug is the name of the team without the org
# e.g. `@cloudposse/engineering` is just `engineering`
permissions:
- team_slug: admins
permission: admin
- team_slug: bots
permission: admin
- team_slug: engineering
permission: push
```
```yaml
# stacks/argocd/repo/non-prod.yaml
import:
- catalog/argocd/repo/defaults
components:
terraform:
argocd-deploy-non-prod:
component: argocd-repo
settings:
spacelift:
workspace_enabled: true
vars:
name: argocd-deploy-non-prod
description: "ArgoCD desired state repository (Non-production) for ACME applications"
environments:
- tenant: mgmt
environment: uw2
stage: sandbox
```
```yaml
# stacks/mgmt-gbl-corp.yaml
import:
- catalog/argocd/repo/non-prod
```
If the repository already exists, it will need to be imported (replace the names of the IAM profile and var file accordingly):
```bash
$ export TF_VAR_github_token_override=[REDACTED]
$ atmos terraform varfile argocd-deploy-non-prod -s mgmt-gbl-corp
$ cd components/terraform/argocd-repo
$ terraform import -var "import_profile_name=eg-mgmt-gbl-corp-admin" -var-file="mgmt-gbl-corp-argocd-deploy-non-prod.terraform.tfvars.json" "github_repository.default[0]" argocd-deploy-non-prod
$ atmos terraform varfile argocd-deploy-non-prod -s mgmt-gbl-corp
$ cd components/terraform/argocd-repo
$ terraform import -var "import_profile_name=eg-mgmt-gbl-corp-admin" -var-file="mgmt-gbl-corp-argocd-deploy-non-prod.terraform.tfvars.json" "github_branch.default[0]" argocd-deploy-non-prod:main
$ cd components/terraform/argocd-repo
$ terraform import -var "import_profile_name=eg-mgmt-gbl-corp-admin" -var-file="mgmt-gbl-corp-argocd-deploy-non-prod.terraform.tfvars.json" "github_branch_default.default[0]" argocd-deploy-non-prod
```
## Variables
### Required Variables
List of GitHub usernames and team slugs that can bypass pull request requirements
**Default value:** `[ ]`
`create_repo` (`bool`) optional
Whether or not to create the repository or use an existing one
**Default value:** `true`
`deploy_keys_enabled` (`bool`) optional
Enable GitHub deploy keys for the repository. These are used for Argo CD application syncing. Alternatively, you can use a GitHub App to access this desired state repository.
**Default value:** `true`
`description` (`string`) optional
The description of the repository
**Default value:** `null`
`environments` optional
Environments to populate `applicationset.yaml` files and repository deploy keys (for ArgoCD) for.
`auto-sync` determines whether or not the ArgoCD application will be automatically synced.
`ignore-differences` determines whether or not the ArgoCD application will ignore the number of
replicas in the deployment. Read more on ignore differences here:
https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#respect-ignore-difference-configs
Example:
```
tenant: plat
environment: use1
stage: sandbox
auto-sync: true
ignore-differences:
- group: apps
kind: Deployment
json-pointers:
- /spec/replicas
```
**Type:**
```hcl
list(object({
tenant = optional(string, null)
environment = string
stage = string
attributes = optional(list(string), [])
auto-sync = bool
ignore-differences = optional(list(object({
group = string,
kind = string,
json-pointers = list(string)
})), [])
}))
```
**Default value:** `[ ]`
`github_base_url` (`string`) optional
This is the target GitHub base API endpoint. Providing a value is a requirement when working with GitHub Enterprise. It is optional to provide this value and it can also be sourced from the `GITHUB_BASE_URL` environment variable. The value must end with a slash, for example: `https://terraformtesting-ghe.westus.cloudapp.azure.com/`
**Default value:** `null`
Enable default GitHub commit statuses notifications (required for CD sync mode)
**Default value:** `true`
`github_notifications` (`list(string)`) optional
ArgoCD notification annotations for subscribing to GitHub.
The default value uses the same notification template names as defined in the `eks/argocd` component. If you want to add notifications, also include any existing notifications from this list that you want to keep.
**Default value:**
```hcl
[
"notifications.argoproj.io/subscribe.on-deploy-started.app-repo-github-commit-status: \"\"",
"notifications.argoproj.io/subscribe.on-deploy-started.argocd-repo-github-commit-status: \"\"",
"notifications.argoproj.io/subscribe.on-deploy-succeeded.app-repo-github-commit-status: \"\"",
"notifications.argoproj.io/subscribe.on-deploy-succeeded.argocd-repo-github-commit-status: \"\"",
"notifications.argoproj.io/subscribe.on-deploy-failed.app-repo-github-commit-status: \"\"",
"notifications.argoproj.io/subscribe.on-deploy-failed.argocd-repo-github-commit-status: \"\""
]
```
`github_token_override` (`string`) optional
Use the value of this variable as the GitHub token instead of reading it from SSM
**Default value:** `null`
The namespace used for the ArgoCD application
**Default value:** `"argocd"`
`permissions` optional
A list of Repository Permission objects used to configure the team permissions of the repository
`team_slug` should be the name of the team without the `@{org}` e.g. `@cloudposse/team` => `team`
`permission` is one of the repository permission levels supported by GitHub (e.g. `pull`, `triage`, `push`, `maintain`, `admin`)
**Type:**
```hcl
list(object({
team_slug = string,
permission = string
}))
```
**Default value:** `[ ]`
`push_restrictions_enabled` (`bool`) optional
Enforce who can push to the main branch
**Default value:** `true`
`required_pull_request_reviews` (`bool`) optional
Enforce restrictions for pull request reviews
**Default value:** `true`
Format string of the SSM parameter path to which the deploy keys will be written (`%s` will be replaced with the environment name)
**Default value:** `"/argocd/deploy_keys/%s"`
`use_local_github_credentials` (`bool`) optional
Use local GitHub credentials from environment variables instead of SSM
**Default value:** `false`
`vulnerability_alerts_enabled` (`bool`) optional
Enable security alerts for vulnerable dependencies
**Default value:** `false`
`web_commit_signoff_required` (`bool`) optional
Require contributors to sign off on web-based commits
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`deploy_keys_ssm_path_format`
SSM Parameter Store path format for the repository's deploy keys
`deploy_keys_ssm_paths`
SSM Parameter Store paths for the repository's deploy keys
`repository`
Repository name
`repository_default_branch`
Repository default branch
`repository_description`
Repository description
`repository_git_clone_url`
Repository git clone URL
`repository_http_clone_url`
Repository HTTP clone URL
`repository_ssh_clone_url`
Repository SSH clone URL
`repository_url`
Repository URL
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `github`, version: `>= 6.0`
- `tls`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `github`, version: `>= 6.0`
- `tls`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`github_branch_default.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/branch_default) (resource)
- [`github_branch_protection.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/branch_protection) (resource)
- [`github_repository.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository) (resource)
- [`github_repository_deploy_key.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_deploy_key) (resource)
- [`github_repository_file.application_set`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_file) (resource)
- [`github_repository_file.codeowners_file`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_file) (resource)
- [`github_repository_file.gitignore`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_file) (resource)
- [`github_repository_file.pull_request_template`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_file) (resource)
- [`github_repository_file.readme`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_file) (resource)
- [`github_team_repository.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/team_repository) (resource)
- [`tls_private_key.default`](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.github_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`github_repository.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/repository) (data source)
- [`github_team.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/team) (data source)
- [`github_user.automation_user`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/user) (data source)
---
## athena
This component is responsible for provisioning an Amazon Athena workgroup, databases, and related resources.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
`stacks/catalog/athena/defaults.yaml` file (base component for all Athena deployments with default settings):
```yaml
components:
terraform:
athena/defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
tags:
Team: sre
Service: athena
create_s3_bucket: true
create_kms_key: true
athena_kms_key_deletion_window: 7
bytes_scanned_cutoff_per_query: null
enforce_workgroup_configuration: true
publish_cloudwatch_metrics_enabled: true
encryption_option: "SSE_KMS"
s3_output_path: ""
workgroup_state: "ENABLED"
        databases: {}
```
```yaml
import:
- catalog/athena/defaults
components:
terraform:
athena/example:
metadata:
component: athena
inherits:
- athena/defaults
vars:
enabled: true
name: athena-example
workgroup_description: "My Example Athena Workgroup"
        databases:
          example_db_1:
            comment: "Example database 1"
            properties: {}
          example_db_2:
            comment: "Example database 2"
            properties: {}
```
### CloudTrail Integration
Using Athena with CloudTrail logs is a powerful way to enhance your analysis of AWS service activity. This component
supports creating a CloudTrail table for each account and setting up queries to read CloudTrail logs from a centralized
location.
To set up the CloudTrail Integration, first create the `create` and `alter` queries in Athena with this component. When
`var.cloudtrail_database` is defined, this component will create these queries.
```yaml
import:
- catalog/athena/defaults
components:
terraform:
athena/audit:
metadata:
component: athena
inherits:
- athena/defaults
vars:
enabled: true
name: athena-audit
workgroup_description: "Athena Workgroup for Auditing"
cloudtrail_database: audit
databases:
audit:
comment: "Auditor database for Athena"
properties: {}
named_queries:
platform_dev:
database: audit
description: "example query against CloudTrail logs"
query: |
SELECT
useridentity.arn,
eventname,
sourceipaddress,
eventtime
FROM %s.platform_dev_cloudtrail_logs
LIMIT 100;
```
Once those are created, run the `create` and then the `alter` queries in the AWS Console to create and then fill the
tables in Athena.
:::info
Athena runs queries with the permissions of the user executing the query. In order to be able to query CloudTrail logs,
the `audit` account must have access to the KMS key used to encrypt CloudTrail logs. Set `var.audit_access_enabled` to
`true` in the `cloudtrail` component.
:::
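If console access is inconvenient, the same named queries can also be executed with the AWS CLI. This is only a sketch; the query IDs and workgroup name are placeholders you must substitute:

```bash
# List the named queries in the workgroup to find the create/alter query IDs
aws athena list-named-queries --work-group <workgroup-name>

# Inspect a named query to confirm which one it is
aws athena get-named-query --named-query-id <query-id>

# Execute it; results land in the workgroup's configured S3 output location
aws athena start-query-execution \
  --work-group <workgroup-name> \
  --query-string "$(aws athena get-named-query --named-query-id <query-id> \
      --query 'NamedQuery.QueryString' --output text)"
```

Run the `create` query first, then the `alter` query, mirroring the console workflow described above.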
## Variables
### Required Variables
`databases` (`map(any)`) required
Map of Athena databases and related configuration.
`region` (`string`) required
AWS Region
### Optional Variables
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`athena_kms_key` (`string`) optional
Use an existing KMS key for Athena if `create_kms_key` is `false`.
**Default value:** `null`
`bytes_scanned_cutoff_per_query` (`number`) optional
Integer for the upper data usage limit (cutoff) for the amount of bytes a single query in a workgroup is allowed to scan. Must be at least 10485760.
**Default value:** `null`
`publish_cloudwatch_metrics_enabled` (`bool`) optional
Boolean whether Amazon CloudWatch metrics are enabled for the workgroup.
**Default value:** `true`
`s3_output_path` (`string`) optional
The S3 bucket path used to store query results.
**Default value:** `""`
`workgroup_description` (`string`) optional
Description of the Athena workgroup.
**Default value:** `""`
`workgroup_encryption_option` (`string`) optional
Indicates whether Amazon S3 server-side encryption with Amazon S3-managed keys (SSE_S3), server-side encryption with KMS-managed keys (SSE_KMS), or client-side encryption with KMS-managed keys (CSE_KMS) is used.
**Default value:** `"SSE_KMS"`
`workgroup_force_destroy` (`bool`) optional
The option to delete the workgroup and its contents even if the workgroup contains any named queries.
**Default value:** `false`
`workgroup_state` (`string`) optional
State of the workgroup. Valid values are `DISABLED` or `ENABLED`.
**Default value:** `"ENABLED"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`data_catalogs`
List of newly created Athena data catalogs.
`databases`
List of newly created Athena databases.
`kms_key_arn`
ARN of KMS key used by Athena.
`named_queries`
List of newly created Athena named queries.
`s3_bucket_id`
ID of S3 bucket used for Athena query results.
`workgroup_id`
ID of newly created Athena workgroup.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`athena` | 0.2.1 | [`cloudposse/athena/aws`](https://registry.terraform.io/modules/cloudposse/athena/aws/0.2.1) | n/a
`cloudtrail_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_athena_named_query.cloudtrail_query_alter_tables`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/athena_named_query) (resource)
- [`aws_athena_named_query.cloudtrail_query_create_tables`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/athena_named_query) (resource)
## Data Sources
The following data sources are used by this module:
---
## audit-manager
This component is responsible for configuring AWS Audit Manager within an AWS Organization.
AWS Audit Manager helps you continuously audit your AWS usage to simplify how you assess risk and compliance with
regulations and industry standards. It automates evidence collection, organizes compliance data, and generates
audit-ready reports.
## Key Features
- **Prebuilt Frameworks**: AWS Control Tower, CIS, FedRAMP, GDPR, HIPAA, PCI DSS, SOC 2, NIST 800-53
- **Custom Controls**: Build custom frameworks and controls for specific business requirements
- **Automated Evidence**: Collects evidence from CloudTrail, Config, Security Hub, License Manager
- **Multi-account**: Supports assessments across multiple AWS accounts via AWS Organizations
- **Delegation Workflow**: Delegate control sets to specialized team members
- **Evidence Search**: Search through thousands of pieces of collected evidence with filters and groupings
- **Assessment Reports**: Cryptographically verified reports with organized evidence
- **Manual Evidence**: Upload policy documents, training transcripts, architecture diagrams
## Architecture
Audit Manager uses a **single-step deployment model** that differs from other AWS security services:
| Component | Description |
|-----------|-------------|
| **Organization Management Account** | Enables Audit Manager AND delegates administration in a single deployment |
| **Delegated Administrator Account** | Receives delegated administration automatically, creates/manages assessments |
| **Member Accounts** | Evidence automatically collected, no additional configuration required |
## Deployment Model Comparison
| Aspect | AWS Audit Manager | AWS Inspector2 | AWS Access Analyzer |
|--------|-------------------|----------------|---------------------|
| **Deployment Approach** | Single-step in root account only | Delegated administrator (2 steps) | Delegated administrator (2 steps) |
| **Member Account Setup** | No setup (evidence auto-collected) | Auto-enabled by delegated admin | No setup (auto-analyzed) |
| **Provisioning Steps** | 1 step (root only) | 2 steps (root → security) | 2 steps (root → security) |
## Regional Deployment
Audit Manager is a regional service. You must deploy it to each region where you want to run compliance assessments.
Assessment reports are stored in region-specific S3 buckets.
## Service-Linked Role
AWS Audit Manager automatically creates a service-linked role when you enable the service. No manual role creation is
required.
## Assessment Report S3 Buckets
When generating assessment reports, Audit Manager publishes reports to an S3 bucket of your choice:
- **Same-Region Buckets**: Recommended. Supports up to 22,000 evidence items (vs. 3,500 for cross-region)
- **Encryption**: If using SSE-KMS, the KMS key must match your Audit Manager data encryption settings
- **Account**: Use buckets in the delegated administrator account (cross-account not recommended)
- **Per-Region**: Create a bucket in each region where you'll run assessments
## Configuration
### Defaults (Abstract Component)
```yaml
components:
terraform:
audit-manager/defaults:
metadata:
component: audit-manager
type: abstract
vars:
enabled: true
global_environment: gbl
account_map_tenant: core
root_account_stage: root
delegated_administrator_account_name: core-security
deregister_on_destroy: true
```
### Root Account Configuration (Single-Step Deployment)
```yaml
import:
- catalog/audit-manager/defaults
components:
terraform:
# Single-step: Enable Audit Manager and delegate administration
audit-manager/root:
metadata:
component: audit-manager
inherits:
- audit-manager/defaults
vars:
# Requires SuperAdmin permissions
privileged: true
```
## Provisioning
Deploy to the organization management (root) account for each region where you want assessments:
```bash
# Deploy to us-east-1
atmos terraform apply audit-manager/root -s plat-use1-root
# Deploy to us-west-2
atmos terraform apply audit-manager/root -s plat-usw2-root
```
This single deployment:
- Enables Audit Manager in the organization
- Delegates administration to the security account
- Begins automatic evidence collection from member accounts
## Assessment Report S3 Bucket Setup
Create S3 buckets in the delegated administrator (security) account for each region:
```yaml
# stacks/catalog/s3-bucket/audit-manager-reports.yaml
import:
- catalog/s3-bucket/defaults
components:
terraform:
audit-manager-reports-bucket:
metadata:
component: s3-bucket
inherits:
- s3-bucket/defaults
vars:
enabled: true
name: audit-manager-reports
s3_object_ownership: "BucketOwnerEnforced"
versioning_enabled: false
```
Deploy to each region in the security account:
```bash
atmos terraform apply audit-manager-reports-bucket -s plat-use1-security
atmos terraform apply audit-manager-reports-bucket -s plat-usw2-security
```
## Creating Assessments
After deploying Audit Manager, create assessments in the delegated administrator account:
1. **Via Console**: AWS Audit Manager console → Assessments → Create assessment
2. **Via CLI**: Use `aws auditmanager` CLI commands
3. **Via Terraform**: Use `aws_auditmanager_assessment` resource
**Assessment Components:**
- **Framework**: Choose prebuilt or custom framework
- **Scope**: Select AWS accounts and services to assess
- **Roles**: Define who can access the assessment
- **Report Destination**: Specify S3 bucket for reports
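As a sketch of the Terraform route, an assessment might look like the following. The framework name, role ARN, account ID, and bucket name are illustrative assumptions, not values provisioned by this component:

```hcl
# Look up a prebuilt ("Standard") framework by name (name is an assumption;
# list frameworks in your region to confirm the exact string)
data "aws_auditmanager_framework" "soc2" {
  framework_type = "Standard"
  name           = "System and Organization Controls (SOC) 2"
}

resource "aws_auditmanager_assessment" "soc2" {
  name = "soc2-annual"

  assessment_reports_destination {
    destination      = "s3://audit-manager-reports-bucket" # hypothetical bucket
    destination_type = "S3"
  }

  framework_id = data.aws_auditmanager_framework.soc2.id

  roles {
    role_arn  = "arn:aws:iam::111111111111:role/audit-owner" # hypothetical role
    role_type = "PROCESS_OWNER"
  }

  scope {
    aws_accounts {
      id = "111111111111" # hypothetical account
    }
    aws_services {
      service_name = "S3"
    }
  }
}
```

Create the assessment in the delegated administrator account, since that is where assessments are managed.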
## Cost Considerations
- **Assessment Price**: Based on number of evidence items collected per month
- **Evidence Storage**: S3 storage costs for assessment reports
- **Evidence Finder**: Additional cost if enabling CloudTrail Lake integration
- **Free Tier**: Limited free usage during first 13 months
- **Regional**: Costs are per region
See [AWS Audit Manager Pricing](https://aws.amazon.com/audit-manager/pricing/) for current rates.
## Compliance Frameworks Supported
Audit Manager provides prebuilt frameworks for common compliance standards:
- **PCI DSS**: Payment Card Industry Data Security Standard
- **HIPAA**: Health Insurance Portability and Accountability Act
- **SOC 2**: Service Organization Control 2
- **NIST 800-53**: National Institute of Standards and Technology (Rev 4 and Rev 5)
- **FedRAMP**: Federal Risk and Authorization Management Program
- **GDPR**: General Data Protection Regulation
- **ISO 27001**: Information Security Management
- **CIS**: Center for Internet Security benchmarks (v1.2.0, v1.3.0, v1.4.0, v7.1, v8)
- **GxP**: Good Practice quality guidelines (21 CFR Part 11)
- **AWS Control Tower**: AWS Control Tower guardrails
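When creating assessments, you need the exact framework names and IDs available in your region. These can be listed with the CLI:

```bash
# Prebuilt AWS-managed frameworks
aws auditmanager list-assessment-frameworks --framework-type Standard

# Any custom frameworks you have defined
aws auditmanager list-assessment-frameworks --framework-type Custom
```

Run these in the delegated administrator account and region where the assessment will live.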
## References
### AWS Documentation
- [What is AWS Audit Manager?](https://docs.aws.amazon.com/audit-manager/latest/userguide/what-is.html)
- [Setting Up AWS Audit Manager](https://docs.aws.amazon.com/audit-manager/latest/userguide/setting-up.html)
- [Assessment Settings](https://docs.aws.amazon.com/audit-manager/latest/userguide/assessment-settings.html)
- [Audit Manager Frameworks](https://docs.aws.amazon.com/audit-manager/latest/userguide/frameworks.html)
- [Evidence Collection](https://docs.aws.amazon.com/audit-manager/latest/userguide/evidence.html)
- [Delegated Administrator](https://docs.aws.amazon.com/audit-manager/latest/userguide/delegated-admin.html)
### Terraform Resources
- [aws_auditmanager_account_registration](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/auditmanager_account_registration)
- [aws_auditmanager_organization_admin_account_registration](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/auditmanager_organization_admin_account_registration)
- [aws_auditmanager_assessment](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/auditmanager_assessment)
- [aws_auditmanager_control](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/auditmanager_control)
- [aws_auditmanager_framework](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/auditmanager_framework)
### Additional Resources
- [AWS Audit Manager Product Page](https://aws.amazon.com/audit-manager/)
- [AWS Audit Manager Pricing](https://aws.amazon.com/audit-manager/pricing/)
- [AWS Audit Manager Features](https://aws.amazon.com/audit-manager/features/)
## Variables
### Required Variables
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`mock`
Mock output example for the Cloud Posse Terraform component template
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.66.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## aurora-mysql
This component provisions Amazon Aurora MySQL RDS clusters and seeds relevant database information
(hostnames, username, password, etc.) into AWS SSM Parameter Store.
## Usage
**Stack Level**: Regional
Here's an example of how to use this component.
`stacks/catalog/aurora-mysql/defaults.yaml` file (base component for all Aurora MySQL clusters with default settings):
```yaml
components:
terraform:
aurora-mysql/defaults:
metadata:
type: abstract
vars:
enabled: false
name: rds
mysql_deletion_protection: false
mysql_storage_encrypted: true
aurora_mysql_engine: "aurora-mysql"
allowed_cidr_blocks:
# all automation
- 10.128.0.0/22
# all corp
- 10.128.16.0/22
eks_component_names:
- eks/eks
# https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.3020.html
# aws rds describe-db-engine-versions --engine aurora-mysql --query 'DBEngineVersions[].EngineVersion'
aurora_mysql_engine_version: "8.0.mysql_aurora.3.02.0"
# engine and cluster family are notoriously hard to find.
# If you know the engine version (example here is "8.0.mysql_aurora.3.02.0"), use Engine and DBParameterGroupFamily from:
# aws rds describe-db-engine-versions --engine aurora-mysql --query "DBEngineVersions[]" | \
# jq '.[] | select(.EngineVersion == "8.0.mysql_aurora.3.02.0") |
# { Engine: .Engine, EngineVersion: .EngineVersion, DBParameterGroupFamily: .DBParameterGroupFamily }'
#
# Returns:
# {
# "Engine": "aurora-mysql",
# "EngineVersion": "8.0.mysql_aurora.3.02.0",
# "DBParameterGroupFamily": "aurora-mysql8.0"
# }
aurora_mysql_cluster_family: "aurora-mysql8.0"
mysql_name: shared
# 1 writer, 1 reader
mysql_cluster_size: 2
mysql_admin_user: "" # generate random username
mysql_admin_password: "" # generate random password
mysql_db_name: "" # generate random db name
# https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html
mysql_instance_type: "db.t3.medium"
mysql_skip_final_snapshot: false
```
Example configuration for a dev cluster. Import this file into the primary region.
`stacks/catalog/aurora-mysql/dev.yaml` file (override the default settings for the cluster in the `dev` account):
```yaml
import:
- catalog/aurora-mysql/defaults
components:
terraform:
aurora-mysql/dev:
metadata:
component: aurora-mysql
inherits:
- aurora-mysql/defaults
vars:
mysql_instance_type: db.r5.large
mysql_cluster_size: 1
mysql_name: main
mysql_db_name: main
```
Example deployment with primary cluster deployed to us-east-1 in a `platform-dev` account:
`atmos terraform apply aurora-mysql/dev -s platform-use1-dev`
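Once the cluster is deployed, the seeded credentials can be read back from SSM Parameter Store. The exact parameter layout depends on `ssm_path_prefix` and the cluster ID, so the paths below are only illustrative:

```bash
# Find the parameters this component wrote ("rds" is the default ssm_path_prefix)
aws ssm get-parameters-by-path --path /rds --recursive \
  --query 'Parameters[].Name'

# Read a single decrypted value (illustrative path; substitute one
# of the names returned above)
aws ssm get-parameter --name "/rds/<cluster-id>/admin/db_password" \
  --with-decryption --query 'Parameter.Value' --output text
```

This is typically how application deployment pipelines retrieve the generated admin credentials.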
## Disaster Recovery with Cross-Region Replication
This component supports cross-region replication for continuous disaster recovery. If enabled and deployed, a
secondary cluster is deployed in a different region than the primary cluster. This approach is aggressive and
costly, but in a disaster scenario where the primary cluster fails, the secondary cluster can be promoted to take its
place. Follow these steps to handle disaster recovery.
### Usage
To deploy a secondary cluster for cross-region replication, add the following catalog entries to an alternative region:
Default settings for a secondary, replica cluster. For this example, this file is saved as
`stacks/catalog/aurora-mysql/replica/defaults.yaml`
```yaml
import:
- catalog/aurora-mysql/defaults
components:
terraform:
aurora-mysql/replica/defaults:
metadata:
component: aurora-mysql
inherits:
- aurora-mysql/defaults
vars:
eks_component_names: []
allowed_cidr_blocks:
# all automation in primary region (where Spacelift is deployed)
- 10.128.0.0/22
# all corp in the same region as this cluster
- 10.132.16.0/22
mysql_instance_type: "db.t3.medium"
mysql_name: "replica"
primary_cluster_region: use1
is_read_replica: true
is_promoted_read_replica: false # False by default, added for visibility
```
Environment specific settings for `dev` as an example:
```yaml
import:
- catalog/aurora-mysql/replica/defaults
components:
terraform:
aurora-mysql/dev:
metadata:
component: aurora-mysql
inherits:
- aurora-mysql/defaults
- aurora-mysql/replica/defaults
vars:
enabled: true
primary_cluster_component: aurora-mysql/dev
```
### Promoting the Read Replica
Promoting an existing RDS replica cluster to a fully standalone cluster is not currently supported by Terraform:
https://github.com/hashicorp/terraform-provider-aws/issues/6749
Instead, promote the replica cluster with the AWS CLI:
`aws rds promote-read-replica-db-cluster --db-cluster-identifier <cluster-identifier>`
After promoting the replica, update the stack configuration to prevent future Terraform runs from re-enabling
replication. In this example, modify `stacks/catalog/aurora-mysql/replica/defaults.yaml`:
```yaml
is_promoted_read_replica: true
```
Re-deploying the component should show no changes. For example,
`atmos terraform apply aurora-mysql/dev -s platform-use2-dev`
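After promotion, you can confirm the cluster is standalone by checking that its replication source has been cleared (the cluster identifier is a placeholder):

```bash
# A promoted cluster no longer reports a ReplicationSourceIdentifier
aws rds describe-db-clusters \
  --db-cluster-identifier <cluster-identifier> \
  --query 'DBClusters[0].ReplicationSourceIdentifier'
```

A `null` result indicates the cluster is no longer replicating from the primary.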
## Variables
### Required Variables
`aurora_mysql_cluster_family` (`string`) required
DBParameterGroupFamily (e.g. `aurora5.6`, `aurora-mysql5.7` for Aurora MySQL databases). See https://stackoverflow.com/a/55819394 for help finding the right one to use.
`aurora_mysql_engine` (`string`) required
Engine for Aurora database: `aurora` for MySQL 5.6, `aurora-mysql` for MySQL 5.7
`region` (`string`) required
AWS Region
### Optional Variables
`allow_ingress_from_vpc_accounts` optional
List of account contexts to pull VPC ingress CIDR and add to cluster security group.
e.g.
{
environment = "ue2",
stage = "auto",
tenant = "core"
}
Defaults to the "vpc" component in the given account
**Type:**
```hcl
list(object({
vpc = optional(string, "vpc")
environment = optional(string)
stage = optional(string)
tenant = optional(string)
}))
```
**Default value:** `[ ]`
`allowed_cidr_blocks` (`list(string)`) optional
List of CIDR blocks to be allowed to connect to the RDS cluster
**Default value:** `[ ]`
`aurora_mysql_cluster_parameters` optional
List of DB cluster parameters to apply
**Type:**
```hcl
list(object({
apply_method = string
name = string
value = string
}))
```
**Default value:** `[ ]`
`aurora_mysql_engine_version` (`string`) optional
Engine Version for Aurora database.
**Default value:** `""`
`aurora_mysql_instance_parameters` optional
List of DB instance parameters to apply
**Type:**
```hcl
list(object({
apply_method = string
name = string
value = string
}))
```
**Default value:** `[ ]`
`auto_minor_version_upgrade` (`bool`) optional
Automatically update the cluster when a new minor version is released
**Default value:** `false`
`eks_component_names` (`set(string)`) optional
The names of the eks components
**Default value:**
```hcl
[
"eks/cluster"
]
```
`iam_database_authentication_enabled` (`bool`) optional
Enable IAM database authentication
**Default value:** `false`
`is_promoted_read_replica` (`bool`) optional
If `true`, do not assign a Replication Source to the Cluster. Set to `true` after manually promoting the cluster from a replica to a standalone cluster.
**Default value:** `false`
`is_read_replica` (`bool`) optional
If `true`, create this DB cluster as a Read Replica.
**Default value:** `false`
`mysql_admin_password` (`string`) optional
MySQL password for the admin user
**Default value:** `""`
`mysql_enabled_cloudwatch_logs_exports` (`list(string)`) optional
List of log types to export to cloudwatch. The following log types are supported: audit, error, general, slowquery
**Default value:**
```hcl
[
"audit",
"error",
"general",
"slowquery"
]
```
`mysql_instance_type` (`string`) optional
EC2 instance type for RDS MySQL cluster
**Default value:** `"db.t3.medium"`
`mysql_maintenance_window` (`string`) optional
Weekly time range during which system maintenance can occur, in UTC
**Default value:** `"sat:10:00-sat:10:30"`
`mysql_name` (`string`) optional
MySQL solution name (part of cluster identifier)
**Default value:** `""`
`mysql_skip_final_snapshot` (`string`) optional
Determines whether a final DB snapshot is created before the DB cluster is deleted
**Default value:** `false`
`mysql_storage_encrypted` (`string`) optional
Set to `true` to keep the database contents encrypted
**Default value:** `true`
`performance_insights_enabled` (`bool`) optional
Set `true` to enable Performance Insights
**Default value:** `false`
`primary_cluster_component` (`string`) optional
If this cluster is a read replica and no replication source is explicitly given, the component name for the primary cluster
**Default value:** `"aurora-mysql"`
`primary_cluster_region` (`string`) optional
If this cluster is a read replica and no replication source is explicitly given, the region to look for a matching cluster
**Default value:** `""`
`publicly_accessible` (`bool`) optional
Set to true to create the cluster in a public subnet
**Default value:** `false`
ARN of a source DB cluster or DB instance if this DB cluster is to be created as a Read Replica.
If this value is empty and replication is enabled, remote state will attempt to find
a matching cluster in the Primary DB Cluster's region
**Default value:** `""`
`secrets_store_type` (`string`) optional
Secret Store type to save database credentials. Valid values: `SSM`, `ASM`
**Default value:** `"SSM"`
`ssm_password_source` (`string`) optional
If `var.ssm_passwords_enabled` is `true`, DB user passwords will be retrieved from SSM using
`var.ssm_password_source` and the database username. If this value is not set,
a default path will be created using the SSM path prefix and ID of the associated Aurora Cluster.
**Default value:** `""`
`ssm_path_prefix` (`string`) optional
SSM path prefix
**Default value:** `"rds"`
`vpc_component_name` (`string`) optional
The name of the VPC component
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
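For illustration, a sketch of a `descriptor_formats` value (the descriptor name `stack` and the chosen labels are examples, not defaults):
```hcl
descriptor_formats = {
  # "stack" is an arbitrary descriptor name; it becomes a key in the `descriptors` output
  stack = {
    format = "%v-%v-%v"                          # Terraform format string passed to format()
    labels = ["tenant", "environment", "stage"]  # label values, normalized as in `id`, in order
  }
}
```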
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`aurora_mysql_cluster_arn`
The ARN of Aurora cluster
`aurora_mysql_cluster_id`
The ID of Aurora cluster
`aurora_mysql_cluster_name`
Aurora MySQL cluster identifier
`aurora_mysql_endpoint`
Aurora MySQL endpoint
`aurora_mysql_master_hostname`
Aurora MySQL DB master hostname
`aurora_mysql_master_password`
Location of admin password
`aurora_mysql_master_password_asm_key`
ASM key for admin password
`aurora_mysql_master_password_ssm_key`
SSM key for admin password
`aurora_mysql_master_username`
Aurora MySQL username for the master DB user
`aurora_mysql_reader_endpoint`
Aurora MySQL reader endpoint
`aurora_mysql_replicas_hostname`
Aurora MySQL replicas hostname
`cluster_domain`
Cluster DNS name
`kms_key_arn`
KMS key ARN for Aurora MySQL
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 2.2`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`aurora_mysql` | 2.4.0 | [`cloudposse/rds-cluster/aws`](https://registry.terraform.io/modules/cloudposse/rds-cluster/aws/2.4.0) | n/a
`cluster` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`dns-delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kms_key_rds` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`parameter_store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`primary_cluster` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`vpc_ingress` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_secretsmanager_secret.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret) (resource)
- [`aws_secretsmanager_secret_version.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret_version) (resource)
- [`random_password.mysql_admin_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
- [`random_pet.mysql_admin_user`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
- [`random_pet.mysql_db_name`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.kms_key_rds`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_ssm_parameter.password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## aurora-mysql-resources
This component provisions Aurora MySQL resources: additional databases, users, permissions, and grants.
NOTE: Creating additional users (including read-only users) and databases requires Spacelift, since that action must be
done via the MySQL provider, and by default only the automation account is whitelisted by the Aurora cluster.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
`stacks/catalog/aurora-mysql/resources/defaults.yaml` file (base component for Aurora MySQL Resources with default
settings):
```yaml
components:
terraform:
aurora-mysql-resources/defaults:
metadata:
type: abstract
vars:
enabled: true
```
Example (not actual):
`stacks/uw2-dev.yaml` file (override the default settings for the cluster resources in the `dev` account, create an
additional database and user):
```yaml
import:
- catalog/aurora-mysql/resources/defaults
components:
terraform:
aurora-mysql-resources/dev:
metadata:
component: aurora-mysql-resources
inherits:
- aurora-mysql-resources/defaults
vars:
aurora_mysql_component_name: aurora-mysql/dev
additional_users:
example:
db_user: example
db_password: ""
grants:
- grant: ["ALL"]
db: example
object_type: database
schema: null
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`additional_databases` (`set(string)`) optional
Additional databases to be created with the cluster
**Default value:** `[ ]`
`additional_grants` optional
Create additional database user with specified grants.
If `var.ssm_password_source` is set, passwords will be retrieved from SSM parameter store,
otherwise, passwords will be generated and stored in SSM parameter store under the service's key.
**Type:**
```hcl
map(list(object({
grant : list(string)
db : string
})))
```
**Default value:** `{ }`
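As an illustration, an `additional_grants` value matching this type could look like the following in stack YAML (the service key `reporting` and database name `example` are hypothetical):
```yaml
additional_grants:
  # key is the service name; value is a list of grant objects
  reporting:
    - grant: ["SELECT", "SHOW VIEW"]
      db: example
```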
`additional_users` optional
Create additional database user for a service, specifying username, grants, and optional password.
If no password is specified, one will be generated. Username and password will be stored in
SSM parameter store under the service's key.
**Type:**
```hcl
map(object({
db_user : string
db_password : string
grants : list(object({
grant : list(string)
db : string
}))
}))
```
**Default value:** `{ }`
`aurora_mysql_component_name` (`string`) optional
Aurora MySQL component name to read the remote state from
**Default value:** `"aurora-mysql"`
`mysql_cluster_enabled` (`string`) optional
Set to `false` to prevent the module from creating any resources
**Default value:** `true`
`read_passwords_from_ssm` (`bool`) optional
When `true`, fetch user passwords from SSM
**Default value:** `true`
`ssm_password_source` (`string`) optional
If var.read_passwords_from_ssm is true, DB user passwords will be retrieved from SSM using `var.ssm_password_source` and the database username. If this value is not set, a default path will be created using the SSM path prefix and ID of the associated Aurora Cluster.
**Default value:** `""`
`ssm_path_prefix` (`string`) optional
SSM path prefix
**Default value:** `"rds"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`additional_grants`
Additional DB users created
`additional_users`
Additional DB users created
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `mysql`, version: `>= 3.0.22`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `mysql`, version: `>= 3.0.22`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`additional_grants` | latest | `./modules/mysql-user` | n/a
`additional_users` | latest | `./modules/mysql-user` | n/a
`aurora_mysql` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`mysql_database.additional`](https://registry.terraform.io/providers/petoju/mysql/latest/docs/resources/database) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## mysql-user
## Variables
### Required Variables
`service_name` (`string`) required
Name of service owning the database (used in SSM key)
### Optional Variables
`db_password` (`string`) optional
MySQL password for the admin user (generated if not provided)
**Default value:** `""`
`db_user` (`string`) optional
MySQL admin user name (default is service name)
**Default value:** `""`
`grants` optional
List of `{ grant: ["<grant>", ...], db: "db" }` objects.
Normal grants plus `ALL_APP` for all RDS allowed grants that an app should need
(can be limited to a single database). `ALL` is not the normal MySQL `ALL` but
is all the grants RDS allows.
**Type:**
```hcl
list(object({
grant : list(string)
db : string
}))
```
**Default value:**
```hcl
[
{
"db": "*",
"grant": [
"ALL_APP"
]
}
]
```
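As a sketch, a caller restricting a user to read-only access on a single database (names hypothetical) could pass:
```hcl
grants = [
  {
    grant = ["SELECT"]  # read-only; the default "ALL_APP" instead grants everything an app typically needs
    db    = "app"       # limit the grant to this database rather than "*"
  }
]
```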
`kms_key_id` (`string`) optional
KMS key ID, ARN, or alias to use for encrypting MySQL database
**Default value:** `"alias/aws/rds"`
`save_password_in_ssm` (`bool`) optional
If true, DB user's password will be stored in SSM
**Default value:** `true`
`ssm_path_prefix` (`string`) optional
SSM path prefix
**Default value:** `"rds"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`db_user`
DB user name
`notice`
Note to user
`password_ssm_key`
SSM key under which user password is stored
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `mysql`, version: `>= 3.0.22`
- `random`, version: `>= 2.2`
### Providers
- `mysql`, version: `>= 3.0.22`
- `random`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`parameter_store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`mysql_grant.default`](https://registry.terraform.io/providers/petoju/mysql/latest/docs/resources/grant) (resource)
- [`mysql_user.default`](https://registry.terraform.io/providers/petoju/mysql/latest/docs/resources/user) (resource)
- [`random_password.db_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
## Data Sources
The following data sources are used by this module:
None
---
## aurora-postgres
This component is responsible for provisioning Aurora Postgres RDS clusters. It seeds relevant database information
(hostnames, username, password, etc.) into AWS SSM Parameter Store.
## Usage
**Stack Level**: Regional
Here's an example for how to use this component.
`stacks/catalog/aurora-postgres/defaults.yaml` file (base component for all Aurora Postgres clusters with default
settings):
```yaml
components:
terraform:
aurora-postgres/defaults:
metadata:
type: abstract
vars:
enabled: true
name: aurora-postgres
tags:
Team: sre
Service: aurora-postgres
cluster_name: shared
deletion_protection: false
storage_encrypted: true
engine: aurora-postgresql
# Provisioned configuration
engine_mode: provisioned
engine_version: "15.3"
cluster_family: aurora-postgresql15
# 1 writer, 1 reader
cluster_size: 2
# https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html
instance_type: db.t3.medium
admin_user: postgres
admin_password: "" # generate random password
database_name: postgres
database_port: 5432
skip_final_snapshot: false
# Enhanced Monitoring
# A boolean flag to enable/disable the creation of the enhanced monitoring IAM role.
# If set to false, the module will not create a new role and will use rds_monitoring_role_arn for enhanced monitoring
enhanced_monitoring_role_enabled: true
# The interval, in seconds, between points when enhanced monitoring metrics are collected for the DB instance.
# To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0. Valid Values: 0, 1, 5, 10, 15, 30, 60
rds_monitoring_interval: 15
# Allow ingress from the following accounts
# If any of tenant, stage, or environment aren't given, this will be taken
allow_ingress_from_vpc_accounts:
- tenant: core
stage: auto
```
Example (hypothetical):
`stacks/uw2-dev.yaml` file (override the default settings for the cluster in the `dev` account, create an additional
database and user):
```yaml
import:
- catalog/aurora-postgres/defaults
components:
terraform:
aurora-postgres:
metadata:
component: aurora-postgres
inherits:
- aurora-postgres/defaults
vars:
enabled: true
```
### Finding Aurora Engine Version
Use the following to query the AWS API by `engine-mode`. Both provisioned and Serverless v2 use the `provisioned` engine
mode, whereas only Serverless v1 uses the `serverless` engine mode.
```bash
aws rds describe-db-engine-versions \
--engine aurora-postgresql \
--query 'DBEngineVersions[].EngineVersion' \
--filters 'Name=engine-mode,Values=serverless'
```
Use the following to query the AWS API by `db-instance-class`. Use this query to find supported versions for a specific
instance class, such as `db.serverless` with Serverless v2.
```bash
aws rds describe-orderable-db-instance-options \
--engine aurora-postgresql \
--db-instance-class db.serverless \
--query 'OrderableDBInstanceOptions[].[EngineVersion]'
```
Once a version has been selected, use the following to find the cluster family.
```bash
aws rds describe-db-engine-versions --engine aurora-postgresql --query "DBEngineVersions[]" | \
jq '.[] | select(.EngineVersion == "15.3") |
{ Engine: .Engine, EngineVersion: .EngineVersion, DBParameterGroupFamily: .DBParameterGroupFamily }'
```
## Examples
Generally there are three different engine configurations for Aurora: provisioned, Serverless v1, and Serverless v2.
### Provisioned Aurora Postgres
[See the default usage example above](#usage)
### Serverless v1 Aurora Postgres
Serverless v1 requires `engine_mode` set to `serverless` and uses `scaling_configuration` to configure scaling options.
For valid values, see
[ModifyCurrentDBClusterCapacity](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyCurrentDBClusterCapacity.html).
```yaml
components:
terraform:
aurora-postgres:
vars:
enabled: true
name: aurora-postgres
eks_component_names:
- eks/cluster
allow_ingress_from_vpc_accounts:
# Allows Spacelift
- tenant: core
stage: auto
environment: use2
# Allows VPN
- tenant: core
stage: network
environment: use2
cluster_name: shared
engine: aurora-postgresql
# Serverless v1 configuration
engine_mode: serverless
instance_type: "" # serverless engine_mode ignores `var.instance_type`
engine_version: "13.9" # Latest supported version as of 08/28/2023
cluster_family: aurora-postgresql13
cluster_size: 0 # serverless
scaling_configuration:
- auto_pause: true
max_capacity: 4
min_capacity: 2
seconds_until_auto_pause: 300
timeout_action: null
admin_user: postgres
admin_password: "" # generate random password
database_name: postgres
database_port: 5432
storage_encrypted: true
deletion_protection: true
skip_final_snapshot: false
# Creating read-only users or additional databases requires Spacelift
read_only_users_enabled: false
# Enhanced Monitoring
# A boolean flag to enable/disable the creation of the enhanced monitoring IAM role.
# If set to false, the module will not create a new role and will use rds_monitoring_role_arn for enhanced monitoring
enhanced_monitoring_role_enabled: true
enhanced_monitoring_attributes: ["monitoring"]
# The interval, in seconds, between points when enhanced monitoring metrics are collected for the DB instance.
# To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0. Valid Values: 0, 1, 5, 10, 15, 30, 60
rds_monitoring_interval: 15
iam_database_authentication_enabled: false
additional_users: {}
```
### Serverless v2 Aurora Postgres
Aurora Postgres Serverless v2 uses the `provisioned` engine mode with `db.serverless` instances. To configure
scaling with Serverless v2, use `var.serverlessv2_scaling_configuration`.
For more on valid scaling configurations, see
[Performance and scaling for Aurora Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html).
```yaml
components:
terraform:
aurora-postgres:
vars:
enabled: true
name: aurora-postgres
eks_component_names:
- eks/cluster
allow_ingress_from_vpc_accounts:
# Allows Spacelift
- tenant: core
stage: auto
environment: use2
# Allows VPN
- tenant: core
stage: network
environment: use2
cluster_name: shared
engine: aurora-postgresql
# Serverless v2 configuration
engine_mode: provisioned
instance_type: "db.serverless"
engine_version: "15.3"
cluster_family: aurora-postgresql15
cluster_size: 2
serverlessv2_scaling_configuration:
min_capacity: 2
max_capacity: 64
admin_user: postgres
admin_password: "" # generate random password
database_name: postgres
database_port: 5432
storage_encrypted: true
deletion_protection: true
skip_final_snapshot: false
# Creating read-only users or additional databases requires Spacelift
read_only_users_enabled: false
# Enhanced Monitoring
# A boolean flag to enable/disable the creation of the enhanced monitoring IAM role.
# If set to false, the module will not create a new role and will use rds_monitoring_role_arn for enhanced monitoring
enhanced_monitoring_role_enabled: true
enhanced_monitoring_attributes: ["monitoring"]
# The interval, in seconds, between points when enhanced monitoring metrics are collected for the DB instance.
# To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0. Valid Values: 0, 1, 5, 10, 15, 30, 60
rds_monitoring_interval: 15
iam_database_authentication_enabled: false
additional_users: {}
```
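The component can also front the cluster with an RDS Proxy. A minimal, hypothetical stack snippet follows (the variable values are illustrative only; the `proxy_*` variable names come from this component's inputs):

```yaml
components:
  terraform:
    aurora-postgres:
      vars:
        # Hypothetical values for illustration only
        proxy_enabled: true
        proxy_require_tls: true
        proxy_iam_auth: REQUIRED
        # Let RDS manage the admin password in Secrets Manager so the
        # proxy can read credentials without an explicit proxy_secret_arn
        manage_admin_user_password: true
```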
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`autoscaling_scale_in_cooldown` (`number`) optional
The amount of time, in seconds, after a scaling activity completes and before the next scaling down activity can start. Default is 300s
**Default value:** `300`
`autoscaling_scale_out_cooldown` (`number`) optional
The amount of time, in seconds, after a scaling activity completes and before the next scaling up activity can start. Default is 300s
**Default value:** `300`
`autoscaling_target_metrics` (`string`) optional
The metrics type to use. If this value isn't provided the default is CPU utilization
**Default value:** `"RDSReaderAverageCPUUtilization"`
`autoscaling_target_value` (`number`) optional
The target value to scale with respect to target metrics
**Default value:** `75`
`backup_window` (`string`) optional
Daily time range during which the backups happen, UTC
**Default value:** `"07:00-09:00"`
`ca_cert_identifier` (`string`) optional
The identifier of the CA certificate for the DB instance
**Default value:** `null`
`cluster_dns_name_part` (`string`) optional
Part of DNS name added to module and cluster name for DNS for cluster endpoint
**Default value:** `"writer"`
`cluster_family` (`string`) optional
Family of the DB parameter group. Valid values for Aurora PostgreSQL: `aurora-postgresql9.6`, `aurora-postgresql10`, `aurora-postgresql11`, `aurora-postgresql12`
**Default value:** `"aurora-postgresql13"`
`cluster_parameters` optional
List of DB cluster parameters to apply
**Type:**
```hcl
list(object({
apply_method = string
name = string
value = string
}))
```
**Default value:** `[ ]`
`database_insights_mode` (`string`) optional
The database insights mode for the RDS cluster. Valid values are `standard`, `advanced`. See https://registry.terraform.io/providers/hashicorp/aws/6.16.0/docs/resources/rds_cluster#database_insights_mode-1
**Default value:** `null`
`database_name` (`string`) optional
Name for an automatically created database on cluster creation. An empty name will generate a db name.
**Default value:** `""`
`database_port` (`number`) optional
Database port
**Default value:** `5432`
`deletion_protection` (`bool`) optional
Specifies whether the Cluster should have deletion protection enabled. The database can't be deleted when this value is set to `true`
**Default value:** `false`
`enhanced_monitoring_attributes` (`list(string)`) optional
Attributes used to format the Enhanced Monitoring IAM role. If this role hits IAM role length restrictions (max 64 characters), consider shortening these strings.
**Default value:**
```hcl
[
"enhanced-monitoring"
]
```
`enhanced_monitoring_role_enabled` (`bool`) optional
A boolean flag to enable/disable the creation of the enhanced monitoring IAM role. If set to `false`, the module will not create a new role and will use `rds_monitoring_role_arn` for enhanced monitoring
**Default value:** `true`
`intra_security_group_traffic_enabled` (`bool`) optional
Whether to allow traffic between resources inside the database's security group.
**Default value:** `false`
`maintenance_window` (`string`) optional
Weekly time range during which system maintenance can occur, in UTC
**Default value:** `"wed:03:00-wed:04:00"`
`manage_admin_user_password` (`bool`) optional
Set to true to allow RDS to manage the master user password in Secrets Manager. Cannot be set if admin_password is provided
**Default value:** `false`
`performance_insights_enabled` (`bool`) optional
Whether to enable Performance Insights
**Default value:** `false`
`promotion_tier` (`number`) optional
Failover Priority setting on instance level. Readers with a lower promotion tier have higher priority to be promoted to writer.
Readers in promotion tiers 0 and 1 scale at the same time as the writer. Readers in promotion tiers 2–15 scale independently from the writer. For more information, see: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-works.scaling
**Default value:** `0`
`proxy_auth` optional
Configuration blocks with authorization mechanisms to connect to the associated database instances or clusters.
Each block supports:
- auth_scheme: The type of authentication that the proxy uses for connections. Valid values: SECRETS
- client_password_auth_type: The type of authentication the proxy uses for connections from clients. Valid values: MYSQL_NATIVE_PASSWORD, POSTGRES_SCRAM_SHA_256, POSTGRES_MD5, SQL_SERVER_AUTHENTICATION
- description: A user-specified description about the authentication used by a proxy
- iam_auth: Whether to require or disallow AWS IAM authentication. Valid values: DISABLED, REQUIRED, OPTIONAL
- secret_arn: The ARN of the Secrets Manager secret containing the database credentials
- username: The name of the database user to which the proxy connects
**Type:**
```hcl
list(object({
auth_scheme = optional(string, "SECRETS")
client_password_auth_type = optional(string)
description = optional(string)
iam_auth = optional(string, "DISABLED")
secret_arn = optional(string)
username = optional(string)
}))
```
**Default value:** `null`
`proxy_client_password_auth_type` (`string`) optional
The type of authentication the proxy uses for connections from clients. Valid values: MYSQL_NATIVE_PASSWORD, POSTGRES_SCRAM_SHA_256, POSTGRES_MD5, SQL_SERVER_AUTHENTICATION
**Default value:** `null`
`proxy_connection_borrow_timeout` (`number`) optional
The number of seconds for a proxy to wait for a connection to become available in the connection pool
**Default value:** `120`
`proxy_debug_logging` (`bool`) optional
Whether the proxy includes detailed information about SQL statements in its logs
**Default value:** `false`
`proxy_dns_enabled` (`bool`) optional
Whether to create a Route53 DNS record for the proxy endpoint
**Default value:** `true`
`proxy_dns_name_part` (`string`) optional
Part of DNS name added to module and cluster name for DNS for the proxy endpoint
**Default value:** `"proxy"`
`proxy_enabled` (`bool`) optional
Whether to enable RDS Proxy for the Aurora cluster
**Default value:** `false`
`proxy_existing_iam_role_arn` (`string`) optional
The ARN of an existing IAM role that the proxy can use to access secrets in AWS Secrets Manager. If not provided, the module will create a role to access secrets in Secrets Manager
**Default value:** `null`
`proxy_iam_auth` (`string`) optional
Whether to require or disallow AWS IAM authentication for connections to the proxy. Valid values: DISABLED, REQUIRED, OPTIONAL
**Default value:** `"DISABLED"`
`proxy_max_idle_connections_percent` (`number`) optional
Controls how actively the proxy closes idle database connections in the connection pool. Must be between 0 and 100.
**Default value:** `50`
`proxy_require_tls` (`bool`) optional
A Boolean parameter that specifies whether Transport Layer Security (TLS) encryption is required for connections to the proxy
**Default value:** `true`
`proxy_secret_arn` (`string`) optional
The ARN of the secret in AWS Secrets Manager that contains the database credentials. Required if manage_admin_user_password is false and proxy_auth is not provided
**Default value:** `null`
`proxy_session_pinning_filters` (`list(string)`) optional
Each item in the list represents a class of SQL operations that normally cause all later statements in a session using a proxy to be pinned to the same underlying database connection
**Default value:** `null`
`publicly_accessible` (`bool`) optional
Set true to make this database accessible from the public internet
**Default value:** `false`
`rds_monitoring_interval` (`number`) optional
The interval, in seconds, between points when enhanced monitoring metrics are collected for the DB instance. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0. Valid Values: 0, 1, 5, 10, 15, 30, 60
**Default value:** `60`
`reader_dns_name_part` (`string`) optional
Part of DNS name added to module and cluster name for DNS for cluster reader
**Default value:** `"reader"`
`restore_to_point_in_time` optional
List of point-in-time recovery options. Valid parameters are:
`source_cluster_identifier`
Identifier of the source database cluster from which to restore.
`restore_type`:
Type of restore to be performed. Valid options are "full-copy" and "copy-on-write".
`use_latest_restorable_time`:
Set to true to restore the database cluster to the latest restorable backup time. Conflicts with `restore_to_time`.
`restore_to_time`:
Date and time in UTC format to restore the database cluster to. Conflicts with `use_latest_restorable_time`.
**Type:**
```hcl
list(object({
source_cluster_identifier = string
restore_type = optional(string, "copy-on-write")
use_latest_restorable_time = optional(bool, true)
restore_to_time = optional(string, null)
}))
```
**Default value:** `[ ]`
`retention_period` (`number`) optional
Number of days to retain backups for
**Default value:** `5`
`scaling_configuration` optional
List of nested attributes with scaling properties. Only valid when `engine_mode` is set to `serverless`. This is required for Serverless v1
**Type:**
```hcl
list(object({
auto_pause = bool
max_capacity = number
min_capacity = number
seconds_until_auto_pause = number
timeout_action = string
}))
```
**Default value:** `[ ]`
`serverlessv2_scaling_configuration` optional
Nested attribute with scaling properties for ServerlessV2. Only valid when `engine_mode` is set to `provisioned`. This is required for Serverless v2
**Type:**
```hcl
object({
min_capacity = number
max_capacity = number
})
```
**Default value:** `null`
`skip_final_snapshot` (`bool`) optional
Normally AWS makes a snapshot of the database before deleting it. Set this to `true` in order to skip this.
NOTE: The final snapshot has a name derived from the cluster name. If you delete a cluster, get a final snapshot,
then create a cluster of the same name, its final snapshot will fail with a name collision unless you delete
the previous final snapshot first.
**Default value:** `false`
`snapshot_identifier` (`string`) optional
The identifier of the snapshot from which to create this cluster
**Default value:** `null`
`ssm_cluster_name_override` (`string`) optional
Set a cluster name into the ssm path prefix
**Default value:** `""`
`ssm_path_prefix` (`string`) optional
Top level SSM path prefix (without leading or trailing slash)
**Default value:** `"aurora-postgres"`
`storage_encrypted` (`bool`) optional
Specifies whether the DB cluster is encrypted
**Default value:** `true`
`storage_type` (`string`) optional
One of 'standard' (magnetic), 'gp2' (general purpose SSD), 'io1' (provisioned IOPS SSD), 'aurora', or 'aurora-iopt1'
**Default value:** `null`
`vpc_component_name` (`string`) optional
The name of the VPC component
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`admin_username`
Postgres admin username
`allowed_security_groups`
The resulting list of security group IDs that are allowed to connect to the Aurora Postgres cluster.
`cluster_endpoint`
Postgres cluster endpoint
`cluster_identifier`
Postgres cluster identifier
`config_map`
Map containing information pertinent to a PostgreSQL client configuration.
`database_name`
Postgres database name
`instance_endpoints`
List of Postgres instance endpoints
`kms_key_arn`
KMS key ARN for Aurora Postgres
`master_hostname`
Postgres master hostname
`proxy_arn`
The ARN of the RDS Proxy
`proxy_default_target_group_arn`
The Amazon Resource Name (ARN) representing the default target group
`proxy_default_target_group_name`
The name of the default target group
`proxy_dns_name`
The DNS name of the RDS Proxy (Route53 record)
`proxy_endpoint`
The endpoint of the RDS Proxy
`proxy_iam_role_arn`
The ARN of the IAM role that the proxy uses to access secrets in AWS Secrets Manager
`proxy_id`
The ID of the RDS Proxy
`proxy_security_group_id`
The security group ID of the RDS Proxy
`proxy_target_endpoint`
Hostname for the target RDS DB Instance
`proxy_target_id`
Identifier of db_proxy_name, target_group_name, target type, and resource identifier separated by forward slashes
`proxy_target_port`
Port for the target Aurora DB cluster
`proxy_target_rds_resource_id`
Identifier representing the DB cluster target
`proxy_target_type`
Type of target (e.g. RDS_INSTANCE or TRACKED_CLUSTER)
`reader_endpoint`
Postgres reader endpoint
`replicas_hostname`
Postgres replicas hostname
`security_group_id`
The security group ID of the Aurora Postgres cluster
`ssm_key_paths`
Names (key paths) of all SSM parameters stored for this cluster
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `postgresql`, version: `>= 1.17.1`
- `random`, version: `>= 2.3`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `random`, version: `>= 2.3`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`aurora_postgres_cluster` | 2.4.0 | [`cloudposse/rds-cluster/aws`](https://registry.terraform.io/modules/cloudposse/rds-cluster/aws/2.4.0) | https://www.terraform.io/docs/providers/aws/r/rds_cluster.html
`cluster` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kms_key_rds` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`parameter_store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`rds_proxy` | 1.1.1 | [`cloudposse/rds-db-proxy/aws`](https://registry.terraform.io/modules/cloudposse/rds-db-proxy/aws/1.1.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`vpc_ingress` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_route53_record.proxy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
- [`aws_security_group.proxy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) (resource)
- [`aws_security_group_rule.cluster_ingress_from_proxy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) (resource)
- [`aws_security_group_rule.proxy_egress_to_cluster`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) (resource)
- [`random_password.admin_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
- [`random_pet.admin_user`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
- [`random_pet.database_name`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.kms_key_rds`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_security_groups.allowed`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/security_groups) (data source)
---
## aurora-postgres-resources
This component is responsible for provisioning Aurora Postgres resources: additional databases, users, permissions,
grants, etc.
## PostgreSQL Quick Reference on Grants
Grants can be applied to databases, schemas, roles, tables, and other database objects (e.g. columns in a table for fine
control). Databases and schemas have relatively few grantable privileges. The `object_type` field in the input determines
which kind of object the grant is being applied to. The `db` field is always required. The `schema` field is required
unless the `object_type` is `db`, in which case it should be set to the empty string (`""`).
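For example (values hypothetical), a database-level grant leaves `schema` empty, while a table-level grant names the schema:

```yaml
grants:
  # Database-level grant: object_type is `db`, schema must be ""
  - grant: ["ALL"]
    db: example
    object_type: db
    schema: ""
  # Table-level grant: applies to all tables in the named schema
  - grant: ["SELECT"]
    db: example
    object_type: table
    schema: public
```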
The keyword PUBLIC indicates that the privileges are to be granted to all roles, including those that might be created
later. PUBLIC can be thought of as an implicitly defined group that always includes all roles. Any particular role will
have the sum of privileges granted directly to it, privileges granted to any role it is presently a member of, and
privileges granted to PUBLIC.
When an object is created, it is assigned an owner. The owner is normally the role that executed the creation statement.
For most kinds of objects, the initial state is that only the owner (or a superuser) can do anything with the object. To
allow other roles to use it, privileges must be granted. (When using AWS managed RDS, you cannot have access to any
superuser roles; superuser is reserved for AWS to use to manage the cluster.)
PostgreSQL grants privileges on some types of objects to PUBLIC by default when the objects are created. No privileges
are granted to PUBLIC by default on tables, table columns, sequences, foreign data wrappers, foreign servers, large
objects, schemas, or tablespaces. For other types of objects, the default privileges granted to PUBLIC are as follows:
CONNECT and TEMPORARY (create temporary tables) privileges for databases; EXECUTE privilege for functions and
procedures; and USAGE privilege for languages and data types (including domains). The object owner can, of course,
REVOKE both default and expressly granted privileges. (For maximum security, issue the REVOKE in the same transaction
that creates the object; then there is no window in which another user can use the object.) Also, these default
privilege settings can be overridden using the ALTER DEFAULT PRIVILEGES command.
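As a sketch (object and role names are hypothetical), revoking the default PUBLIC privilege in the same transaction that creates a function looks like this:

```sql
BEGIN;
-- Functions are executable by PUBLIC by default
CREATE FUNCTION app.answer() RETURNS integer
  LANGUAGE sql AS 'SELECT 42';
-- Revoke in the same transaction: no window in which PUBLIC can execute it
REVOKE ALL ON FUNCTION app.answer() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION app.answer() TO reporting_reader;
COMMIT;
```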
The CREATE privilege:
- For databases, allows new schemas and publications to be created within the database, and allows trusted extensions to
be installed within the database.
- For schemas, allows new objects to be created within the schema. To rename an existing object, you must own the object
and have this privilege for the containing schema.
For databases and schemas, there are not a lot of other privileges to grant, and all but CREATE are granted by default,
so you might as well grant "ALL". For tables etc., the creator has full control. You grant access to other users via
explicit grants. This component does not allow fine-grained grants. You have to specify the database, and unless the
grant is on the database, you have to specify the schema. For any other object type (table, sequence, function,
procedure, routine, foreign_data_wrapper, foreign_server, column), the component applies the grants to all objects of
that type in the specified schema.
## Usage
**Stack Level**: Regional
Here's an example snippet showing how to use this component.
```yaml
components:
  terraform:
    aurora-postgres-resources:
      vars:
        aurora_postgres_component_name: aurora-postgres-example
        additional_users:
          example:
            db_user: example
            db_password: ""
            grants:
              - grant: ["ALL"]
                db: example
                object_type: database
                schema: ""
            role_memberships:
              - reporting_reader
              - reporting_writer
            default_privileges:
              - role: another_role
                privileges: ["SELECT", "INSERT", "UPDATE", "DELETE"]
                db: example
                object_type: table
                schema: ""
```
Use the optional `role_memberships` list inside an `additional_users` entry to grant existing database roles to the user managed by this module.
Use the optional `default_privileges` list inside an `additional_users` entry to grant default privileges to the specified role. Whenever the user managed by this module creates a new object of the specified type in the specified schema, the specified role will be automatically granted the specified privileges.
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`additional_databases` (`set(string)`) optional
Additional databases to be created with the cluster
**Default value:** `[ ]`
`additional_grants` optional
Create additional database user with specified grants.
If `var.ssm_password_source` is set, passwords will be retrieved from SSM parameter store,
otherwise, passwords will be generated and stored in SSM parameter store under the service's key.
**Type:**
```hcl
map(list(object({
  grant : list(string)
  db : string
})))
```
**Default value:** `{ }`
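For illustration, a minimal sketch of `additional_grants` matching the type above (the user key `reporting` and the database name `example` are made-up names):

```yaml
components:
  terraform:
    aurora-postgres-resources:
      vars:
        additional_grants:
          # Map key is the database user to create
          reporting:
            - grant: ["CONNECT"]
              db: example
```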
`additional_schemas` optional
Create additional schemas for a given database.
If no database is given, the schema will use the database used by the provider configuration
**Type:**
```hcl
map(object({
  database : string
}))
```
**Default value:** `{ }`
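A minimal sketch of `additional_schemas` matching the type above (the schema key `reporting` and database `example` are illustrative names):

```yaml
components:
  terraform:
    aurora-postgres-resources:
      vars:
        additional_schemas:
          reporting:
            database: example
```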
`additional_users` optional
Create additional database user for a service, specifying username, (default) grants, and optional password.
If no password is specified, one will be generated. Username and password will be stored in
SSM parameter store under the service's key.
**Type:**
```hcl
map(object({
  db_user : string
  db_password : string
  grants : list(object({
    grant : list(string)
    db : string
    schema : string
    object_type : string
  }))
  default_privileges : optional(list(object({
    role : string
    privileges : list(string)
    db : string
    schema : string
    object_type : string
  })), [])
  role_memberships : optional(list(string), [])
}))
```
**Default value:** `{ }`
`admin_password` (`string`) optional
postgresql password for the admin user
**Default value:** `""`
`aurora_postgres_component_name` (`string`) optional
Aurora Postgres component name to read the remote state from
**Default value:** `"aurora-postgres"`
`cluster_enabled` (`string`) optional
Set to `false` to prevent the module from creating any resources
**Default value:** `true`
`db_name` (`string`) optional
Database name (default is not to create a database)
**Default value:** `""`
`kms_key_arn` (`string`) optional
The ARN for the KMS encryption key.
**Default value:** `null`
`read_passwords_from_ssm` (`bool`) optional
When `true`, fetch user passwords from SSM
**Default value:** `true`
`ssm_password_source` (`string`) optional
If `var.read_passwords_from_ssm` is true, DB user passwords will be retrieved from SSM using `var.ssm_password_source` and the database username. If this value is not set, a default path will be created using the SSM path prefix and ID of the associated Aurora Cluster.
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`additional_databases`
Additional databases
`additional_grants`
Additional grants
`additional_schemas`
Additional schemas
`additional_users`
Additional users
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `postgresql`, version: `>= 1.17.1`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `postgresql`, version: `>= 1.17.1`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`additional_grants` | latest | `./modules/postgresql-user` | n/a
`additional_users` | latest | `./modules/postgresql-user` | n/a
`aurora_postgres` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`postgresql_database.additional`](https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/database) (resource)
- [`postgresql_schema.additional`](https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/schema) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.admin_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## postgresql-user
# Aurora Postgresql User
## Variables
### Required Variables
`service_name` (`string`) required
Name of service owning the database (used in SSM key)
### Optional Variables
`db_password` (`string`) optional
PostgreSQL password for the created user (generated if not provided)
**Default value:** `""`
`db_user` (`string`) optional
PostgreSQL user name to create (default is service name)
**Default value:** `""`
`default_privileges` optional
List of \{ role: "", privileges: [<grant>, <grant>, ...], db: "db", schema: "", object_type: "table" \}
Role refers to the target database role (user) that will be automatically granted the specified privileges when the user created by this module creates the specified objects.
**Type:**
```hcl
list(object({
  role : string
  privileges : list(string)
  db : string
  schema : optional(string, "")
  object_type : string
}))
```
**Default value:** `[ ]`
`kms_key_id` (`string`) optional
KMS key ID, ARN, or alias to use for encrypting the database
**Default value:** `"alias/aws/rds"`
`role_memberships` (`list(string)`) optional
List of roles to grant membership in for the user created by this module.
**Default value:** `[ ]`
`save_password_in_ssm` (`bool`) optional
If true, DB user's password will be stored in SSM
**Default value:** `true`
`ssm_path_prefix` (`string`) optional
SSM path prefix (without leading or trailing slash)
**Default value:** `"aurora-postgres"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`db_user`
DB user name
`db_user_password`
DB user password
`db_user_password_ssm_key`
SSM key under which user password is stored
`notice`
Note to user
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `postgresql`, version: `>= 1.17.1`
- `random`, version: `>= 2.3`
### Providers
- `postgresql`, version: `>= 1.17.1`
- `random`, version: `>= 2.3`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`parameter_store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`postgresql_default_privileges.default`](https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/default_privileges) (resource)
- [`postgresql_grant.default`](https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/grant) (resource)
- [`postgresql_grant_role.role_memberships`](https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/grant_role) (resource)
- [`postgresql_role.default`](https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/role) (resource)
- [`random_password.db_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
## Data Sources
The following data sources are used by this module:
None
---
## app
Auth0 Application component. [Auth0](https://auth0.com/docs/) is a third-party service that provides authentication and
authorization as a service. It is typically used to authenticate users.
An Auth0 application is a client that can request authentication and authorization from an Auth0 server. Auth0
applications can be of different types, such as regular web applications, single-page applications, machine-to-machine
applications, and others. Each application has a set of allowed origins, allowed callback URLs, and allowed web origins.
## Usage
Before deploying this component, you need to deploy the `auth0/tenant` component. This component authenticates with
the [Auth0 Terraform provider](https://registry.terraform.io/providers/auth0/auth0/latest/) using the Auth0 tenant's
client ID and client secret configured with the `auth0/tenant` component.
**Stack Level**: Global
Here's an example snippet for how to use this component.
:::important
Be sure that the context ID does not overlap with the context ID of other Auth0 components, such as `auth0/tenant`. We
use this ID to generate the SSM parameter names.
:::
```yaml
# stacks/catalog/auth0/app.yaml
components:
  terraform:
    auth0/app:
      vars:
        enabled: true
        name: "auth0-app"
        # plat-sandbox, plat-dev, and plat-staging all share a "nonprod" Auth0 tenant, which is deployed in plat-staging
        auth0_tenant_stage_name: "plat-staging"
        # Common client configuration
        grant_types:
          - "authorization_code"
          - "refresh_token"
          - "implicit"
          - "client_credentials"
        # Stage-specific client configuration
        callbacks:
          - "https://auth.acme-dev.com/login/auth0/callback"
        allowed_origins:
          - "https://*.acme-dev.com"
        web_origins:
          - "https://portal.acme-dev.com"
          - "https://auth.acme-dev.com"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`allowed_origins` (`list(string)`) optional
Allowed Origins
**Default value:** `[ ]`
`app_type` (`string`) optional
Auth0 Application Type
**Default value:** `"regular_web"`
`auth0_debug` (`bool`) optional
Enable debug mode for the Auth0 provider
**Default value:** `true`
`auth0_tenant_component_name` (`string`) optional
The name of the component
**Default value:** `"auth0/tenant"`
`auth0_tenant_environment_name` (`string`) optional
The name of the environment where the Auth0 tenant component is deployed. Defaults to the environment of the current stack.
**Default value:** `""`
`auth0_tenant_stage_name` (`string`) optional
The name of the stage where the Auth0 tenant component is deployed. Defaults to the stage of the current stack.
**Default value:** `""`
`auth0_tenant_tenant_name` (`string`) optional
The name of the tenant where the Auth0 tenant component is deployed. Yes this is a bit redundant, since Auth0 also calls this resource a tenant. Defaults to the tenant of the current stack.
**Default value:** `""`
`authentication_method` (`string`) optional
The authentication method for the client credentials
**Default value:** `"client_secret_post"`
`callbacks` (`list(string)`) optional
Allowed Callback URLs
**Default value:** `[ ]`
`cross_origin_auth` (`bool`) optional
Whether this client can be used to make cross-origin authentication requests (true) or it is not allowed to make such requests (false).
**Default value:** `false`
`grant_types` (`list(string)`) optional
Allowed Grant Types
**Default value:** `[ ]`
`jwt_alg` (`string`) optional
JWT Algorithm
**Default value:** `"RS256"`
`jwt_lifetime_in_seconds` (`number`) optional
JWT Lifetime in Seconds
**Default value:** `36000`
`logo_uri` (`string`) optional
Logo URI
**Default value:** `"https://cloudposse.com/wp-content/uploads/2017/07/CloudPosse2-TRANSAPRENT.png"`
`oidc_conformant` (`bool`) optional
OIDC Conformant
**Default value:** `true`
`ssm_base_path` (`string`) optional
The base path for the SSM parameters. If not defined, this is set to the module context ID. This is also required when `var.enabled` is set to `false`
**Default value:** `""`
`sso` (`bool`) optional
Single Sign-On for the Auth0 app
**Default value:** `true`
`web_origins` (`list(string)`) optional
Allowed Web Origins
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`auth0_client_id`
The Auth0 Application Client ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `auth0`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `auth0`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`auth0_ssm_parameters` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`auth0_tenant` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`iam_roles_auth0_provider` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`auth0_client.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/client) (resource)
- [`auth0_client_credentials.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/client_credentials) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.auth0_client_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.auth0_client_secret`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.auth0_domain`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## connection
Auth0 Connection component. [Auth0](https://auth0.com/docs/) is a third-party service that provides authentication and
authorization as a service. It is typically used to authenticate users.
An Auth0 connection is a bridge between Auth0 and an identity provider (IdP) that allows your application to
authenticate users. Auth0 supports many types of connections, including social identity providers such as Google,
Facebook, and Twitter, enterprise identity providers such as Microsoft Azure AD, and passwordless authentication methods
such as email and SMS.
## Usage
Before deploying this component, you must first deploy the `auth0/tenant` component. This component authenticates with
the [Auth0 Terraform provider](https://registry.terraform.io/providers/auth0/auth0/latest/) using the Auth0 tenant's
client ID and client secret configured by the `auth0/tenant` component.
**Stack Level**: Global
Here's an example snippet for how to use this component.
```yaml
# stacks/catalog/auth0/connection.yaml
components:
  terraform:
    auth0/connection:
      vars:
        enabled: true
        name: "auth0"
        # These must all be specified for the connection to be created
        strategy: "email"
        connection_name: "email"
        options_name: "email"
        email_from: "{{`{{ application.name }}`}} "
        email_subject: "Welcome to {{`{{ application.name }}`}}"
        syntax: "liquid"
        auth_params:
          scope: "openid profile"
          response_type: "code"
        totp:
          time_step: 895
          length: 6
        template_file: "templates/email.html"
        # Stage-specific configuration
        auth0_app_connections:
          - stage: sandbox
          - stage: dev
          - stage: staging
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`auth0_app_connections` optional
The list of Auth0 apps to add to this connection
**Type:**
```hcl
list(object({
  component   = optional(string, "auth0/app")
  environment = optional(string, "")
  stage       = optional(string, "")
  tenant      = optional(string, "")
}))
```
**Default value:** `[ ]`
`auth0_debug` (`bool`) optional
Enable debug mode for the Auth0 provider
**Default value:** `true`
`auth0_tenant_component_name` (`string`) optional
The name of the component
**Default value:** `"auth0/tenant"`
`auth0_tenant_environment_name` (`string`) optional
The name of the environment where the Auth0 tenant component is deployed. Defaults to the environment of the current stack.
**Default value:** `""`
`auth0_tenant_stage_name` (`string`) optional
The name of the stage where the Auth0 tenant component is deployed. Defaults to the stage of the current stack.
**Default value:** `""`
`auth0_tenant_tenant_name` (`string`) optional
The name of the tenant where the Auth0 tenant component is deployed. Yes this is a bit redundant, since Auth0 also calls this resource a tenant. Defaults to the tenant of the current stack.
**Default value:** `""`
`auth_params` optional
Query string parameters to be included as part of the generated passwordless email link.
**Type:**
```hcl
object({
  scope         = optional(string, null)
  response_type = optional(string, null)
})
```
**Default value:** `{ }`
`brute_force_protection` (`bool`) optional
Indicates whether to enable brute force protection, which will limit the number of signups and failed logins from a suspicious IP address.
**Default value:** `true`
`connection_name` (`string`) optional
The name of the connection
**Default value:** `""`
`disable_signup` (`bool`) optional
Indicates whether to allow user sign-ups to your application.
**Default value:** `false`
`email_from` (`string`) optional
When using an email strategy, the address to use as the sender
**Default value:** `null`
`email_subject` (`string`) optional
When using an email strategy, the subject of the email
**Default value:** `null`
`non_persistent_attrs` (`list(string)`) optional
If there are user fields that should not be stored in Auth0 databases due to privacy reasons, you can add them to the DenyList here.
**Default value:** `[ ]`
`options_name` (`string`) optional
The name of the connection options. Required for the email strategy.
**Default value:** `""`
`set_user_root_attributes` (`string`) optional
Determines whether to sync user profile attributes at each login or only on the first login. Options include: `on_each_login`, `on_first_login`.
**Default value:** `null`
`strategy` (`string`) optional
The strategy to use for the connection
**Default value:** `"auth0"`
`syntax` (`string`) optional
The syntax of the template body
**Default value:** `null`
`template` (`string`) optional
The template to use for the connection. If not provided, the `template_file` variable must be set.
**Default value:** `""`
`template_file` (`string`) optional
The path to the template file. If not provided, the `template` variable must be set.
**Default value:** `""`
`totp` optional
The TOTP settings for the connection
**Type:**
```hcl
object({
  time_step = optional(number, 900)
  length    = optional(number, 6)
})
```
**Default value:** `{ }`
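To make the `totp` settings concrete, here is a minimal RFC 6238 sketch in Python showing how `time_step` (seconds per code window) and `length` (number of digits) shape the generated one-time code. This is illustrative only; Auth0 generates and validates the codes server-side, and this sketch is not Auth0's implementation.

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, timestamp: int, time_step: int = 900, length: int = 6) -> str:
    # Counter = number of completed time steps since the Unix epoch (RFC 6238)
    counter = timestamp // time_step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): read 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Keep the low `length` decimal digits, zero-padded
    return str(code % (10 ** length)).zfill(length)
```

With a larger `time_step`, each code stays valid for a longer window; `length` simply controls how many decimal digits survive the final modulus.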
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`auth0_connection_id`
The Auth0 Connection ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `auth0`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `auth0`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`auth0_apps` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`auth0_tenant` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | [`../../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../../account-map/modules/iam-roles/) | n/a
`iam_roles_auth0_provider` | latest | [`../../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../../account-map/modules/iam-roles/) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`auth0_connection.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/connection) (resource)
- [`auth0_connection_clients.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/connection_clients) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.auth0_client_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.auth0_client_secret`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.auth0_domain`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## tenant
This component configures an [Auth0](https://auth0.com/docs/) tenant. This component is used to configure authentication
for the Terraform provider for Auth0 and to configure the Auth0 tenant itself.
## Usage
**Stack Level**: Global
Here's an example snippet for how to use this component.
```yaml
# catalog/auth0/tenant.yaml
components:
  terraform:
    auth0/tenant:
      vars:
        enabled: true
        # Make sure this name does not conflict with other Auth0 components, such as `auth0/app`
        name: auth0
        support_email: "tech@acme.com"
        support_url: "https://acme.com"
```
### Auth0 Tenant Creation
Chicken before the egg...
The Auth0 tenant must exist before we can manage it with Terraform. To create the Auth0 application used by the
[Auth0 Terraform provider](https://registry.terraform.io/providers/auth0/auth0/latest/), we must first create the Auth0
tenant. Once the Auth0 provider is configured, we can import the tenant into Terraform. However, the tenant is
not a resource identifiable by an ID within the Auth0 Management API, so on the first run we import the existing
tenant using an arbitrary random string; its value does not matter.
Terraform will use the same tenant as the Auth0 application for the Terraform Auth0 provider.
Create the Auth0 tenant now using the Auth0 Management API or the Auth0 Dashboard following
[the Auth0 create tenants documentation](https://auth0.com/docs/get-started/auth0-overview/create-tenants).
### Provider Pre-requisites
Once the Auth0 tenant is created or you've been given access to an existing tenant, you can configure the Auth0 provider
in Terraform. Follow the
[Auth0 provider documentation](https://registry.terraform.io/providers/auth0/auth0/latest/docs/guides/quickstart) to
create a Machine to Machine application.
:::tip
#### Machine to Machine App Name
Use the Context Label format for the machine name for consistency. For example, `acme-plat-gbl-prod-auth0-provider`.
:::
After creating the Machine to Machine application, add the app's domain, client ID, and client secret to AWS Systems
Manager Parameter Store in the same account and region as this component deployment. The paths for the parameters are
defined by the component deployment's Null Label context ID as follows:
```hcl
auth0_domain_ssm_path = "/${module.this.id}/domain"
auth0_client_id_ssm_path = "/${module.this.id}/client_id"
auth0_client_secret_ssm_path = "/${module.this.id}/client_secret"
```
For example, if we're deploying `auth0/tenant` into `plat-gbl-prod` and the default region is `us-west-2`, we would
add the following parameters to the `plat-prod` account in `us-west-2`:
:::important
Be sure that this AWS SSM parameter path does not conflict with SSM parameters used by other Auth0 components, such as
`auth0/app`. In both components, the SSM parameter paths are defined by the component deployment's context ID.
:::
```
/acme-plat-gbl-prod-auth0/domain
/acme-plat-gbl-prod-auth0/client_id
/acme-plat-gbl-prod-auth0/client_secret
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`support_email` (`string`) required
The email address to be displayed in the Auth0 Universal Login page.
`support_url` (`string`) required
The URL to be displayed in the Auth0 Universal Login page.
### Optional Variables
`allowed_logout_urls` (`list(string)`) optional
The URLs that Auth0 can redirect to after logout.
**Default value:** `[ ]`
`auth0_debug` (`bool`) optional
Enable debug mode for the Auth0 provider
**Default value:** `true`
`auth0_prompt_experience` (`string`) optional
Which prompt login experience to use. Options include classic and new.
**Default value:** `"new"`
`default_redirection_uri` (`string`) optional
The default redirection URI.
**Default value:** `""`
Whether to disclose enterprise connections.
**Default value:** `false`
`oidc_logout_prompt_enabled` (`bool`) optional
Whether the OIDC logout prompt is enabled.
**Default value:** `false`
`picture_url` (`string`) optional
The URL of the picture to be displayed in the Auth0 Universal Login page.
**Default value:** `"https://cloudposse.com/wp-content/uploads/2017/07/CloudPosse2-TRANSAPRENT.png"`
`provider_ssm_base_path` (`string`) optional
The base path for the SSM parameters. If not defined, this is set to the module context ID. This is also required when `var.enabled` is set to `false`
**Default value:** `""`
`sandbox_version` (`string`) optional
The sandbox version.
**Default value:** `"18"`
`sendgrid_api_key_ssm_path` (`string`) optional
The SSM path to the SendGrid API key. Only required if `email_provider_name` is `sendgrid`.
**Default value:** `""`
`session_cookie_mode` (`string`) optional
The session cookie mode.
**Default value:** `"persistent"`
`session_lifetime` (`number`) optional
The session lifetime in hours.
**Default value:** `168`
Whether to use scope descriptions for consent.
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`auth0_domain`
The Auth0 custom domain
`client_id_ssm_path`
The SSM parameter path for the Auth0 client ID
`client_secret_ssm_path`
The SSM parameter path for the Auth0 client secret
`domain_ssm_path`
The SSM parameter path for the Auth0 domain
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `auth0`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `auth0`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | [`../../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../../account-map/modules/iam-roles/) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`auth0_custom_domain.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/custom_domain) (resource)
- [`auth0_custom_domain_verification.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/custom_domain_verification) (resource)
- [`auth0_email_provider.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/email_provider) (resource)
- [`auth0_prompt.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/prompt) (resource)
- [`auth0_tenant.this`](https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/tenant) (resource)
- [`aws_route53_record.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.auth0_client_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.auth0_client_secret`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.auth0_domain`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.sendgrid_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## aws-backup
This component is responsible for provisioning an AWS Backup Plan.
It creates a schedule for backing up given ARNs.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
### Component Abstraction and Separation
By separating the "common" settings from the component, we can first provision the IAM role and AWS Backup vault to
prepare resources for future use without incurring cost.
For example, `stacks/catalog/aws-backup/common`:
```yaml
# This configuration creates the AWS Backup Vault and IAM Role, and does not incur any cost on its own.
# See: https://aws.amazon.com/backup/pricing/
components:
  terraform:
    aws-backup:
      metadata:
        type: abstract
      settings:
        spacelift:
          workspace_enabled: true
      vars: {}
    aws-backup/common:
      metadata:
        component: aws-backup
        inherits:
          - aws-backup
      vars:
        enabled: true
        iam_role_enabled: true # this will be reused
        vault_enabled: true # this will be reused
        plan_enabled: false
        ## Please be careful when enabling backup_vault_lock_configuration.
        ## `changeable_for_days` enables compliance mode; once the lock is set, the retention policy cannot be changed except by deleting the account!
        # backup_vault_lock_configuration:
        #   changeable_for_days: 36500
        #   max_retention_days: 365
        #   min_retention_days: 1
```
Then, to deploy the component into a given stack, we can import the following to deploy our backup plans.
Since most of these values are shared and common, we can put them in a `catalog/aws-backup/` YAML file and reuse them
across environments.
This makes deploying the same configuration to multiple environments easy.
`stacks/catalog/aws-backup/defaults`:
```yaml
import:
  - catalog/aws-backup/common
components:
  terraform:
    aws-backup/plan-defaults:
      metadata:
        component: aws-backup
        type: abstract
      settings:
        spacelift:
          workspace_enabled: true
          depends_on:
            - aws-backup/common
      vars:
        enabled: true
        iam_role_enabled: false # reuse from aws-backup-vault
        vault_enabled: false # reuse from aws-backup-vault
        plan_enabled: true
        plan_name_suffix: aws-backup-defaults
    aws-backup/daily-plan:
      metadata:
        component: aws-backup
        inherits:
          - aws-backup/plan-defaults
      vars:
        plan_name_suffix: aws-backup-daily
        # https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
        rules:
          - name: "plan-daily"
            schedule: "cron(0 5 ? * * *)"
            start_window: 320 # minutes
            completion_window: 10080 # 60 * 24 * 7 # minutes
            lifecycle:
              delete_after: 35 # 7 * 5 # days
        selection_tags:
          - type: STRINGEQUALS
            key: aws-backup/efs
            value: daily
          - type: STRINGEQUALS
            key: aws-backup/rds
            value: daily
    aws-backup/weekly-plan:
      metadata:
        component: aws-backup
        inherits:
          - aws-backup/plan-defaults
      vars:
        plan_name_suffix: aws-backup-weekly
        # https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
        rules:
          - name: "plan-weekly"
            schedule: "cron(0 5 ? * SAT *)"
            start_window: 320 # minutes
            completion_window: 10080 # 60 * 24 * 7 # minutes
            lifecycle:
              delete_after: 90 # 30 * 3 # days
        selection_tags:
          - type: STRINGEQUALS
            key: aws-backup/efs
            value: weekly
          - type: STRINGEQUALS
            key: aws-backup/rds
            value: weekly
    aws-backup/monthly-plan:
      metadata:
        component: aws-backup
        inherits:
          - aws-backup/plan-defaults
      vars:
        plan_name_suffix: aws-backup-monthly
        # https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
        rules:
          - name: "plan-monthly"
            schedule: "cron(0 5 1 * ? *)"
            start_window: 320 # minutes
            completion_window: 10080 # 60 * 24 * 7 # minutes
            lifecycle:
              delete_after: 2555 # 365 * 7 # days
              cold_storage_after: 90 # 30 * 3 # days
        selection_tags:
          - type: STRINGEQUALS
            key: aws-backup/efs
            value: monthly
          - type: STRINGEQUALS
            key: aws-backup/rds
            value: monthly
```
Deploying to a new stack (environment) then only requires:
```yaml
import:
- catalog/aws-backup/defaults
```
The above configuration can be used to deploy the same backup plans to a new region.
---
### Adding Resources to the Backup - Adding Tags
Once an `aws-backup` plan with `selection_tags` has been established, we can begin adding resources for it to
back up using the tagging method.
This only requires adding tags to the resources we wish to back up, which can be done with the following snippet:
```yaml
components:
  terraform:
    vars:
      tags:
        aws-backup/resource_schedule: "daily-14day-backup"
```
Just ensure the tag key-value pair matches what was added to your backup plan, and AWS will take care of the rest.
### Copying across regions
If we want to create a backup vault in another region that we can copy backups to, we need to create another vault and
then specify it as the copy destination.
To create a vault in another region:
```yaml
components:
  terraform:
    aws-backup:
      vars:
        plan_enabled: false # disables the plan (which schedules resource backups)
```
This will output an ARN, which you can then use as the destination in the rule object's `copy_action` (it will be
specific to that particular plan), as seen in the following snippet:
```yaml
components:
terraform:
aws-backup/plan-with-cross-region-replication:
metadata:
component: aws-backup
inherits:
- aws-backup/plan-defaults
vars:
plan_name_suffix: aws-backup-cross-region
# https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
rules:
- name: "plan-cross-region"
schedule: "cron(0 5 ? * * *)"
start_window: 320 # 60 * 8 # minutes
completion_window: 10080 # 60 * 24 * 7 # minutes
lifecycle:
delete_after: 35 # 7 * 5 # days
copy_action:
destination_vault_arn: "arn:aws:backup::111111111111:backup-vault:--"
lifecycle:
delete_after: 35
```
### Backup Lock Configuration
A backup vault lock can be enabled in one of two modes:
#### Compliance Mode
Vaults locked in compliance mode cannot be deleted once the cooling-off period ("grace time") expires. During grace
time, you can still remove the vault lock and change the lock configuration.
To enable **Compliance Mode**, set `changeable_for_days` to a value greater than 0. Once the lock takes effect, the
retention policy cannot be changed except by deleting the account!
```yaml
# Please be careful when enabling backup_vault_lock_configuration.
backup_vault_lock_configuration:
# `changeable_for_days` enables compliance mode; once the lock is set, the retention policy cannot be changed except by deleting the account!
changeable_for_days: 36500
max_retention_days: 365
min_retention_days: 1
```
#### Governance Mode
Vaults locked in governance mode can have the lock removed by users with sufficient IAM permissions.
To enable **Governance Mode**, omit `changeable_for_days`:
```yaml
backup_vault_lock_configuration:
max_retention_days: 365
min_retention_days: 1
```
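In other words, the mode is determined solely by whether `changeable_for_days` is present; a minimal sketch of that decision:

```python
def vault_lock_mode(changeable_for_days=None):
    # Per the descriptions above: setting changeable_for_days (> 0)
    # switches the vault lock from governance to compliance mode.
    return "compliance" if changeable_for_days is not None else "governance"

vault_lock_mode(36500)  # "compliance"
vault_lock_mode()       # "governance"
```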
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`advanced_backup_setting` optional
An object that specifies backup options for each resource type.
**Type:**
```hcl
object({
backup_options = map(string)
resource_type = string
})
```
**Default value:** `null`
`backup_resources` (`list(string)`) optional
An array of strings that either contain Amazon Resource Names (ARNs) or match patterns of resources to assign to a backup plan
**Default value:** `[ ]`
`backup_vault_lock_configuration` optional
The backup vault lock configuration. Each vault can have one vault lock in place. This enables Backup Vault Lock on an AWS Backup vault and prevents the deletion of backup data for the specified retention period. During this time, the backup data remains immutable and cannot be deleted or modified.
`changeable_for_days` - The number of days before the lock date. If omitted, this creates a vault lock in `governance` mode; otherwise, it creates a vault lock in `compliance` mode.
**Type:**
```hcl
object({
changeable_for_days = optional(number)
max_retention_days = optional(number)
min_retention_days = optional(number)
})
```
**Default value:** `null`
`iam_role_enabled` (`bool`) optional
Whether or not to create a new IAM Role and Policy Attachment
**Default value:** `true`
`kms_key_arn` (`string`) optional
The server-side encryption key that is used to protect your backups
**Default value:** `null`
`plan_enabled` (`bool`) optional
Whether or not to create a new Plan
**Default value:** `true`
`plan_name_suffix` (`string`) optional
The string appended to the plan name
**Default value:** `null`
`rules` optional
An array of rule maps used to define schedules in a backup plan
**Type:**
```hcl
list(object({
name = string
schedule = optional(string)
enable_continuous_backup = optional(bool)
start_window = optional(number)
completion_window = optional(number)
lifecycle = optional(object({
cold_storage_after = optional(number)
delete_after = optional(number)
opt_in_to_archive_for_supported_resources = optional(bool)
}))
copy_action = optional(object({
destination_vault_arn = optional(string)
lifecycle = optional(object({
cold_storage_after = optional(number)
delete_after = optional(number)
opt_in_to_archive_for_supported_resources = optional(bool)
}))
}))
}))
```
**Default value:** `[ ]`
`selection_tags` (`list(map(string))`) optional
An array of tag condition objects used to filter resources based on tags for assigning to a backup plan
**Default value:** `[ ]`
`vault_enabled` (`bool`) optional
Whether or not a new Vault should be created
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`backup_plan_arn`
Backup Plan ARN
`backup_plan_version`
Unique, randomly generated, Unicode, UTF-8 encoded string that serves as the version ID of the backup plan
`backup_selection_id`
Backup Selection ID
`backup_vault_arn`
Backup Vault ARN
`backup_vault_id`
Backup Vault ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`backup` | 1.1.1 | [`cloudposse/backup/aws`](https://registry.terraform.io/modules/cloudposse/backup/aws/1.1.1) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## aws-config
This component provisions AWS Config across all accounts in an AWS Organization. AWS Config is a service that enables
you to assess, audit, and evaluate the configurations of your AWS resources. It continuously monitors and records
configuration changes to your AWS resources and provides a detailed view of the relationships between those resources.
## Component Features
This component is responsible for:
- **Configuration Recording**: Deploys Configuration Recorders in each account and region to track resource configurations
- **Centralized Aggregation**: Configures a designated account (typically `security`) as the central aggregation point for all AWS Config data
- **Compliance Monitoring**: Deploys conformance packs to monitor resources for compliance with best practices and industry standards (e.g., CMMC, CIS, HIPAA)
- **Configuration Storage**: Delivers configuration snapshots and history to a centralized S3 bucket (typically in the `audit` account)
- **Organization-wide Conformance Packs**: Deploys organization conformance packs from the management account that automatically apply to all member accounts
- **SNS Topic Encryption**: Creates encrypted SNS topics for AWS Config notifications (required for CMMC compliance)
## New Features
This version includes several enhancements:
- **Local Conformance Pack Support**: Load conformance packs from local files in addition to remote URLs. This enables
custom packs, air-gapped deployments, and version-controlled compliance rules.
- **Organization Conformance Packs**: Deploy conformance packs organization-wide from the management account using the
`scope: organization` setting.
- **SNS Topic Encryption**: Built-in support for KMS encryption of AWS Config SNS topics (`sns_encryption_key_id`
variable) for CMMC compliance.
- **Flexible Component Naming**: The `global_collector_component_name_pattern` variable allows customization of how
the component looks up the global collector region's remote state.
- **GovCloud Support**: Full support for AWS GovCloud regions and partitions.
## Key AWS Config Capabilities
- **Configuration History**: Maintains a detailed history of changes to AWS resources, showing when changes were made, who made them, and what the changes were
- **Configuration Snapshots**: Takes periodic snapshots of resource configurations for point-in-time views
- **Compliance Monitoring**: Provides pre-built rules and checks for compliance with best practices and industry standards
- **Relationship Mapping**: Maps relationships between AWS resources to understand change impacts
- **Notifications and Alerts**: Sends notifications when configuration changes impact compliance or security posture
## Architecture
The component deploys a multi-account, multi-region architecture:
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ AWS Organization │
│ │
│ ┌────────────────────────────────────────────────────────────────────────┐ │
│ │ Management Account (Organization Conformance Packs) │ │
│ │ - Deploys organization-wide conformance packs │ │
│ │ - Packs automatically apply to all member accounts │ │
│ └────────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────────────────┐ │
│ │ Security Account (Central Aggregator) │ │
│ │ - AWS Config Aggregator (collects from ALL accounts) │ │
│ │ - Centralized compliance dashboard │ │
│ └────────────────────────────────────────────────────────────────────────┘ │
│ ▲ ▲ ▲ │
│ │ │ │ Aggregate Authorizations │
│ │ │ │ │
│ ┌────────────────────────────────────────────────────────────────────────┐ │
│ │ Audit Account │ │
│ │ - S3 Bucket (aws-config-bucket) │ │
│ │ - Stores ALL Config data from all accounts ◄───────────────┐ │ │
│ └──────────────────────────────────────────────────────────────│──────────┘ │
│ │ │
│ ┌────────────────────────────────────────────────────────────┐ │ │
│ │ Each Member Account │ │ │
│ │ │ │ │
│ │ Global Collector Region (e.g., us-east-1): │ │ │
│ │ ✓ Configuration Recorder │ │ │
│ │ ✓ IAM Role (created once per account) │ │ │
│ │ ✓ Tracks global resources (IAM, Route53, etc.) │ │ │
│ │ ✓ Aggregate Authorization → Security Account │─┘ │
│ │ ✓ Delivery Channel → S3 Bucket (audit) ────────────────────────────────┘
│ │ │ │
│ │ Additional Regions (e.g., us-west-2): │ │
│ │ ✓ Configuration Recorder │ │
│ │ ✓ References IAM Role from global collector region │ │
│ │ ✓ Tracks regional resources (EC2, VPC, RDS, etc.) │ │
│ │ ✓ Delivery Channel → S3 Bucket (audit) ────────────────────────────────┘
│ └────────────────────────────────────────────────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────────────────┘
```
### Architecture Benefits
- **Centralized Compliance**: Security team can view all resource configurations from one account
- **Cost Efficiency**: Single S3 bucket for all AWS Config data (in audit account)
- **Security Best Practices**: Aggregation in security account aligns with AWS Well-Architected Framework
- **Scalability**: Easy to add new accounts and regions without changing the aggregation setup
- **GovCloud Compatible**: Supports AWS GovCloud regions and partitions
:::warning
#### AWS Config Limitations
Be aware of these AWS Config limitations:
- **Maximum 1000 AWS Config rules** per account can be evaluated
- Mitigate by removing duplicate rules across packs
- Remove rules that don't apply to any resources
- Consider scheduling pack deployment with Lambda for more than 1000 rules
- See the [Audit Manager docs](https://aws.amazon.com/blogs/mt/integrate-across-the-three-lines-model-part-2-transform-aws-config-conformance-packs-into-aws-audit-manager-assessments/) for converting conformance packs to custom Audit Manager assessments
- **Maximum 50 conformance packs** per account
:::
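One way to stay under the 1000-rule limit is to inventory rule identifiers across your packs and remove duplicates before deploying. A sketch with hypothetical pack contents:

```python
# Hypothetical rule inventories extracted from two conformance packs
packs = {
    "cis-level2": {"ACCESS_KEYS_ROTATED", "IAM_PASSWORD_POLICY", "CLOUD_TRAIL_ENABLED"},
    "cmmc-l2": {"ACCESS_KEYS_ROTATED", "CLOUD_TRAIL_ENABLED", "GUARDDUTY_ENABLED_CENTRALIZED"},
}

total_deployed = sum(len(rules) for rules in packs.values())  # each pack's rules all count
unique_rules = set().union(*packs.values())                   # what you actually need
duplicates = total_deployed - len(unique_rules)               # candidates for removal
```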
## Prerequisites
Before deploying this AWS Config component:
1. **AWS Config Bucket**: The `aws-config-bucket` component must be provisioned first in the audit account:
```bash
atmos terraform apply aws-config-bucket -s core-ue1-audit
```
2. **Support IAM Role** (CIS AWS Foundations 1.20): A designated support IAM role should be deployed to every account:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSupport",
"Effect": "Allow",
"Action": ["support:*"],
"Resource": "*"
},
{
"Sid": "AllowTrustedAdvisor",
"Effect": "Allow",
"Action": "trustedadvisor:Describe*",
"Resource": "*"
}
]
}
```
3. **Service Access Principals** (for organization-level conformance packs): Enable trusted access for AWS Config in
your organization:
**How to Verify:**
```bash
aws organizations list-aws-service-access-for-organization | grep config
```
**Enable if Disabled:**
```bash
aws organizations enable-aws-service-access --service-principal config.amazonaws.com
aws organizations enable-aws-service-access --service-principal config-multiaccountsetup.amazonaws.com
```
Or if using our `account` component, add these principals to `aws_service_access_principals`.
## Usage
**Stack Level**: Regional
AWS Config is a regional service. The component must be deployed to each region where you want to track resources.
### Scope Configuration
The `default_scope` variable controls how conformance packs are deployed:
| Scope | Description | Use Case |
|-------|-------------|----------|
| `account` | Conformance packs deployed per-account | Member accounts |
| `organization` | Conformance packs deployed organization-wide | Management account only |
:::tip
#### Using Account Scope (Member Accounts)
For member accounts, use `default_scope: account`. The component will:
- Create a Configuration Recorder in each region
- Create an IAM role only in the global collector region
- Authorize the central aggregator account to collect data
- Deploy account-level conformance packs
:::
:::tip
#### Using Organization Scope (Management Account)
For the management account, use `default_scope: organization`. The component will:
- Deploy organization-wide conformance packs that apply to ALL member accounts
- Require the `config-multiaccountsetup.amazonaws.com` service access principal
:::
### Key Configuration Variables
| Variable | Description | Example |
|----------|-------------|---------|
| `global_resource_collector_region` | Region that tracks global resources (IAM, Route53) | `us-east-1` |
| `central_resource_collector_account` | Account that aggregates all Config data | `security` |
| `create_iam_role` | Set to `true` - component auto-detects global collector region | `true` |
| `config_bucket_*` | References the S3 bucket in audit account | See example below |
| `sns_encryption_key_id` | KMS key for SNS topic encryption (CMMC compliance) | `alias/aws/sns` |
### Catalog Configuration
#### Default Configuration (`stacks/catalog/aws-config/defaults.yaml`)
```yaml
components:
terraform:
aws-config/defaults:
metadata:
type: abstract
component: "aws-config"
vars:
enabled: true
default_scope: account
create_iam_role: true
az_abbreviation_type: fixed
account_map_component_name: "account-map"
account_map_tenant: core
root_account_stage: root
global_environment: gbl
global_resource_collector_region: "us-east-1"
central_resource_collector_account: security
config_bucket_component_name: "aws-config-bucket"
config_bucket_tenant: core
config_bucket_env: ue1
config_bucket_stage: audit
sns_encryption_key_id: "alias/aws/sns"
conformance_packs: []
```
#### Member Account Configuration (`stacks/catalog/aws-config/member-account.yaml`)
```yaml
import:
- catalog/aws-config/defaults
components:
terraform:
aws-config:
metadata:
component: "aws-config"
inherits:
- "aws-config/defaults"
```
#### Organization Account Configuration (`stacks/catalog/aws-config/organization.yaml`)
```yaml
import:
- catalog/aws-config/defaults
components:
terraform:
aws-config:
metadata:
component: "aws-config"
inherits:
- "aws-config/defaults"
vars:
default_scope: organization
conformance_packs:
- name: "Operational-Best-Practices-for-CIS-AWS-v1.4-Level2"
conformance_pack: "https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml"
parameter_overrides: {}
```
### Conformance Packs
Conformance packs define a collection of AWS Config rules for compliance monitoring. This component supports loading
conformance packs from **both remote URLs and local files**.
#### Local File Support (New Feature)
The component now supports loading conformance packs from the local filesystem in addition to remote URLs. This enables:
- **Custom conformance packs**: Create organization-specific compliance rules
- **Modified AWS packs**: Customize AWS-provided packs for your requirements
- **Air-gapped environments**: Deploy in environments without internet access
- **Version control**: Track conformance pack changes alongside infrastructure code
The component automatically detects whether the `conformance_pack` value is a URL (starts with `http://` or `https://`)
or a local file path. Local paths are resolved relative to the component's root directory.
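Under that assumption, the detection behaves like a simple prefix check (an illustration of the documented behavior, not the component's actual code):

```python
def is_remote_pack(conformance_pack: str) -> bool:
    # URLs are fetched over HTTP(S); anything else is treated as
    # a path relative to the component's root directory.
    return conformance_pack.startswith(("http://", "https://"))

is_remote_pack("https://raw.githubusercontent.com/awslabs/aws-config-rules/"
               "master/aws-config-conformance-packs/"
               "Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml")  # True
is_remote_pack("conformance-packs/custom-cmmc-pack.yaml")  # False
```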
#### Conformance Pack Examples
```yaml
conformance_packs:
# Remote URL (AWS Labs managed packs)
- name: "CIS-AWS-v1.4-Level2"
conformance_pack: "https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml"
parameter_overrides:
AccessKeysRotatedParamMaxAccessKeyAge: "45"
# Local file (relative to component directory)
- name: "Custom-CMMC-Pack"
conformance_pack: "conformance-packs/custom-cmmc-pack.yaml"
parameter_overrides: {}
# Another local file example
- name: "CMMC-Level2-Best-Practices"
conformance_pack: "conformance-packs/cmmc-l2-v2-AWS-Best-Practices.yaml"
parameter_overrides:
IamPasswordPolicyParamMaxPasswordAge: "60"
# Override scope for specific pack
- name: "Org-Wide-Security-Pack"
conformance_pack: "https://example.com/pack.yaml"
scope: "organization" # Override default_scope
parameter_overrides: {}
```
#### Creating Custom Conformance Packs
To create a custom conformance pack:
1. Create a `conformance-packs/` directory in your component:
```
components/terraform/aws-config/
├── conformance-packs/
│ ├── custom-security-rules.yaml
│ └── cmmc-l2-v2-customized.yaml
├── main.tf
├── variables.tf
└── ...
```
2. Define rules in CloudFormation format:
```yaml
# conformance-packs/custom-security-rules.yaml
Parameters:
MaxAccessKeyAge:
Default: '90'
Type: String
Resources:
AccessKeysRotated:
Type: AWS::Config::ConfigRule
Properties:
ConfigRuleName: custom-access-keys-rotated
InputParameters:
maxAccessKeyAge:
Ref: MaxAccessKeyAge
Source:
Owner: AWS
SourceIdentifier: ACCESS_KEYS_ROTATED
```
3. Reference the local file in your configuration:
```yaml
conformance_packs:
- name: "Custom-Security-Rules"
conformance_pack: "conformance-packs/custom-security-rules.yaml"
parameter_overrides:
MaxAccessKeyAge: "45"
```
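Conceptually, `parameter_overrides` take precedence over the defaults declared in the pack's `Parameters` block; the effective values are a simple merge (sketch):

```python
pack_defaults = {"MaxAccessKeyAge": "90"}        # from the pack's Parameters block
parameter_overrides = {"MaxAccessKeyAge": "45"}  # from the stack configuration

# Overrides win where keys collide
effective_parameters = {**pack_defaults, **parameter_overrides}
# {"MaxAccessKeyAge": "45"}
```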
### SNS Topic Encryption
AWS Config creates an SNS topic for notifications. For CMMC compliance, this topic must be encrypted:
```yaml
# Option 1: AWS Managed Key (Recommended)
sns_encryption_key_id: "alias/aws/sns"
# Option 2: Customer Managed KMS Key
sns_encryption_key_id: "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
```
## Deployment
### Provisioning Order
:::important
#### Critical: Deploy Member Accounts BEFORE Organization Account
Organization conformance packs require all member accounts to have configuration recorders already set up.
Always deploy member accounts first, then the organization/management account last.
:::
#### Step 1: Deploy to Member Accounts (Global Collector Region First)
All member accounts can be deployed in parallel:
```bash
# Core tenant accounts
atmos terraform apply aws-config -s core-ue1-audit
atmos terraform apply aws-config -s core-ue1-security
atmos terraform apply aws-config -s core-ue1-network
atmos terraform apply aws-config -s core-ue1-identity
atmos terraform apply aws-config -s core-ue1-dns
atmos terraform apply aws-config -s core-ue1-automation
# Platform tenant accounts (if applicable)
atmos terraform apply aws-config -s plat-ue1-dev
atmos terraform apply aws-config -s plat-ue1-staging
atmos terraform apply aws-config -s plat-ue1-prod
```
#### Step 2: Deploy to Organization/Management Account (LAST)
```bash
atmos terraform apply aws-config -s core-ue1-root
```
### Multi-Region Deployment
AWS Config is regional. For multi-region coverage, deploy to each region:
#### How Multi-Region Works
- **Global Collector Region** (e.g., `us-east-1`): Creates the IAM role, tracks global resources
- **Additional Regions** (e.g., `us-west-2`): References IAM role via remote state, tracks regional resources only
#### Prerequisites for Additional Regions
Add the aws-config import to regional baseline files:
```yaml
# stacks/orgs/acme/core/security/us-west-2/baseline.yaml
import:
- orgs/acme/core/security/_defaults
- mixins/region/us-west-2
- catalog/aws-config/member-account # Add this
```
#### Deploy Additional Regions
Follow the same order: member accounts first, then organization account.
```bash
# Step 1: Member accounts in us-west-2
atmos terraform apply aws-config -s core-uw2-audit
atmos terraform apply aws-config -s core-uw2-security
# ... all other member accounts
# Step 2: Organization account in us-west-2 (LAST)
atmos terraform apply aws-config -s core-uw2-root
```
## Known Issues and False Positives
### IAM Inline Policy Check - Service-Linked Roles
The `IAM_NO_INLINE_POLICY_CHECK` rule flags AWS Service-Linked Roles (SLRs) as NON_COMPLIANT. This is a **known false
positive**.
**Why This Happens:**
- AWS Service-Linked Roles are automatically created and managed by AWS services
- These roles **must** have inline policies by AWS design
- The rule cannot distinguish between user-created roles and AWS-managed SLRs
**Common SLRs That Trigger This Finding:**
| Service-Linked Role | Service |
|---------------------|---------|
| `AWSServiceRoleForAmazonGuardDuty` | GuardDuty |
| `AWSServiceRoleForConfig` | AWS Config |
| `AWSServiceRoleForSecurityHub` | Security Hub |
| `AWSServiceRoleForAccessAnalyzer` | IAM Access Analyzer |
| `AWSServiceRoleForAmazonMacie` | Macie |
| `AWSServiceRoleForInspector2` | Inspector |
**Recommended Action:**
- Document these as accepted false positives
- Focus remediation on NON_COMPLIANT findings for user-created roles (not starting with `AWSServiceRole`)
- Validate findings with: `aws iam get-role --role-name <role-name> --query 'Role.Path'`
- Service-linked roles have a path of the form `/aws-service-role/<service-principal>/` (e.g., `/aws-service-role/guardduty.amazonaws.com/`)
**For CMMC/Compliance Auditors:**
- Service-linked roles are AWS-managed and out of customer control
- CMMC framework recognizes AWS-managed resources as acceptable exceptions
- Document the exception with proper justification
### Verification Commands
```bash
# Verify SNS topic encryption
aws sns get-topic-attributes \
--topic-arn arn:aws:sns:us-east-1:123456789012:config-topic \
--query 'Attributes.KmsMasterKeyId'
# List service-linked roles
aws iam list-roles --query 'Roles[?starts_with(RoleName, `AWSServiceRole`)].RoleName'
# Check if role is service-linked
aws iam get-role --role-name AWSServiceRoleForAmazonGuardDuty --query 'Role.Path'
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`config_bucket_component_name` (`string`) optional
The name of the config-bucket component
**Default value:** `"config-bucket"`
`config_bucket_tenant` (`string`) optional
(Optional) The tenant of the AWS Config S3 Bucket
**Default value:** `""`
`config_component_name` (`string`) optional
The name of the aws config component (i.e., this component)
**Default value:** `"aws-config"`
`conformance_packs` optional
List of conformance packs. Each conformance pack is a map with the following keys: name, conformance_pack, parameter_overrides.
For example:
conformance_packs = [
{
name = "Operational-Best-Practices-for-CIS-AWS-v1.4-Level1"
conformance_pack = "https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level1.yaml"
parameter_overrides = {
"AccessKeysRotatedParamMaxAccessKeyAge" = "45"
}
},
{
name = "Operational-Best-Practices-for-CIS-AWS-v1.4-Level2"
conformance_pack = "https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml"
parameter_overrides = {
"IamPasswordPolicyParamMaxPasswordAge" = "45"
}
}
]
Complete list of AWS Conformance Packs managed by AWSLabs can be found here:
https://github.com/awslabs/aws-config-rules/tree/master/aws-config-conformance-packs
**Type:**
```hcl
list(object({
name = string
conformance_pack = string
parameter_overrides = map(any)
scope = optional(string, null)
}))
```
**Default value:** `[ ]`
`create_iam_role` (`bool`) optional
Flag to indicate whether an IAM Role should be created to grant the proper permissions for AWS Config
**Default value:** `false`
`default_scope` (`string`) optional
The default scope of the conformance pack. Valid values are `account` and `organization`.
**Default value:** `"account"`
`delegated_accounts` (`set(string)`) optional
The account IDs of other accounts that will send their AWS Configuration or Security Hub data to this account
**Default value:** `null`
`global_collector_component_name_pattern` (`string`) optional
A string formatting pattern used to construct or look up the name of the
global AWS Config collector region component.
This pattern should align with the regional naming convention of the
aws-config component. For example, if the pattern is "%s-%s" and you pass
("aws-config", "use1"), the resulting component name will be "aws-config-use1".
Adjust this pattern if your environment uses a different naming convention
for regional AWS Config components.
**Default value:** `"%s-%s"`
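The pattern is applied with standard printf-style formatting; for example, with the default pattern:

```python
pattern = "%s-%s"  # default global_collector_component_name_pattern
component_name = pattern % ("aws-config", "use1")
# "aws-config-use1"
```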
`global_environment` (`string`) optional
Global environment name
**Default value:** `"gbl"`
`iam_role_arn` (`string`) optional
The ARN for an IAM Role AWS Config uses to make read or write requests to the delivery channel and to describe the
AWS resources associated with the account. This is only used if create_iam_role is false.
If you want to use an existing IAM Role, set the variable to the ARN of the existing role and set create_iam_role to `false`.
See the AWS Docs for further information:
http://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html
**Default value:** `null`
`managed_rules` optional
A list of AWS Managed Rules that should be enabled on the account.
See the following for a list of possible rules to enable:
https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html
Example:
```hcl
managed_rules = {
  access-keys-rotated = {
    identifier  = "ACCESS_KEYS_ROTATED"
    description = "Checks whether the active access keys are rotated within the number of days specified in maxAccessKeyAge. The rule is NON_COMPLIANT if the access keys have not been rotated for more than maxAccessKeyAge number of days."
    input_parameters = {
      maxAccessKeyAge : "90"
    }
    enabled = true
    tags    = {}
  }
}
```
**Type:**
```hcl
map(object({
  description      = string
  identifier       = string
  input_parameters = any
  tags             = map(string)
  enabled          = bool
}))
```
**Default value:** `{ }`
`privileged` (`bool`) optional
True if the default provider already has access to the backend
**Default value:** `false`
`root_account_stage` (`string`) optional
The stage name for the Organization root (master) account
**Default value:** `"root"`
`sns_encryption_key_id` (`string`) optional
The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK.
Use "alias/aws/sns" for AWS managed key (recommended for compliance).
Use a custom KMS key ARN or alias for organization-specific encryption requirements.
IMPORTANT: This is required for CMMC compliance (cmmc-2-v2-sns-encrypted-kms rule).
The SNS topic created by AWS Config must be encrypted with KMS.
**Default value:** `"alias/aws/sns"`
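A minimal stack-level sketch of both options (the custom key alias is a placeholder):

```yaml
aws-config:
  vars:
    # AWS-managed key (the default, recommended for compliance):
    sns_encryption_key_id: "alias/aws/sns"
    # Or a customer-managed key for organization-specific requirements:
    # sns_encryption_key_id: "alias/org/config-sns"
```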
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`aws_config_configuration_recorder_id`
The ID of the AWS Config Recorder
`aws_config_iam_role`
The ARN of the IAM Role used for AWS Config
`storage_bucket_arn`
Storage Config bucket ARN
`storage_bucket_id`
Storage Config bucket ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`aws_config` | 1.5.3 | [`cloudposse/config/aws`](https://registry.terraform.io/modules/cloudposse/config/aws/1.5.3) | n/a
`aws_config_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`config_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`conformance_pack` | 1.5.3 | [`cloudposse/config/aws//modules/conformance-pack`](https://registry.terraform.io/modules/cloudposse/config/aws/modules/conformance-pack/1.5.3) | n/a
`global_collector_region` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`org_conformance_pack` | latest | `./modules/org-conformance-pack` (local module) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_partition.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_region.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) (data source)
---
## org-conformance-pack
This module deploys a
[Conformance Pack](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html). A conformance pack
is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account
and a Region or across an organization in AWS Organizations. Conformance packs are created by authoring a YAML template
that contains the list of AWS Config managed or custom rules and remediation actions.
The Conformance Pack cannot be deployed until AWS Config is deployed, which can be deployed using the
[aws-config](../../) component.
## Usage
First, make sure your root `account` allows the service access principal `config-multiaccountsetup.amazonaws.com` to
update child organizations. You can see the docs on the account module here:
[aws_service_access_principals](/components/library/aws/account/#aws_service_access_principals)
Then you have two options:
- Set the `default_scope` of the parent `aws-config` component to be `organization` (can be overridden by the `scope` of
each `conformance_packs` item)
- Set the `scope` of the `conformance_packs` item to be `organization`
### Conformance Pack Sources
The module supports both remote URLs and local file paths for conformance packs:
- **Remote URL**: Use `http://` or `https://` URLs to download conformance packs from remote sources
- **Local File**: Use relative paths (from the component root) to reference local conformance pack files
An example Atmos YAML stack config follows. Note that both options are shown for demonstration purposes; in
practice you should only have one `aws-config` component per account:
```yaml
components:
terraform:
account:
vars:
aws_service_access_principals:
- config-multiaccountsetup.amazonaws.com
aws-config/cis/level-1:
vars:
conformance_packs:
- name: Operational-Best-Practices-for-CIS-AWS-v1.4-Level1
conformance_pack: https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level1.yaml
scope: organization
aws-config/cis/level-2:
vars:
default_scope: organization
conformance_packs:
# Remote conformance pack (downloaded from URL)
- name: Operational-Best-Practices-for-CIS-AWS-v1.4-Level2
conformance_pack: https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml
# Local conformance pack (relative to component root)
- name: CMMC-Level-2
conformance_pack: conformance-packs/cmmc-l2-v2-AWS-Best-Practices.yaml
```
## Variables
### Required Variables
`conformance_pack` (`string`) required
The URL to a Conformance Pack (http:// or https://) or a local file path relative to the component root
### Optional Variables
`parameter_overrides` (`map(any)`) optional
A map of parameter names to values to override from the template
**Default value:** `{ }`
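For example, to override a template parameter (a sketch; the parameter name below is hypothetical -- use the parameter names defined in your conformance pack's YAML template):

```yaml
# Hypothetical override of a parameter declared in the pack's template:
parameter_overrides:
  AccessKeysRotatedParamMaxAccessKeyAge: "45"
```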
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
ARN for the AWS Config Organization Conformance Pack
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `http`, version: `>= 2.1.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `http`, version: `>= 2.1.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_config_organization_conformance_pack.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/config_organization_conformance_pack) (resource)
## Data Sources
The following data sources are used by this module:
- [`http_http.conformance_pack`](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) (data source)
---
## aws-inspector
This component is responsible for provisioning an
[AWS Inspector](https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html) by installing the
[Inspector agent](https://repost.aws/knowledge-center/set-up-amazon-inspector) across all EC2 instances and applying the
Inspector rules.
AWS Inspector is a security assessment service offered by Amazon Web Services (AWS). It helps you analyze and evaluate
the security and compliance of your applications and infrastructure deployed on AWS. AWS Inspector automatically
assesses the resources within your AWS environment, such as Amazon EC2 instances, for potential security vulnerabilities
and deviations from security best practices.
Here are some key features and functionalities of AWS Inspector:
- **Security Assessments:** AWS Inspector performs security assessments by analyzing the behavior of your resources and
identifying potential security vulnerabilities. It examines the network configuration, operating system settings, and
installed software to detect common security issues.
- **Vulnerability Detection:** AWS Inspector uses a predefined set of rules to identify common vulnerabilities,
misconfigurations, and security exposures. It leverages industry-standard security best practices and continuously
updates its knowledge base to stay current with emerging threats.
- **Agent-Based Architecture:** AWS Inspector utilizes an agent-based approach, where you install an Inspector agent on
your EC2 instances. The agent collects data about the system and its configuration, securely sends it to AWS
Inspector, and allows for more accurate and detailed assessments.
- **Security Findings:** After performing an assessment, AWS Inspector generates detailed findings that highlight
security vulnerabilities, including their severity level, impact, and remediation steps. These findings can help you
prioritize and address security issues within your AWS environment.
- **Integration with AWS Services:** AWS Inspector seamlessly integrates with other AWS services, such as AWS
CloudFormation, AWS Systems Manager, and AWS Security Hub. This allows you to automate security assessments, manage
findings, and centralize security information across your AWS infrastructure.
## Usage
Stack Level: Regional
Example stack snippet:
```yaml
components:
terraform:
aws-inspector:
vars:
enabled: true
enabled_rules:
- cis
```
The `aws-inspector` component can be included in a Terraform stack configuration. In the example, it is enabled with `enabled: true`. The `enabled_rules` variable specifies a list of rules to enable and uses short forms (e.g., `cis`) that automatically resolve to the correct rule package ARN for the target region. See the `var.enabled_rules` input for available short forms.
For a comprehensive list of rules and their corresponding ARNs, refer to the [Amazon Inspector ARNs for rules packages documentation](https://docs.aws.amazon.com/inspector/latest/userguide/inspector_rules-arns.html). Customize the configuration and enabled rules to tailor security assessments to your requirements and compliance standards.
## Variables
### Required Variables
`region` (`string`) required
AWS region
### Optional Variables
`enabled_rules` (`list(string)`) optional
A list of AWS Inspector rules that should run on a periodic basis.
Valid values are `cve`, `cis`, `nr`, `sbp` which map to the appropriate [Inspector rule arns by region](https://docs.aws.amazon.com/inspector/latest/userguide/inspector_rules-arns.html).
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`inspector`
The AWS Inspector module outputs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`inspector` | 0.4.0 | [`cloudposse/inspector/aws`](https://registry.terraform.io/modules/cloudposse/inspector/aws/0.4.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_association.install_agent`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_association) (resource)
## Data Sources
The following data sources are used by this module:
---
## aws-inspector2
This component is responsible for configuring Inspector V2 within an AWS Organization.
## Usage
**Stack Level**: Regional
## Deployment Overview
The deployment of this component requires multiple runs with different variable settings to properly configure the AWS
Organization. First, you delegate Inspector V2 central management to the Administrator account (usually `security`
account). After the Administrator account is delegated, we configure it to manage Inspector V2 across all the
Organization accounts and send all their findings to that account.
In the examples below, we assume that the AWS Organization Management account is `root` and the AWS Organization
Delegated Administrator account is `security`.
### Deploy to Organization Management Account
First, the component is deployed to the AWS Organization Management account `root` in each region in order to configure
the [AWS Delegated Administrator account](https://docs.aws.amazon.com/inspector/latest/user/designating-admin.html) that
operates Amazon Inspector V2.
```yaml
# ue1-root
components:
terraform:
aws-inspector2/delegate-orgadmin/ue1:
metadata:
component: aws-inspector2
vars:
enabled: true
region: us-east-1
```
### Deploy Organization Settings in Delegated Administrator Account
Now the component can be deployed to the Delegated Administrator Account `security` to create the organization-wide
configuration for all the Organization accounts. Note that `var.admin_delegated` set to `true` indicates that the
delegation has already been performed from the Organization Management account, and only the resources required for
organization-wide configuration will be created.
```yaml
# ue1-security
components:
terraform:
aws-inspector2/orgadmin-configuration/ue1:
metadata:
component: aws-inspector2
vars:
enabled: true
region: us-east-1
admin_delegated: true
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_map_tenant` (`string`) optional
The tenant where the `account_map` component required by remote-state is deployed
**Default value:** `"core"`
`admin_delegated` (`bool`) optional
A flag to indicate if the AWS Organization-wide settings should be created. This can only be done after the Inspector V2
Administrator account has already been delegated from the AWS Org Management account (usually 'root'). See the
Deployment section of the README for more information.
**Default value:** `false`
`auto_enable_ec2` (`bool`) optional
Whether Amazon EC2 scans are automatically enabled for new members of the Amazon Inspector organization.
**Default value:** `true`
`auto_enable_ecr` (`bool`) optional
Whether Amazon ECR scans are automatically enabled for new members of the Amazon Inspector organization.
**Default value:** `true`
`auto_enable_lambda` (`bool`) optional
Whether Lambda Function scans are automatically enabled for new members of the Amazon Inspector organization.
**Default value:** `true`
`organization_management_account_name` (`string`) optional
The name of the AWS Organization management account
**Default value:** `null`
`privileged` (`bool`) optional
true if the default provider already has access to the backend
**Default value:** `false`
`root_account_stage` (`string`) optional
The stage name for the Organization root (management) account. This is used to lookup account IDs from account names
using the `account-map` component.
**Default value:** `"root"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`aws_inspector2_member_association`
The Inspector2 member association resource.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 5.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Providers
- `aws`, version: `>= 5.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_inspector2_delegated_admin_account.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/inspector2_delegated_admin_account) (resource)
- [`aws_inspector2_enabler.delegated_admin`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/inspector2_enabler) (resource)
- [`aws_inspector2_enabler.member_accounts`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/inspector2_enabler) (resource)
- [`aws_inspector2_member_association.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/inspector2_member_association) (resource)
- [`aws_inspector2_organization_configuration.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/inspector2_organization_configuration) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
---
## aws-saml
This component provisions SAML metadata into AWS IAM as new SAML providers. For Okta integrations (when `okta` is
included in the key provided to the `saml_providers` input), it also creates an Okta API user and an associated Access
Key pair, and stores the credentials in AWS SSM Parameter Store.
## Usage
**Stack Level**: Global, in the account to which users will log in, typically only `identity`.
Here's an example snippet for how to use this component.
IMPORTANT: The given SAML metadata files must exist at the root of the module.
```yaml
components:
terraform:
aws-saml:
vars:
enabled: true
saml_providers:
example-okta: Okta_metadata_example.xml
example-gsuite: GoogleIDPMetadata-example.com.xml
```
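Because the metadata files must exist at the root of the module, vendor them into the component directory before applying. A sketch, assuming a conventional `components/terraform` layout and the example filenames above:

```shell
# Copy the IdP metadata files into the component's root directory
# (paths and filenames are illustrative; use your actual metadata exports)
cp Okta_metadata_example.xml components/terraform/aws-saml/
cp GoogleIDPMetadata-example.com.xml components/terraform/aws-saml/
```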
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`saml_providers` (`map(string)`) required
Map of provider names to XML data filenames
### Optional Variables
`attach_permissions_to_group` (`bool`) optional
If true, attach IAM permissions to a group rather than directly to the API user
**Default value:** `false`
`import_role_arn` (`string`) optional
IAM Role ARN to use when importing a resource
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`okta_api_users`
Map of OKTA API Users
`saml_provider_arns`
Map of SAML provider names to provider ARNs
`saml_provider_assume_role_policy`
JSON "assume role" policy document to use for roles allowed to log in via SAML
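Downstream components can consume these outputs through the Cloud Posse remote-state pattern. A minimal sketch, assuming the standard `remote-state` module and a hypothetical role name:

```hcl
# Illustrative only: read the aws-saml outputs from another component
module "aws_saml" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.8.0"

  component = "aws-saml"
  context   = module.this.context
}

# Use the rendered trust policy for a role that users assume via SAML
resource "aws_iam_role" "sso" {
  name               = "example-sso" # hypothetical name
  assume_role_policy = module.aws_saml.outputs.saml_provider_assume_role_policy
}
```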
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`okta_api_user` | latest | `./modules/okta-user` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_saml_provider.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_saml_provider) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.saml_provider_assume`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## okta-user
This submodule provisions an IAM user (and optionally an IAM group) for the Okta API integration, and stores the user's access key pair in AWS SSM Parameter Store.
## Variables
### Required Variables
### Optional Variables
`attach_permissions_to_group` (`bool`) optional
If true, attach IAM permissions to a group rather than directly to the API user
**Default value:** `false`
`kms_alias_name` (`string`) optional
The name of the KMS alias used for encryption/decryption of SSM parameters (API key)
**Default value:** `"alias/aws/ssm"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ssm_prefix`
Where to find the AWS API key information for the user
`user_arn`
User ARN
`user_name`
User name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_access_key.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key) (resource)
- [`aws_iam_group.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_group) (resource)
- [`aws_iam_group_membership.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_group_membership) (resource)
- [`aws_iam_group_policy_attachment.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_group_policy_attachment) (resource)
- [`aws_iam_policy.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_user.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user) (resource)
- [`aws_iam_user_policy_attachment.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user_policy_attachment) (resource)
- [`aws_ssm_parameter.okta_user_access_key_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.okta_user_secret_access_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## aws-shield
This component is responsible for enabling AWS Shield Advanced Protection for the following resources:
- Application Load Balancers (ALBs)
- CloudFront Distributions
- Elastic IPs (NAT Gateways, EC2 instances)
- Route53 Hosted Zones
## About AWS Shield
AWS Shield is a managed DDoS (Distributed Denial of Service) protection service that safeguards applications running on AWS.
**AWS Shield has two tiers:**
| Feature | Shield Standard | Shield Advanced |
|---------|-----------------|-----------------|
| **Cost** | Free (included with AWS) | $3,000/month per organization |
| **Protection** | Layer 3/4 (network/transport) | Layer 3/4/7 (includes application layer) |
| **Resources** | All AWS resources | Specific protected resources |
| **DRT Access** | No | Yes (24/7 DDoS Response Team) |
| **Cost Protection** | No | Yes (credits for DDoS-related scaling) |
| **Advanced Metrics** | No | Yes (CloudWatch metrics) |
| **WAF Integration** | Basic | Advanced (custom rules during attacks) |
This component configures **AWS Shield Advanced** protection for specific resources.
## Prerequisites
This component requires that the account where it is being provisioned has been
[subscribed to AWS Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/enable-ddos-prem.html).
**Important:** The Shield Advanced subscription is a **manual step** that must be completed before deploying this component:
```shell
# Subscribe via AWS CLI
aws shield create-subscription
# Or subscribe via AWS Console:
# AWS Shield → Getting started → Subscribe to Shield Advanced
```
This component assumes that resources it is configured to protect are not already protected by other components that
have their `xxx_aws_shield_protection_enabled` variable set to `true`.
## Usage
**Stack Level**: Global or Regional
AWS Shield Advanced protects both global and regional resources. Deploy this component to the appropriate stack level
based on the resources you want to protect:
| Resource Type | Stack Level | Example Stack |
|---------------|-------------|---------------|
| Route53 Hosted Zones | Global | `plat-gbl-prod-shield` |
| CloudFront Distributions | Global | `plat-gbl-prod-shield` |
| Application Load Balancers | Regional | `plat-use1-prod-shield` |
| Elastic IPs | Regional | `plat-use1-prod-shield` |
### Complete Example (All Resources)
The following snippet shows how to use all of this component's features in a stack configuration:
```yaml
components:
terraform:
aws-shield:
metadata:
component: aws-shield
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
# Global resources
route53_zone_names:
- example.com
- api.example.com
cloudfront_distribution_ids:
- E1ABCDEFG12345
- E2BCDEFGH23456
# Regional resources
alb_protection_enabled: true
alb_names:
- k8s-common-2c5f23ff99
- api-gateway-alb
eips:
- 3.214.128.240 # NAT Gateway AZ-a
- 35.172.208.150 # NAT Gateway AZ-b
- 35.171.70.50 # Bastion host
```
### Global Stack Configuration
A typical global configuration includes Route53 hosted zones and CloudFront distributions.
Global stacks typically don't have a VPC, so `alb_names` and `eips` should not be defined:
```yaml
# stacks/catalog/aws-shield/global.yaml
components:
terraform:
aws-shield:
metadata:
component: aws-shield
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
route53_zone_names:
- example.com
- internal.example.com
cloudfront_distribution_ids:
- E1ABCDEFG12345
```
### Regional Stack Configuration
Regional configurations protect ALBs and Elastic IPs. CloudFront distributions should not be defined
in regional stacks (they are global resources):
```yaml
# stacks/catalog/aws-shield/regional.yaml
components:
terraform:
aws-shield:
metadata:
component: aws-shield
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
# Protect ALBs by name
alb_protection_enabled: true
alb_names:
- k8s-common-2c5f23ff99
# Protect Elastic IPs (NAT Gateways, EC2 instances)
eips:
- 3.214.128.240
- 35.172.208.150
# Regional Route53 zones (if any)
route53_zone_names:
- us-east-1.example.com
```
### Auto-Discovery from EKS ALB Controller
When `alb_protection_enabled` is `true` and `alb_names` is empty, the component automatically discovers
ALB names from the `eks/alb-controller-ingress-group` component via remote state:
```yaml
components:
terraform:
aws-shield:
vars:
enabled: true
# Enable ALB protection with auto-discovery
alb_protection_enabled: true
# alb_names is intentionally empty - will be discovered from EKS ALB controller
```
### Catalog Defaults Pattern
Create a catalog defaults file that can be imported and customized per environment:
```yaml
# stacks/catalog/aws-shield/defaults.yaml
components:
terraform:
aws-shield:
metadata:
component: aws-shield
vars:
enabled: true
alb_protection_enabled: false
alb_names: []
eips: []
route53_zone_names: []
cloudfront_distribution_ids: []
```
Then import and override in your stack:
```yaml
# stacks/orgs/acme/platform/prod/us-east-1/shield.yaml
import:
- catalog/aws-shield/defaults
components:
terraform:
aws-shield:
vars:
alb_protection_enabled: true
alb_names:
- prod-api-alb
eips:
- 52.1.2.3
```
### Integration with Other Components
Some components expose their own `xxx_aws_shield_protection_enabled` variable. When a stack uses such a component, set
that variable to `true` on that component and leave the corresponding list in this component empty, relying on that
component's AWS Shield Advanced functionality instead. This simplifies inter-component dependencies and minimizes the
need to maintain provisioning order during a cold start.
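For example, a stack that provisions its own CloudFront distribution might delegate protection to that component rather than listing the distribution here. This is a sketch only; the component name and its `cloudfront_aws_shield_protection_enabled` variable are illustrative, so check the delegating component's documentation for the exact variable name:
```yaml
components:
  terraform:
    # Hypothetical component that exposes its own Shield toggle
    spa-cloudfront:
      vars:
        cloudfront_aws_shield_protection_enabled: true
    aws-shield:
      vars:
        enabled: true
        # Leave empty; the distribution is protected by its own component
        cloudfront_distribution_ids: []
```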
### Finding Resource Identifiers
Use the following AWS CLI commands to find resource identifiers:
```shell
# List ALB names
aws elbv2 describe-load-balancers --query 'LoadBalancers[*].LoadBalancerName' --output table
# List Elastic IPs
aws ec2 describe-addresses --query 'Addresses[*].[PublicIp,AllocationId,Tags[?Key==`Name`].Value|[0]]' --output table
# List Route53 hosted zones
aws route53 list-hosted-zones --query 'HostedZones[*].[Name,Id]' --output table
# List CloudFront distributions
aws cloudfront list-distributions --query 'DistributionList.Items[*].[Id,DomainName,Origins.Items[0].DomainName]' --output table
```
### Verifying Protection Status
After deployment, verify resources are protected:
```shell
# List all protected resources
aws shield list-protections --query 'Protections[*].[Name,ResourceArn]' --output table
# Describe a specific protection
aws shield describe-protection --resource-arn <resource-arn>
# Check subscription status
aws shield describe-subscription
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`alb_names` (`list(string)`) optional
List of ALB names which will be protected with AWS Shield Advanced
**Default value:** `[ ]`
`alb_protection_enabled` (`bool`) optional
Enable ALB protection. By default, ALB names are read from the EKS cluster ALB control group
**Default value:** `false`
`cloudfront_distribution_ids` (`list(string)`) optional
List of CloudFront Distribution IDs which will be protected with AWS Shield Advanced
**Default value:** `[ ]`
`eips` (`list(string)`) optional
List of Elastic IPs which will be protected with AWS Shield Advanced
**Default value:** `[ ]`
`route53_zone_names` (`list(string)`) optional
List of Route53 Hosted Zone names which will be protected with AWS Shield Advanced
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`application_load_balancer_protections`
AWS Shield Advanced Protections for ALBs
`cloudfront_distribution_protections`
AWS Shield Advanced Protections for CloudFront Distributions
`elastic_ip_protections`
AWS Shield Advanced Protections for Elastic IPs
`route53_hosted_zone_protections`
AWS Shield Advanced Protections for Route53 Hosted Zones
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0`
### Providers
- `aws`, version: `>= 4.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`alb` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_shield_protection.alb_shield_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/shield_protection) (resource)
- [`aws_shield_protection.cloudfront_shield_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/shield_protection) (resource)
- [`aws_shield_protection.eip_shield_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/shield_protection) (resource)
- [`aws_shield_protection.route53_zone_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/shield_protection) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_alb.alb`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/alb) (data source)
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_cloudfront_distribution.cloudfront_distribution`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/cloudfront_distribution) (data source)
- [`aws_eip.eip`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eip) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_route53_zone.route53_zone`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/route53_zone) (data source)
---
## aws-ssosync
# Component: `ssosync`
Deploys [AWS ssosync](https://github.com/awslabs/ssosync) to sync Google Groups with AWS SSO.
AWS `ssosync` is a Lambda application that regularly manages Identity Store users.
This component requires manual deployment by a privileged user because it deploys a role in the root or identity
management account.
## Usage
You should be able to deploy the `ssosync` component to the same account as `aws-sso`. Typically that is the `core-gbl-root` or `gbl-root` stack.
**Stack Level**: Global
**Deployment**: Must be deployed by a `managers` team member or SuperAdmin using the `atmos` CLI, since this is a root account deployment. This could also be deployed in an identity management account.
The following is an example snippet for how to use this component:
(`stacks/catalog/aws-ssosync.yaml`)
```yaml
components:
terraform:
ssosync:
vars:
enabled: true
name: ssosync
google_admin_email: admin@acme.com
log_format: text
log_level: warn
schedule_expression: "rate(15 minutes)"
      # Optionally filter which groups are synced (default: all groups).
      # Supports wildcards (`*`).
google_group_match:
- "email='developer@acme.com'"
- "email='aws@acme.com'"
- "name='Acme Team'"
```
We recommend following a similar process to what the [AWS ssosync](https://github.com/awslabs/ssosync) documentation
recommends.
### Deployment
Overview of steps:
1. Configure AWS IAM Identity Center
1. Configure Google Cloud console
1. Configure Google Admin console
1. Deploy the `aws-ssosync` component
1. Deploy the `aws-sso` component
#### 1. Configure AWS IAM Identity Center (AWS SSO)
Follow
[AWS documentation to configure SAML and SCIM with Google Workspace and IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/gs-gwp.html).
Complete steps 1-4. **Skip Step 5 (Google Workspace: Configure auto provisioning), as it cannot be completed in this setup.**
As part of this process, save the SCIM endpoint token and URL. Then in AWS SSM Parameter Store, create two
`SecureString` parameters in the same account used for AWS SSO. This is usually the root account in the primary region.
These can be found by clicking `Enable Automatic provisioning` in the AWS IAM Identity Center console.
```
# Typically looks like `https://scim.us-east-2.amazonaws.com/.../scim/v2`
/ssosync/scim_endpoint_url
# Typically looks like a base64 encoded value
/ssosync/scim_endpoint_access_token
```
Select `Settings`, under the `Identity Source` section, copy the `Identity Store ID` and create the following parameter:
```
# Typically looks like `d-000000aaaa`
/ssosync/identity_store_id
```
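These parameters can be created with the AWS CLI. A minimal sketch, assuming you are authenticated to the AWS SSO account and its primary region; the angle-bracket values are placeholders for what you copied from the console:

```shell
aws ssm put-parameter --name /ssosync/scim_endpoint_url \
  --type SecureString --value "<scim-endpoint-url>"
aws ssm put-parameter --name /ssosync/scim_endpoint_access_token \
  --type SecureString --value "<scim-access-token>"
aws ssm put-parameter --name /ssosync/identity_store_id \
  --type SecureString --value "<identity-store-id>"
```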
#### 2. Configure Google Cloud console
Within the [Google Cloud console](https://console.cloud.google.com), we need to create a new Google Project and Service Account and enable the Admin SDK
API. Follow these steps:
1. Create a new project. Give the project a descriptive name such as `AWS SSO Sync`
2. Enable the Admin SDK API: `APIs & Services > Enabled APIs & Services > + ENABLE APIS AND SERVICES`
3. Create a Service Account: `IAM & Admin > Service Accounts > Create Service Account`
   [(ref)](https://cloud.google.com/iam/docs/service-accounts-create).
4. Download credentials for the new Service Account:
   `IAM & Admin > Service Accounts > select Service Account > Keys > ADD KEY > Create new key > JSON`
5. Save the JSON credentials as a new `SecureString` AWS SSM parameter in the same account used for AWS SSO. Use the
   full JSON string as the value for the parameter.
```
/ssosync/google_credentials
```
#### 3. Configure Google Admin console
- Open the [Google Admin console](https://admin.google.com/)
- From your domain’s Admin console, go to `Main menu > Security > Access and data control > API controls`
- In the Domain wide delegation pane, select `Manage Domain Wide Delegation`.
- Click `Add new`.
- In the Client ID field, enter the `Unique ID` of the Service Account created in step 2; this should be a 22-digit numeric string.
- In the OAuth Scopes field, enter
```console
https://www.googleapis.com/auth/admin.directory.group.readonly,https://www.googleapis.com/auth/admin.directory.group.member.readonly,https://www.googleapis.com/auth/admin.directory.user.readonly
```
#### 4. Deploy the `aws-ssosync` component
Make sure that all four of the following SSM parameters exist in the target account and region:
- `/ssosync/scim_endpoint_url`
- `/ssosync/scim_endpoint_access_token`
- `/ssosync/identity_store_id`
- `/ssosync/google_credentials`
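Before deploying, you can confirm that all four parameters resolve. A sketch using the AWS CLI, assuming you are authenticated to the target account and region; an empty `InvalidParameters` list means everything exists:

```shell
aws ssm get-parameters \
  --names /ssosync/scim_endpoint_url \
          /ssosync/scim_endpoint_access_token \
          /ssosync/identity_store_id \
          /ssosync/google_credentials \
  --with-decryption \
  --query 'InvalidParameters'
```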
If deployed successfully, Groups and Users should be programmatically copied from the Google Workspace into AWS IAM
Identity Center on the given schedule.
If these Groups are not showing up, check the CloudWatch logs for the new Lambda function and refer to the [FAQ](#faq)
below.
#### 5. Deploy the `aws-sso` component
Use the names of the Groups now provisioned programmatically in the `aws-sso` component catalog. Follow the
[aws-sso](https://github.com/cloudposse-terraform-components/aws-ssosync/tree/main/aws-ssosync/../aws-sso/) component documentation to deploy the `aws-sso` component.
### FAQ
#### Why is the tool forked by `Benbentwo`?
The `awslabs` tool requires AWS Secrets Manager for the Google credentials. However, we prefer to use AWS SSM to store
all credentials consistently, rather than also requiring AWS Secrets Manager. Therefore we've created a Pull Request and
will point to a fork until the PR is merged.
Ref:
- https://github.com/awslabs/ssosync/pull/133
- https://github.com/awslabs/ssosync/issues/93
#### What should I use for the Google Admin Email Address?
The Service Account created will assume the User given by `--google-admin` / `SSOSYNC_GOOGLE_ADMIN` /
`var.google_admin_email`. Therefore, this user email must be a valid Google admin user in your organization.
This is not the same email as the Service Account.
If Google fails to query Groups, you may see the following error:
```console
Notifying Lambda and mark this execution as Failure: googleapi: Error 404: Domain not found., notFound
```
#### Common Group Name Query Error
If filtering group names using query strings, make sure the provided string is valid. For example,
`google_group_match: "name:aws*"` is incorrect. Instead use `google_group_match: "Name:aws*"`
If not, you may again see the same error message:
```console
Notifying Lambda and mark this execution as Failure: googleapi: Error 404: Domain not found., notFound
```
Ref:
> The specific error you are seeing is because the google api doesn't like the query string you provided for the -g
> parameter. try -g "Name:Fuel\*"
https://github.com/awslabs/ssosync/issues/91
## Variables
### Required Variables
`google_admin_email` (`string`) required
Google Admin email
`region` (`string`) required
AWS Region where AWS SSO is enabled
### Optional Variables
`architecture` (`string`) optional
Architecture of the Lambda function
**Default value:** `"x86_64"`
`google_credentials_ssm_path` (`string`) optional
SSM Path for `ssosync` secrets
**Default value:** `"/ssosync"`
`google_group_match` (`list(string)`) optional
Google Workspace group filter query parameter, example: 'name:Admin* email:aws-*', see: https://developers.google.com/admin-sdk/directory/v1/guides/search-groups
**Default value:** `[ ]`
`google_user_match` (`list(string)`) optional
Google Workspace user filter query parameter, example: 'name:John* email:admin*', see: https://developers.google.com/admin-sdk/directory/v1/guides/search-users
**Default value:** `[ ]`
`ignore_groups` (`string`) optional
Ignore these Google Workspace groups
**Default value:** `""`
`ignore_users` (`string`) optional
Ignore these Google Workspace users
**Default value:** `""`
`include_groups` (`string`) optional
Include only these Google Workspace groups (only applicable when `sync_method` is `user_groups`)
**Default value:** `""`
`log_format` (`string`) optional
Log format for Lambda function logging
**Default value:** `"json"`
`log_level` (`string`) optional
Log level for Lambda function logging
**Default value:** `"warn"`
`schedule_expression` (`string`) optional
Schedule for triggering the execution of `ssosync` (see CloudWatch schedule expressions)
**Default value:** `"rate(15 minutes)"`
`ssosync_url_prefix` (`string`) optional
URL prefix for ssosync binary
**Default value:** `"https://github.com/cloudposse/ssosync/releases/download"`
`ssosync_version` (`string`) optional
Version of ssosync to use
**Default value:** `"v3.0.0"`
`sync_method` (`string`) optional
Sync method to use
**Default value:** `"groups"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
ARN of the lambda function
`invoke_arn`
Invoke ARN of the lambda function
`qualified_arn`
ARN identifying your Lambda Function Version (if versioning is enabled via publish = true)
`ssosync_artifact_url`
URL of the ssosync artifact
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `archive`, version: `>= 2.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `null`, version: `>= 3.0`
- `random`, version: `>= 1.4.1`
### Providers
- `archive`, version: `>= 2.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `null`, version: `>= 3.0`
- `random`, version: `>= 1.4.1`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`ssosync_artifact` | 0.8.0 | [`cloudposse/module-artifact/external`](https://registry.terraform.io/modules/cloudposse/module-artifact/external/0.8.0) | This module is the resource that actually downloads the artifact from GitHub as a tar.gz
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`archive_file.lambda`](https://registry.terraform.io/providers/hashicorp/archive/latest/docs/resources/file) (resource)
- [`aws_cloudwatch_event_rule.ssosync`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_rule) (resource)
- [`aws_cloudwatch_event_target.ssosync`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target) (resource)
- [`aws_iam_role.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_lambda_function.ssosync`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function) (resource)
- [`aws_lambda_permission.allow_cloudwatch_execution`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_permission) (resource)
- [`null_resource.extract_my_tgz`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
- [`random_pet.zip_recreator`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.ssosync_lambda_assume_role`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.ssosync_lambda_identity_center`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_ssm_parameter.google_credentials`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.identity_store_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.scim_endpoint_access_token`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.scim_endpoint_url`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## aws-team-roles
This component is responsible for provisioning user and system IAM roles outside the `identity` account. It sets them up
to be assumed from the "team" roles defined in the `identity` account by [the `aws-teams` component](https://github.com/cloudposse-terraform-components/aws-team-roles/tree/main/aws-team-roles/../aws-teams)
and/or the AWS SSO permission sets defined in [the `aws-sso` component](https://github.com/cloudposse-terraform-components/aws-team-roles/tree/main/aws-team-roles/../aws-sso), and/or be directly accessible via
SAML logins.
### Privileges are Granted to Users via IAM Policies
Each role is granted permissions by attaching a list of IAM policies to the IAM role via its `role_policy_arns` list.
You can configure AWS managed policies by entering the ARNs of the policies directly into the list, or you can create a
custom policy as follows:
1. Give the policy a name, e.g. `eks-admin`. We will use `NAME` as a placeholder for the name in the instructions below.
2. Create a file in the `aws-team-roles` directory with the name `policy-NAME.tf`.
3. In that file, create a policy as follows:
```hcl
data "aws_iam_policy_document" "NAME" {
# Define the policy here
}
resource "aws_iam_policy" "NAME" {
name = format("%s-NAME", module.this.id)
policy = data.aws_iam_policy_document.NAME.json
tags = module.this.tags
}
```
4. Create a file named `additional-policy-map_override.tf` in the `aws-team-roles` directory (if it does not already
exist). This is a [terraform override file](https://developer.hashicorp.com/terraform/language/files/override),
meaning its contents will be merged with the main terraform file, and any locals defined in it will override locals
defined in other files. Having your code in this separate override file makes it possible for the component to
provide a placeholder local variable so that it works without customization, while allowing you to customize the
component and still update it without losing your customizations.
5. In that file, redefine the local variable `overridable_additional_custom_policy_map` map as follows:
```hcl
locals {
overridable_additional_custom_policy_map = {
NAME = aws_iam_policy.NAME.arn
}
}
```
If you have multiple custom policies, add each one to the map in the form `NAME = aws_iam_policy.NAME.arn`.
6. With that done, you can now attach that policy by adding the name to the `role_policy_arns` list. For example:
```yaml
role_policy_arns:
- "arn:aws:iam::aws:policy/job-function/ViewOnlyAccess"
- "NAME"
```
## Usage
**Stack Level**: Global
**Deployment**: Must be deployed by _SuperAdmin_ using `atmos` CLI
Here's an example snippet for how to use this component. This specific usage is an example only, and not intended for
production use. You set the defaults in one YAML file, and import that file into each account's Global stack (except for
the `identity` account itself). If desired, you can make account-specific changes by overriding settings, for example
- Disable entire roles in the account by setting `enabled: false`
- Limit who can access the role by setting a different value for `trusted_teams`
- Change the permissions available to that role by overriding the `role_policy_arns` (not recommended; instead, limit
  access to the role or create a different role with the desired set of permissions).
Note that when overriding, **maps are deep merged, but lists are replaced**. This means, for example, that your setting
of `trusted_primary_roles` in an override completely replaces the default, it does not add to it, so if you want to
allow an extra "primary" role to have access to the role, you have to include all the default "primary" roles in the
list, too, or they will lose access.
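As a sketch of this list-replacement behavior (role and team names here are illustrative): if the imported default trusts only the `admin` team and an account-level override wants to additionally trust a `devops` team, the override must repeat the default entry:

```yaml
# Imported default (shared YAML file)
roles:
  poweruser:
    trusted_teams: ["admin"]

# Account-level override: the list replaces the default, so "admin"
# must be repeated here or it loses access to this role.
roles:
  poweruser:
    trusted_teams: ["admin", "devops"]
```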
```yaml
components:
terraform:
aws-team-roles:
backend:
s3:
# Override the default Role for accessing the backend, because SuperAdmin is not allowed to assume that role
role_arn: null
vars:
enabled: true
roles:
# `template` serves as the default configuration for other roles via the YAML anchor.
# However, `atmos` does not support "import" of YAML anchors, so if you define a new role
# in another file, you will not be able to reference this anchor.
template: &user-template # If `enabled: false`, the role will not be created in this account
enabled: false
# `max_session_duration` set the maximum session duration (in seconds) for the IAM roles.
# This setting can have a value from 3600 (1 hour) to 43200 (12 hours).
# For roles people log into via SAML, a long duration is convenient to prevent them
# from having to frequently re-authenticate.
# For roles assumed from some other role, the setting is practically irrelevant, because
# the AssumeRole API limits the duration to 1 hour in any case.
# References:
# - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html
# - https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
max_session_duration: 3600 # 1 hour in seconds
# role_policy_arns are the IAM Policy ARNs to attach to this policy. In addition to real ARNs,
# you can use keys in the `custom_policy_map` in `main.tf` to select policies defined in the component.
# If you are using keys from the map, plans look better if you put them after the real role ARNs.
role_policy_arns: []
role_description: "Template role, should not exist"
# If `aws_saml_login_enabled: true` then the role will be available via SAML logins,
# but only via the SAML IDPs configured for this account.
# Otherwise, it will only be accessible via `assume role`.
aws_saml_login_enabled: false
## The following attributes control access to this role via `assume role`.
## `trusted_*` grants access, `denied_*` denies access.
## If a role is both trusted and denied, it will not be able to access this role.
# Permission sets specify users operating from the given AWS SSO permission set in this account.
trusted_permission_sets: []
denied_permission_sets: []
# Primary roles specify the short role names of roles in the primary (identity)
# account that are allowed to assume this role.
# BE CAREFUL: This is setting the default access for other roles.
trusted_teams: []
denied_teams: []
# Role ARNs specify Role ARNs in any account that are allowed to assume this role.
# BE CAREFUL: there is nothing limiting these Role ARNs to roles within our organization.
trusted_role_arns: []
denied_role_arns: []
##
## admin and terraform are the core team roles
##
admin:
<<: *user-template
enabled: true
role_policy_arns:
- "arn:aws:iam::aws:policy/AdministratorAccess"
role_description: "Full administration of this account"
trusted_teams: ["admin"]
terraform:
<<: *user-template
enabled: true
# We require Terraform to be allowed to create and modify IAM roles
# and policies (e.g. for EKS service accounts), so there is no use trying to restrict it.
# For better security, we could segregate components that needed
# administrative permissions and use a more restrictive role
# for Terraform, such as PowerUser (further restricted to deny AWS SSO changes).
role_policy_arns:
- "arn:aws:iam::aws:policy/AdministratorAccess"
role_description: "Role for Terraform administration of this account"
trusted_teams: ["admin", "spacelift"]
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`roles` required
A map of roles to configure in the accounts.
**Type:**
```hcl
map(object({
enabled = bool
denied_teams = list(string)
denied_permission_sets = list(string)
denied_role_arns = list(string)
max_session_duration = number # in seconds 3600 <= max <= 43200 (12 hours)
role_description = string
role_policy_arns = list(string)
aws_saml_login_enabled = bool
trusted_teams = list(string)
trusted_permission_sets = list(string)
trusted_role_arns = list(string)
}))
```
### Optional Variables
`aws_saml_component_name` (`string`) optional
The name of the aws-saml component
**Default value:** `"aws-saml"`
`import_role_arn` (`string`) optional
IAM Role ARN to use when importing a resource
**Default value:** `null`
Map where keys are role names (same keys as `roles`) and values are lists of
GitHub repositories allowed to assume those roles. See `account-map/modules/github-assume-role-policy.mixin.tf`
for specifics about repository designations.
**Default value:** `{ }`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`role_name_role_arn_map`
Map of role names to role ARNs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `local`, version: `>= 1.3`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `local`, version: `>= 1.3`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`assume_role` | latest | `../account-map/modules/team-assume-role-policy` | n/a
`aws_saml` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_policy.eks_viewer`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_policy.kms_planner`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_policy.vpn_planner`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role_policy_attachment.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`local_file.account_info`](https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.assume_role_aggregated`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.eks_view_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.eks_viewer_access_aggregated`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.kms_planner_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.kms_planner_access_aggregated`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.vpn_planner_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.vpn_planner_access_aggregated`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## aws-teams
This component is responsible for provisioning all primary user and system roles into the centralized identity account.
This is expected to be used alongside [the `aws-team-roles` component](https://github.com/cloudposse-terraform-components/aws-teams/tree/main/aws-teams/../aws-team-roles) to provide fine-grained role
delegation across the account hierarchy.
### Teams Function Like Groups and are Implemented as Roles
The "teams" created in the `identity` account by this module can be thought of as access control "groups": a user who is
allowed access to one of these teams gets access to a set of roles (and corresponding permissions) across a set of
accounts. Generally, there is nothing else provisioned in the `identity` account, so the teams have limited access to
resources in the `identity` account by design.
Teams are implemented as IAM Roles in each account. Access to the "teams" in the `identity` account is controlled by the
`aws-saml` and `aws-sso` components. Access to the roles in all the other accounts is controlled by the "assume role"
policies of those roles, which allow the "team" or AWS SSO Permission set to assume the role (or not).
### Privileges are Defined for Each Role in Each Account by `aws-team-roles`
Every account besides the `identity` account has a set of IAM roles created by the `aws-team-roles` component. In that
component, the account's roles are assigned privileges, and those privileges ultimately determine what a user can do in
that account.
Access to the roles can be granted in a number of ways. One way is by listing "teams" created by this component as
"trusted" (`trusted_teams`), meaning that users who have access to the team role in the `identity` account are allowed
(trusted) to assume the role configured in the target account. Another is by listing an AWS SSO Permission Set in the
account (`trusted_permission_sets`).
### Role Access is Enabled by SAML and/or AWS SSO configuration
Users can gain access to a role in the `identity` account through either (or both) of two mechanisms:
#### SAML Access
- SAML access is globally configured via the `aws-saml` component, enabling an external SAML Identity Provider (IdP) to
control access to roles in the `identity` account. (SAML access can be separately configured for other accounts, see
the `aws-saml` and `aws-team-roles` components for more on that.)
- Individual roles are enabled for SAML access by setting `aws_saml_login_enabled: true` in the role configuration.
- Individual users are granted access to these roles by configuration in the SAML IdP.
#### AWS SSO Access
The `aws-sso` component can create AWS Permission Sets that allow users to assume specific roles in the `identity`
account. See the `aws-sso` component for details.
## Known Problems
### Error: `assume role policy: LimitExceeded: Cannot exceed quota for ACLSizePerRole: 2048`
The `aws-teams` architecture, when enabling access to a role via many AWS SSO permission sets, can create "assume
role" policies large enough to exceed the default quota of 2048 characters. If you run into this limitation, you will
get an error like this:
```
Error: error updating IAM Role (acme-gbl-root-tfstate-backend-analytics-ro) assume role policy: LimitExceeded: Cannot exceed quota for ACLSizePerRole: 2048
```
This can happen in either/both the `identity` and `root` accounts (for Terraform state access). So far, we have always
been able to resolve this by requesting a quota increase, which is automatically granted a few minutes after making the
request. To request the quota increase:
- Log in to the AWS Web console as admin in the affected account
- Set your region to N. Virginia `us-east-1`
- Navigate to the Service Quotas page via the account dropdown menu
- Click on AWS Services in the left sidebar
- Search for "IAM" and select "AWS Identity and Access Management (IAM)". (If you don't find that option, make sure you
  have selected the `us-east-1` region.)
- Find and select "Role trust policy length"
- Request an increase to 4096 characters
- Wait for the request to be approved, usually less than a few minutes
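The same increase can be requested with the AWS CLI via Service Quotas in `us-east-1`. This is a sketch: the quota code is looked up first rather than hard-coded, since quota codes are easy to mistype:

```shell
# Find the quota code for "Role trust policy length"
aws service-quotas list-service-quotas --service-code iam --region us-east-1 \
  --query "Quotas[?QuotaName=='Role trust policy length']"

# Request the increase, substituting the QuotaCode from the output above
aws service-quotas request-service-quota-increase \
  --service-code iam --quota-code <QuotaCode> \
  --desired-value 4096 --region us-east-1
```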
## Usage
**Stack Level**: Global
**Deployment**: Must be deployed by _SuperAdmin_ using `atmos` CLI
Here's an example snippet for how to use this component. The component should only be applied once, which is typically
done via the identity stack (e.g. `gbl-identity.yaml`).
```yaml
components:
terraform:
aws-teams:
backend:
s3:
role_arn: null
vars:
teams_config:
# Viewer has the same permissions as Observer but only in this account. It is not allowed access to other accounts.
# Viewer also serves as the default configuration for all roles via the YAML anchor.
viewer: &user-template
# `max_session_duration` set the maximum session duration (in seconds) for the IAM roles.
# This setting can have a value from 3600 (1 hour) to 43200 (12 hours).
# For roles people log into via SAML, a long duration is convenient to prevent them
# from having to frequently re-authenticate.
# For roles assumed from some other role, the setting is practically irrelevant, because
# the AssumeRole API limits the duration to 1 hour in any case.
# References:
# - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html
# - https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
max_session_duration: 43200 # 12 hours in seconds
# role_policy_arns are the IAM Policy ARNs to attach to this policy. In addition to real ARNs,
# you can use keys in the `custom_policy_map` in `main.tf` to select policies defined in the component.
# If you are using keys from the map, plans look better if you put them after the real role ARNs.
role_policy_arns:
- "arn:aws:iam::aws:policy/job-function/ViewOnlyAccess"
role_description: "Team restricted to viewing resources in the identity account"
# If `aws_saml_login_enabled: true` then the role will be available via SAML logins.
# Otherwise, it will only be accessible via `assume role`.
aws_saml_login_enabled: false
# The following attributes control access to this role via `assume role`.
# `trusted_*` grants access, `denied_*` denies access.
# If a role is both trusted and denied, it will not be able to access this role.
# Permission sets specify users operating from the given AWS SSO permission set in this account.
trusted_permission_sets: []
denied_permission_sets: []
# Primary roles specify the short role names of roles in the primary (identity)
# account that are allowed to assume this role.
trusted_teams: []
denied_teams: ["viewer"]
# Role ARNs specify Role ARNs in any account that are allowed to assume this role.
# BE CAREFUL: there is nothing limiting these Role ARNs to roles within our organization.
trusted_role_arns: []
denied_role_arns: []
admin:
<<: *user-template
role_description:
"Team with PowerUserAccess permissions in `identity` and AdministratorAccess to all other accounts except
`root`"
# Limit `admin` to Power User to prevent accidentally destroying the admin role itself
# Use SuperAdmin to administer IAM access
role_policy_arns: ["arn:aws:iam::aws:policy/PowerUserAccess"]
# TODO Create a "security" team with AdministratorAccess to audit and security, remove "admin" write access to those accounts
aws_saml_login_enabled: true
# list of roles in primary that can assume into this role in delegated accounts
# primary admin can assume delegated admin
trusted_teams: ["admin"]
# GH runner should be moved to its own `ghrunner` role
trusted_permission_sets: ["IdentityAdminTeamAccess"]
spacelift:
<<: *user-template
role_description: Team for our privileged Spacelift server
role_policy_arns:
- team_role_access
aws_saml_login_enabled: false
trusted_teams:
- admin
trusted_role_arns: ["arn:aws:iam::123456789012:role/eg-ue2-auto-spacelift-worker-pool-admin"]
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`teams_config` required
A map of teams to configure in the accounts.
**Type:**
```hcl
map(object({
denied_teams = list(string)
denied_permission_sets = list(string)
denied_role_arns = list(string)
max_session_duration = number # in seconds 3600 <= max <= 43200 (12 hours)
role_description = string
role_policy_arns = list(string)
aws_saml_login_enabled = bool
allowed_roles = optional(map(list(string)), {})
trusted_teams = list(string)
trusted_permission_sets = list(string)
trusted_role_arns = list(string)
}))
```
### Optional Variables
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
Map where keys are role names (same keys as `teams_config`) and values are lists of
GitHub repositories allowed to assume those roles. See `account-map/modules/github-assume-role-policy.mixin.tf`
for specifics about repository designations.
**Default value:** `{ }`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`role_arns`
List of role ARNs
`team_name_role_arn_map`
Map of team names to role ARNs
`team_names`
List of team names
`teams_config`
Map of team config with name, target arn, and description
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `local`, version: `>= 1.3`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `local`, version: `>= 1.3`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`assume_role` | latest | `../account-map/modules/team-assume-role-policy` | n/a
`aws_saml` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_policy.team_role_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role_policy_attachment.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`local_file.account_info`](https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.assume_role_aggregated`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.team_role_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## Terraform Components (AWS)
import Intro from '@site/src/components/Intro';
import DocCardList from '@theme/DocCardList';
This is a library of reusable Terraform "root module" components.
---
## bastion
This component provisions a generic Bastion host within an Auto Scaling Group (ASG) with parameterized `user_data` and
supports AWS SSM Session Manager for remote access with IAM authentication.
To run a special `container.sh` script, set `container_enabled` to `true`, and set the `image_repository` and
`image_container` variables.
By default, this component acts as an "SSM Bastion", which is deployed to a private subnet and has SSM enabled, allowing
access via the AWS Console, AWS CLI, or SSM Session tools such as [aws-gate](https://github.com/xen0l/aws-gate).
Alternatively, this component can be used as a regular SSH Bastion, deployed to a public subnet with Security Group
rules allowing inbound traffic over port 22.
## Usage
**Stack Level**: Regional
By default, this component can be used as an "SSM Bastion" (deployed to a private subnet, accessed via SSM):
```yaml
components:
  terraform:
    bastion:
      vars:
        enabled: true
        name: bastion-ssm
        # Your choice of availability zones. If not specified, all private subnets are used.
        availability_zones: ["us-east-1a", "us-east-1b", "us-east-1c"]
        instance_type: t3.micro
        image_container: infrastructure:latest
        image_repository: "111111111111.dkr.ecr.us-east-1.amazonaws.com/example/infrastructure"
```
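Once the SSM bastion above is deployed, a session can be started from the AWS CLI. The `Name` tag value below is illustrative only; the actual tag is the full null-label `id` generated from your namespace, environment, and stage:

```shell
# Look up the bastion instance by its Name tag (tag value is illustrative)
INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=acme-ue1-dev-bastion-ssm" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[0].Instances[0].InstanceId" \
  --output text)

# Start an interactive session (requires the AWS CLI Session Manager plugin)
aws ssm start-session --target "$INSTANCE_ID"
```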
The following is an example snippet for how to use this component as a traditional bastion:
```yaml
components:
  terraform:
    bastion:
      vars:
        enabled: true
        name: bastion-traditional
        image_container: infrastructure:latest
        image_repository: "111111111111.dkr.ecr.us-east-1.amazonaws.com/example/infrastructure"
        associate_public_ip_address: true # deploy to public subnet and associate public IP with instance
        custom_bastion_hostname: bastion
        vanity_domain: example.com
        security_group_rules:
          - type: "ingress"
            from_port: 22
            to_port: 22
            protocol: tcp
            cidr_blocks: ["1.2.3.4/32"]
          - type: "egress"
            from_port: 0
            to_port: 0
            protocol: -1
            cidr_blocks: ["0.0.0.0/0"]
```
## Variables
### Required Variables
`region` (`string`) required
AWS region
### Optional Variables
`associate_public_ip_address` (`bool`) optional
Whether to associate a public IP address with the instance.
**Default value:** `false`
`availability_zones` (`list(string)`) optional
AWS Availability Zones in which to deploy multi-AZ resources.
If not provided, resources will be provisioned in every private subnet in the VPC.
**Default value:** `[ ]`
`container_command` (`string`) optional
The container command passed in after `docker run --rm -it <image> bash -c`.
**Default value:** `"bash"`
`image_container` (`string`) optional
The image container to use in `container.sh`.
**Default value:** `""`
`image_repository` (`string`) optional
The image repository to use in `container.sh`.
**Default value:** `""`
`instance_type` (`string`) optional
Bastion instance type
**Default value:** `"t2.micro"`
`kms_alias_name_ssm` (`string`) optional
KMS alias name for SSM
**Default value:** `"alias/aws/ssm"`
`security_group_rules` (`list(any)`) optional
A list of maps of Security Group rules.
The keys and values of each map correspond to the arguments of the `aws_security_group_rule` resource.
For more information, see https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule.
**Default value:**
```hcl
[
  {
    "cidr_blocks": [
      "0.0.0.0/0"
    ],
    "from_port": 0,
    "protocol": -1,
    "to_port": 0,
    "type": "egress"
  },
  {
    "cidr_blocks": [
      "0.0.0.0/0"
    ],
    "from_port": 22,
    "protocol": "tcp",
    "to_port": 22,
    "type": "ingress"
  }
]
```
`vpc_component_name` (`string`) optional
Name of the VPC component to look up via remote state
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`autoscaling_group_id`
The AutoScaling Group ID
`iam_instance_profile`
Name of AWS IAM Instance Profile
`security_group_id`
ID of the AWS Security Group associated with the ASG
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `cloudinit`, version: `>= 2.2`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `cloudinit`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`bastion_autoscale_group` | 0.43.1 | [`cloudposse/ec2-autoscale-group/aws`](https://registry.terraform.io/modules/cloudposse/ec2-autoscale-group/aws/0.43.1) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`sg` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`ssm_tls_ssh_key_pair` | 0.10.2 | [`cloudposse/ssm-tls-ssh-key-pair/aws`](https://registry.terraform.io/modules/cloudposse/ssm-tls-ssh-key-pair/aws/0.10.2) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_instance_profile.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile) (resource)
- [`aws_iam_role.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role_policy.main`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ami.bastion_image`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami) (data source)
- [`aws_iam_policy_document.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.main`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`cloudinit_config.config`](https://registry.terraform.io/providers/hashicorp/cloudinit/latest/docs/data-sources/config) (data source)
---
## budget
This component is responsible for provisioning AWS Budgets, with optional Slack notifications for budget alerts.
## Usage
**Stack Level**: Regional or Global
Here's an example snippet for how to use this component.
```yaml
components:
  terraform:
    budget:
      vars:
        enabled: true
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`budgets` (`any`) optional
A list of Budgets to be managed by this module. See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/budgets_budget#argument-reference
for a list of possible attributes. For a more specific example, see https://github.com/cloudposse/terraform-aws-budgets/blob/master/examples/complete/fixtures.us-east-2.tfvars.
**Default value:** `[ ]`
`notifications_enabled` (`bool`) optional
Whether or not to set up Slack notifications for Budgets. Set to `true` to create an SNS topic and Lambda function to send alerts to a Slack channel.
**Default value:** `false`
`slack_channel` (`string`) optional
The name of the channel in Slack for notifications. Only used when `notifications_enabled` is `true`
**Default value:** `""`
`slack_username` (`string`) optional
The username that will appear on Slack messages. Only used when `notifications_enabled` is `true`
**Default value:** `""`
`slack_webhook_url` (`string`) optional
The URL of the Slack webhook. Only used when `notifications_enabled` is `true`
**Default value:** `""`
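As a more concrete sketch, a monthly cost budget with Slack notifications might look like the following. Attribute names inside each budget follow the `aws_budgets_budget` resource's argument reference; the channel, webhook, amounts, and thresholds are all illustrative assumptions:

```yaml
components:
  terraform:
    budget:
      vars:
        enabled: true
        notifications_enabled: true
        slack_channel: "aws-budget-alerts" # illustrative
        slack_username: "AWS Budgets"
        slack_webhook_url: "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"
        budgets:
          - name: total-monthly
            budget_type: COST
            limit_amount: "1000"
            limit_unit: USD
            time_unit: MONTHLY
            notification:
              comparison_operator: GREATER_THAN
              threshold: 80
              threshold_type: PERCENTAGE
              notification_type: FORECASTED
```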
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
---
Type of the Cloud Map Namespace
**Default value:** `"http"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
The ARN of the namespace
`id`
The ID of the namespace
`name`
The name of the namespace
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_service_discovery_http_namespace.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/service_discovery_http_namespace) (resource)
- [`aws_service_discovery_private_dns_namespace.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/service_discovery_private_dns_namespace) (resource)
- [`aws_service_discovery_public_dns_namespace.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/service_discovery_public_dns_namespace) (resource)
## Data Sources
The following data sources are used by this module:
---
## cloudtrail
This component is responsible for provisioning CloudTrail auditing in an individual AWS account. It's expected to be used alongside
[the `cloudtrail-bucket` component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/cloudtrail-bucket)
as it utilizes that bucket via remote state.
This component can be deployed selectively to individual accounts with `is_organization_trail=false`, or deployed
once to the management account with `is_organization_trail=true` to create an organization trail covering all accounts.
## Usage
**Stack Level**: Global
The following is an example snippet for how to use this component:
(`gbl-root.yaml`)
```yaml
components:
  terraform:
    cloudtrail:
      vars:
        enabled: true
        cloudtrail_bucket_environment_name: "ue1"
        cloudtrail_bucket_stage_name: "audit"
        cloudwatch_logs_retention_in_days: 730
        is_organization_trail: true
        # Encrypt the CloudWatch Log Group with the CloudTrail KMS key
        kms_key_enabled: true
```
## Variables
### Required Variables
`cloudtrail_bucket_stage_name` (`string`) required
The stage name where the CloudTrail bucket is provisioned
`region` (`string`) required
AWS Region
### Optional Variables
`account_map` optional
Static account map used when account_map_enabled is false.
Provides account name to account ID mapping without requiring the account-map component.
**Type:**
```hcl
object({
  full_account_map           = map(string)
  audit_account_account_name = optional(string, "")
  root_account_account_name  = optional(string, "")
})
```
**Default value:**
```hcl
{
  "audit_account_account_name": "",
  "full_account_map": {},
  "root_account_account_name": ""
}
```
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_enabled` (`bool`) optional
When true, uses the account-map component to look up account IDs dynamically.
When false, uses the static account_map variable instead. Set to false when
using Atmos Auth profiles and static account mappings.
**Default value:** `true`
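When `account_map_enabled` is `false`, a static mapping can be supplied via the `account_map` variable instead of looking up the account-map component. A sketch (account names and IDs are illustrative):

```yaml
components:
  terraform:
    cloudtrail:
      vars:
        enabled: true
        account_map_enabled: false
        account_map:
          full_account_map:
            root: "111111111111" # illustrative account IDs
            audit: "222222222222"
          root_account_account_name: "root"
          audit_account_account_name: "audit"
```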
`audit_access_enabled` (`bool`) optional
If `true`, allows the Audit account access to read CloudTrail logs directly from S3. This is a requirement for running Athena queries in the Audit account.
**Default value:** `false`
`cloudwatch_logs_retention_in_days` (`number`) optional
Number of days to retain logs for. CIS recommends 365 days. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. Set to 0 to keep logs indefinitely.
**Default value:** `365`
`enable_log_file_validation` (`bool`) optional
Specifies whether log file integrity validation is enabled. Creates signed digest for validated contents of logs
**Default value:** `true`
`enable_logging` (`bool`) optional
Enable logging for the trail
**Default value:** `true`
`include_global_service_events` (`bool`) optional
Specifies whether the trail is publishing events from global services such as IAM to the log files
**Default value:** `true`
`is_multi_region_trail` (`bool`) optional
Specifies whether the trail is created in the current region or in all regions
**Default value:** `true`
`is_organization_trail` (`bool`) optional
Specifies whether the trail is created for all accounts in an organization in AWS Organizations, or only for the current AWS account.
The default is false, and cannot be true unless the call is made on behalf of an AWS account that is the management account
for an organization in AWS Organizations.
**Default value:** `false`
`kms_abac_statements` optional
A list of ABAC statements which are placed in an IAM policy.
Each statement must have the following attributes:
- `sid` (optional): A unique identifier for the statement.
- `effect`: The effect of the statement. Valid values are `Allow` and `Deny`.
- `actions`: A list of actions to allow or deny.
- `principals`: A map of principal type (e.g. `AWS`) to a list of principal identifiers.
- `conditions`: A list of conditions to evaluate when the statement is applied.
**Type:**
```hcl
list(object({
  sid        = optional(string)
  effect     = string
  actions    = list(string)
  principals = map(list(string))
  conditions = list(object({
    test     = string
    variable = string
    values   = list(string)
  }))
}))
```
**Default value:** `[ ]`
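Expressed as stack YAML, a single statement granting decrypt access conditioned on a principal tag might look like the following. The `sid`, account ID, and tag key/value are illustrative assumptions:

```yaml
kms_abac_statements:
  - sid: "AllowTaggedTeamDecrypt" # illustrative
    effect: "Allow"
    actions:
      - "kms:Decrypt"
      - "kms:DescribeKey"
    principals:
      AWS:
        - "arn:aws:iam::111111111111:root"
    conditions:
      - test: "StringEquals"
        variable: "aws:PrincipalTag/Team"
        values: ["security"]
```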
`kms_key_alias` (`string`) optional
The alias for the KMS key. If not set, the alias will be set to `alias/<module.this.id>`
**Default value:** `null`
`kms_key_enabled` (`bool`) optional
If `true`, encrypts the CloudWatch Log Group with the CloudTrail KMS key and adds the required KMS key policy for CloudWatch Logs
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
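As an illustrative sketch (the descriptor name `stack` and the choice of labels are arbitrary), a descriptor that joins the namespace, environment, and stage labels could be declared as:

```hcl
descriptor_formats = {
  stack = {
    # Terraform format string applied to the normalized label values below
    format = "%v-%v-%v"
    labels = ["namespace", "environment", "stage"]
  }
}
```

With `namespace = "eg"`, `environment = "uw2"`, and `stage = "prod"`, the `descriptors` output would contain a `stack` entry of `eg-uw2-prod`.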
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
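To illustrate the Pascal Case combination described above (the label values here are hypothetical):

```hcl
label_value_case = "title"
delimiter        = ""
# With namespace = "eg", stage = "prod", and name = "app",
# the generated id becomes "EgProdApp" instead of "eg-prod-app".
```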
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cloudtrail_arn`
CloudTrail ARN
`cloudtrail_home_region`
The region in which CloudTrail was created
`cloudtrail_id`
CloudTrail ID
`cloudtrail_logs_log_group_arn`
CloudTrail Logs log group ARN
`cloudtrail_logs_log_group_name`
CloudTrail Logs log group name
`cloudtrail_logs_role_arn`
CloudTrail Logs role ARN
`cloudtrail_logs_role_name`
CloudTrail Logs role name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 5.30.0, < 6.0.0`
### Providers
- `aws`, version: `>= 5.30.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | Remote state lookup for the account-map component, with fallback to a static mapping. When `account_map_enabled` is true, it performs a remote state lookup to retrieve account mappings from the account-map component, using the global tenant/environment/stage from the `iam_roles` module. When false, it bypasses the remote state lookup (`bypass = true`) and returns the static `account_map` variable instead, allowing the component to function without the account-map dependency.
`cloudtrail` | 0.24.0 | [`cloudposse/cloudtrail/aws`](https://registry.terraform.io/modules/cloudposse/cloudtrail/aws/0.24.0) | n/a
`cloudtrail_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kms_key_cloudtrail` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudwatch_log_group.cloudtrail_cloudwatch_logs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group) (resource)
- [`aws_iam_policy.cloudtrail_cloudwatch_logs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role.cloudtrail_cloudwatch_logs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role_policy_attachment.cloudtrail_cloudwatch_logs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.cloudtrail_cloudwatch_logs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.cloudtrail_cloudwatch_logs_assume_role`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.kms_key_cloudtrail`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## cloudtrail-bucket
This component is responsible for provisioning a bucket for storing
CloudTrail logs for auditing purposes. It's expected to be used alongside
[the `cloudtrail` component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/cloudtrail).
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component. It's suggested to apply this component to only the centralized
`audit` account.
```yaml
components:
terraform:
cloudtrail-bucket:
vars:
enabled: true
name: "cloudtrail"
noncurrent_version_expiration_days: 180
noncurrent_version_transition_days: 30
standard_transition_days: 60
glacier_transition_days: 180
expiration_days: 365
```
### S3 Object Lock Configuration
For PCI compliance, you can enable S3 Object Lock to store objects using a write-once-read-many (WORM) model.
> **Important**: S3 Object Lock can only be enabled at bucket creation time. It cannot be added to existing buckets.
```yaml
components:
terraform:
cloudtrail-bucket:
vars:
enabled: true
name: "cloudtrail"
object_lock_configuration:
mode: "GOVERNANCE" # Valid values: GOVERNANCE, COMPLIANCE
days: 365
years: null
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`access_log_bucket_name` (`string`) optional
If `var.create_access_log_bucket` is false, this is the name of the S3 bucket to which S3 access logs will be sent.
**Default value:** `""`
`acl` (`string`) optional
The canned ACL to apply. We recommend log-delivery-write for
compatibility with AWS services. Valid values are private, public-read,
public-read-write, aws-exec-read, authenticated-read, bucket-owner-read,
bucket-owner-full-control, log-delivery-write.
Due to https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-faq.html, this
will need to be set to `private` during creation; you can update it normally afterward.
**Default value:** `"log-delivery-write"`
`create_access_log_bucket` (`bool`) optional
Whether or not to create an access log bucket.
**Default value:** `false`
`expiration_days` (`number`) optional
Number of days after which to expunge the objects
**Default value:** `90`
`force_destroy` (`bool`) optional
A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable
**Default value:** `false`
`glacier_transition_days` (`number`) optional
Number of days after which to move the data to the glacier storage tier
**Default value:** `60`
`lifecycle_rule_enabled` (`bool`) optional
Enable lifecycle events on this bucket
**Default value:** `true`
`noncurrent_version_transition_days` (`number`) optional
Specifies when noncurrent object versions transition to a different storage tier
**Default value:** `30`
`object_lock_configuration` optional
A configuration for S3 object locking. With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
**Type:**
```hcl
object({
mode = string # Valid values are GOVERNANCE and COMPLIANCE.
days = optional(number) # Retention period in days. Specify either days or years, not both.
years = optional(number) # Retention period in years. Specify either days or years, not both.
})
```
**Default value:** `null`
`policy` (`string`) optional
A valid bucket policy JSON document. This policy will be merged with the
default CloudTrail bucket policies (AWSCloudTrailAclCheck and AWSCloudTrailWrite).
**Default value:** `""`
`sse_algorithm` (`string`) optional
The server-side encryption algorithm to use. Valid values are AES256, aws:kms, or aws:kms:dsse
**Default value:** `"AES256"`
`standard_transition_days` (`number`) optional
Number of days to persist in the standard storage tier before moving to the infrequent access tier
**Default value:** `30`
`versioning_enabled` (`bool`) optional
Whether to enable versioning. Versioning is a means of keeping multiple variants of an object in the same bucket
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cloudtrail_bucket_arn`
CloudTrail S3 bucket ARN
`cloudtrail_bucket_domain_name`
CloudTrail S3 bucket domain name
`cloudtrail_bucket_id`
CloudTrail S3 bucket ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cloudtrail_s3_bucket` | 0.32.0 | [`cloudposse/cloudtrail-s3-bucket/aws`](https://registry.terraform.io/modules/cloudposse/cloudtrail-s3-bucket/aws/0.32.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## cloudwatch-logs
This component is responsible for creation of CloudWatch Log Streams and Log Groups.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
cloudwatch-logs:
vars:
enabled: true
name: cloudwatch-logs
retention_in_days: 15
stream_names:
- app-1
- app-2
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`additional_permissions` (`list(string)`) optional
Additional permissions granted to assumed role
**Default value:**
```hcl
[
"logs:CreateLogStream",
"logs:DeleteLogStream"
]
```
`principals` (`map(any)`) optional
Map of service name as key and a list of ARNs to allow assuming the role as value. (e.g. map(`AWS`, list(`arn:aws:iam:::role/admin`)))
**Default value:**
```hcl
{
"Service": [
"ecs.amazonaws.com"
]
}
```
`retention_in_days` (`string`) optional
Number of days you want to retain log events in the log group
**Default value:** `"30"`
`stream_names` (`list(string)`) optional
Names of streams
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`log_group_arn`
ARN of the log group
`log_group_name`
Name of log group
`role_arn`
ARN of role to assume
`role_name`
Name of role to assume
`stream_arns`
ARNs of the log streams
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 3.0, < 6.0.0`
### Providers
- `aws`, version: `>= 3.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kms_key_logs` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`logs` | 0.6.9 | [`cloudposse/cloudwatch-logs/aws`](https://registry.terraform.io/modules/cloudposse/cloudwatch-logs/aws/0.6.9) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.kms`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## cognito
This component is responsible for provisioning and managing AWS Cognito resources.
This component can provision the following resources:
- [Cognito User Pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html)
- [Cognito User Pool Clients](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-client-apps.html)
- [Cognito User Pool Domains](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-add-custom-domain.html)
- [Cognito User Pool Identity Providers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-identity-provider.html)
- [Cognito User Pool Resource Servers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-define-resource-servers.html)
- [Cognito User Pool User Groups](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html)
- [Cognito Risk Configuration](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-risk-configuration.html)
## Usage
**Stack Level**: Global
Here's an example snippet for how to use this component:
```yaml
components:
terraform:
cognito:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
# The full name of the User Pool will be: <namespace>-<environment>-<stage>-<name>
name: cognito
schemas:
- name: "email"
attribute_data_type: "String"
developer_only_attribute: false
mutable: false
required: true
```
### Risk Configuration Examples
#### Basic Account Takeover Protection
```yaml
components:
terraform:
cognito:
vars:
enabled: true
name: cognito
# Configure account takeover risk protection
account_takeover_risk_configuration:
notify_configuration:
block_email:
html_body: "Your account has been blocked due to suspicious activity."
subject: "Account Security Alert"
text_body: "Your account has been blocked due to suspicious activity."
from: "security@[company].com"
# Replace with your SES verified identity ARN for the correct region/account
source_arn: "" # e.g., arn:aws:ses:REGION:ACCOUNT:identity/email@domain.com
actions:
high_action:
event_action: "BLOCK"
notify: true
medium_action:
event_action: "MFA_REQUIRED"
notify: true
low_action:
event_action: "NO_ACTION"
notify: false
```
#### Compromised Credentials Detection
```yaml
components:
terraform:
cognito:
vars:
enabled: true
name: cognito
# Configure compromised credentials detection
compromised_credentials_risk_configuration:
event_filter: ["SIGN_IN", "PASSWORD_CHANGE"]
actions:
event_action: "BLOCK"
```
#### IP-Based Risk Exceptions
```yaml
components:
terraform:
cognito:
vars:
enabled: true
name: cognito
# Configure IP-based risk exceptions
risk_exception_configuration:
blocked_ip_range_list:
- "192.0.2.0/24" # Block this IP range
- "203.0.113.0/24" # Block this IP range
skipped_ip_range_list:
- "10.0.0.0/8" # Skip risk detection for internal network
- "172.16.0.0/12" # Skip risk detection for private network
```
#### Client-Specific Risk Configuration
```yaml
components:
terraform:
cognito:
vars:
enabled: true
name: cognito
clients:
- name: "web-app"
generate_secret: false
- name: "mobile-app"
generate_secret: true
# Configure risk settings for specific clients
# Note: client_id must be the actual App Client ID, not the client name
risk_configurations:
- client_id: "1a2b3c4d5e6f7g8h9i0j1k2l3m" # Actual App Client ID for web-app
account_takeover_risk_configuration:
actions:
high_action:
event_action: "BLOCK"
notify: false
medium_action:
event_action: "MFA_IF_CONFIGURED"
notify: false
low_action:
event_action: "NO_ACTION"
notify: false
- client_id: "9z8y7x6w5v4u3t2s1r0q9p8o7n" # Actual App Client ID for mobile-app
compromised_credentials_risk_configuration:
event_filter: ["SIGN_IN"]
actions:
event_action: "BLOCK"
```
**Important:** The `client_id` field requires the actual AWS Cognito App Client ID, not the client name. To reference the App Client ID from the module outputs, use `module.cognito.client_ids_map["client-name"]` where `client-name` is the name you defined in the `clients` configuration.
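For example, to reference the App Client ID for the `web-app` client defined above from Terraform code (rather than hardcoding it), you might write:

```hcl
# Look up the actual App Client ID by the client name defined in `clients`
client_id = module.cognito.client_ids_map["web-app"]
```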
#### Comprehensive Risk Configuration
```yaml
components:
terraform:
cognito:
vars:
enabled: true
name: cognito
# Comprehensive risk configuration with all features
risk_configurations:
- # Global User Pool configuration
account_takeover_risk_configuration:
notify_configuration:
block_email:
html_body: "Security AlertYour account has been temporarily blocked due to suspicious activity."
subject: "Account Security Alert - Action Required"
text_body: "Your account has been temporarily blocked due to suspicious activity. Please contact support."
mfa_email:
html_body: "Additional Verification RequiredWe detected unusual activity and require additional verification."
subject: "Additional Verification Required"
text_body: "We detected unusual activity and require additional verification."
no_action_email:
html_body: "Security NoticeWe detected some unusual activity but no action is required."
subject: "Security Notice"
text_body: "We detected some unusual activity but no action is required."
from: "security@[company].com"
reply_to: "noreply@[company].com"
# Replace with your SES verified identity ARN for the correct region/account
source_arn: "" # e.g., arn:aws:ses:REGION:ACCOUNT:identity/email@domain.com
actions:
high_action:
event_action: "BLOCK"
notify: true
medium_action:
event_action: "MFA_REQUIRED"
notify: true
low_action:
event_action: "NO_ACTION"
notify: true
compromised_credentials_risk_configuration:
event_filter: ["SIGN_IN", "PASSWORD_CHANGE", "SIGN_UP"]
actions:
event_action: "BLOCK"
risk_exception_configuration:
blocked_ip_range_list:
- "192.0.2.0/24"
- "203.0.113.0/24"
skipped_ip_range_list:
- "10.0.0.0/8"
- "172.16.0.0/12"
- "192.168.0.0/16"
```
#### Using Module Outputs for Client IDs
When you need to reference App Client IDs from the same module (e.g., in a separate resource or data source), use the `client_ids_map` output:
```yaml
# Example: Using the cognito module's client_ids_map output in another resource
components:
terraform:
cognito:
vars:
enabled: true
name: cognito
clients:
- name: "web-app"
generate_secret: false
- name: "mobile-app"
generate_secret: true
# Separate resource that needs the client IDs
cognito-risk-config:
vars:
user_pool_id: "${module.cognito.id}"
web_app_client_id: "${module.cognito.client_ids_map['web-app']}"
mobile_app_client_id: "${module.cognito.client_ids_map['mobile-app']}"
```
## Variables
### Required Variables
Set to `true` if only the administrator is allowed to create user profiles. Set to `false` if users can sign themselves up via an app
**Default value:** `true`
The message template for email messages. Must contain `{username}` and `{####}` placeholders, for username and temporary password, respectively
**Default value:** `"{username}, your temporary password is {####}"`
The message template for SMS messages. Must contain `{username}` and `{####}` placeholders, for username and temporary password, respectively
**Default value:** `"Your username is {username} and temporary password is {####}"`
`alias_attributes` (`list(string)`) optional
Attributes supported as an alias for this user pool. Possible values: phone_number, email, or preferred_username. Conflicts with `username_attributes`
**Default value:** `null`
Time limit, between 5 minutes and 1 day, after which the access token is no longer valid and cannot be used. This value will be overridden if you have entered a value in `token_validity_units`.
**Default value:** `60`
List of authentication flows (ADMIN_NO_SRP_AUTH, CUSTOM_AUTH_FLOW_ONLY, USER_PASSWORD_AUTH)
**Default value:** `[ ]`
`client_generate_secret` (`bool`) optional
Should an application secret be generated
**Default value:** `true`
`client_id_token_validity` (`number`) optional
Time limit, between 5 minutes and 1 day, after which the ID token is no longer valid and cannot be used. Cannot be greater than the refresh token expiration. This value will be overridden if you have entered a value in `token_validity_units`.
**Default value:** `60`
`client_logout_urls` (`list(string)`) optional
List of allowed logout URLs for the identity providers
**Default value:** `[ ]`
`client_name` (`string`) optional
The name of the application client
**Default value:** `null`
Choose which errors and responses are returned by Cognito APIs during authentication, account confirmation, and password recovery when the user does not exist in the user pool. When set to ENABLED and the user does not exist, authentication returns an error indicating either the username or password was incorrect, and account confirmation and password recovery return a response indicating a code was sent to a simulated destination. When set to LEGACY, those APIs will return a UserNotFoundException exception if the user does not exist in the user pool.
**Default value:** `null`
The time limit, in days, that refresh tokens are valid for. Must be between 60 minutes and 3650 days. This value will be overridden if you have entered a value in `token_validity_units`
**Default value:** `30`
List of provider names for the identity providers that are supported on this client
**Default value:** `[ ]`
`client_token_validity_units` (`any`) optional
Configuration block for the units in which the validity times are represented. Valid values for the following arguments are: `seconds`, `minutes`, `hours` or `days`.
**Default value:**
```hcl
{
"access_token": "minutes",
"id_token": "minutes",
"refresh_token": "days"
}
```
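For example, to issue short-lived access and ID tokens measured in minutes while keeping longer-lived refresh tokens, the validity variables can be combined as follows. This is a sketch: `client_access_token_validity` and `client_refresh_token_validity` are assumed variable names following the `client_*` naming convention documented above, so verify them against your module version.

```yaml
components:
  terraform:
    cognito:
      vars:
        enabled: true
        name: cognito
        # Access and ID tokens expire after 15 minutes; refresh tokens after 7 days.
        # Variable names are assumptions based on the client_* convention above.
        client_access_token_validity: 15
        client_id_token_validity: 15
        client_refresh_token_validity: 7
        client_token_validity_units:
          access_token: "minutes"
          id_token: "minutes"
          refresh_token: "days"
```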
(Optional) When active, DeletionProtection prevents accidental deletion of your user pool. Before you can delete a user pool that you have protected against deletion, you must deactivate this feature. Valid values are ACTIVE and INACTIVE. The default value is INACTIVE.
**Default value:** `"INACTIVE"`
`device_configuration` (`map(any)`) optional
The configuration for the user pool's device tracking
**Default value:** `{ }`
Instruct Cognito to either use its built-in functionality or Amazon SES to send out emails. Allowed values: `COGNITO_DEFAULT` or `DEVELOPER`
**Default value:** `"COGNITO_DEFAULT"`
Sender's email address, or the sender's display name with their email address (e.g. `john@example.com`, `John Smith <john@example.com>`, or `"John Smith Ph.D." <john@example.com>`). Escaped double quotes are required around display names that contain certain characters, as specified in RFC 5322
**Default value:** `null`
The Amazon Resource Name of Key Management Service Customer master keys. Amazon Cognito uses the key to encrypt codes and temporary passwords sent to CustomEmailSender and CustomSMSSender.
**Default value:** `null`
If `true`, and if `mfa_configuration` is also enabled, multi-factor authentication by software TOTP generator will be enabled
**Default value:** `false`
`string_schemas` (`list(any)`) optional
A container with the string schema attributes of a user pool. Maximum of 50 attributes
**Default value:** `[ ]`
`user_group_description` (`string`) optional
The description of the user group
**Default value:** `null`
`user_group_name` (`string`) optional
The name of the user group
**Default value:** `null`
`user_group_precedence` (`number`) optional
The precedence of the user group
**Default value:** `null`
`user_group_role_arn` (`string`) optional
The ARN of the IAM role to be associated with the user group
**Default value:** `null`
`user_groups` (`list(any)`) optional
User groups configuration
**Default value:** `[ ]`
`user_pool_add_ons` (`map(any)`) optional
Configuration block for user pool add-ons to enable user pool advanced security mode features
**Default value:** `{ }`
The mode for advanced security, must be one of `OFF`, `AUDIT` or `ENFORCED`
**Default value:** `null`
`user_pool_name` (`string`) optional
User pool name. If not provided, the name will be generated from the context
**Default value:** `null`
`username_attributes` (`list(string)`) optional
Specifies whether email addresses or phone numbers can be specified as usernames when a user signs up. Conflicts with `alias_attributes`
**Default value:** `null`
`username_configuration` (`map(any)`) optional
The Username Configuration. Setting `case_sensitive` specifies whether username case sensitivity will be applied for all users in the user pool through Cognito APIs
**Default value:** `{ }`
The subject line for the email message template for sending a confirmation link to the user
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
The ARN of the User Pool
`client_ids`
The ids of the User Pool clients
`client_ids_map`
The IDs map of the User Pool clients
`client_secrets`
The client secrets of the User Pool clients
`client_secrets_map`
The client secrets map of the User Pool clients
`creation_date`
The date the User Pool was created
`domain_app_version`
The app version for the domain
`domain_aws_account_id`
The AWS account ID for the User Pool domain
`domain_cloudfront_distribution_arn`
The URL of the CloudFront distribution
`domain_s3_bucket`
The S3 bucket where the static files for the domain are stored
`endpoint`
The endpoint name of the User Pool. Example format: `cognito-idp.REGION.amazonaws.com/xxxx_yyyyy`
`id`
The ID of the User Pool
`last_modified_date`
The date the User Pool was last modified
`resource_servers_scope_identifiers`
A list of all scopes configured in the format identifier/scope_name
`risk_configuration_ids`
The IDs of the risk configurations
`risk_configuration_ids_map`
Map of risk configuration IDs by client ID (or 'global' for User Pool-wide)
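When another component needs one of these outputs, Atmos can read it from the deployed component's state. Below is a minimal sketch using the Atmos `!terraform.output` YAML function; the consuming component `my-app` and its variable names are hypothetical.

```yaml
components:
  terraform:
    # Hypothetical component that consumes the cognito outputs
    my-app:
      vars:
        # Read outputs from the deployed cognito component in the same stack
        user_pool_id: !terraform.output cognito id
        user_pool_endpoint: !terraform.output cognito endpoint
```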
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.51.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.51.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cognito_identity_provider.identity_provider`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_identity_provider) (resource)
- [`aws_cognito_resource_server.resource`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_resource_server) (resource)
- [`aws_cognito_risk_configuration.risk_config`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_risk_configuration) (resource)
- [`aws_cognito_user_group.main`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user_group) (resource)
- [`aws_cognito_user_pool.pool`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user_pool) (resource)
- [`aws_cognito_user_pool_client.client`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user_pool_client) (resource)
- [`aws_cognito_user_pool_domain.domain`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user_pool_domain) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_region.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) (data source)
---
## config-bucket
This module creates an S3 bucket suitable for storing `AWS Config` data.
It implements a configurable log retention policy, which allows you to efficiently manage logs across different storage
classes (_e.g._ `Glacier`) and ultimately expire the data altogether.
It enables server-side encryption by default.
It blocks public access to the bucket by default.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component. It's suggested to apply this component to only the centralized
`audit` account.
```yaml
components:
terraform:
config-bucket:
vars:
enabled: true
name: "config"
noncurrent_version_expiration_days: 180
noncurrent_version_transition_days: 30
standard_transition_days: 60
glacier_transition_days: 180
expiration_days: 365
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`access_log_bucket_name` (`string`) optional
Name of the S3 bucket where S3 access logs will be sent
**Default value:** `""`
`acl` (`string`) optional
The canned ACL to apply. We recommend log-delivery-write for compatibility with AWS services
**Default value:** `"log-delivery-write"`
`enable_glacier_transition` (`bool`) optional
Enables the transition to AWS Glacier (note that this can incur unnecessary costs for huge amounts of small files)
**Default value:** `true`
`expiration_days` (`number`) optional
Number of days after which to expunge the objects
**Default value:** `90`
`glacier_transition_days` (`number`) optional
Number of days after which to move the data to the glacier storage tier
**Default value:** `60`
`lifecycle_rule_enabled` (`bool`) optional
Enable lifecycle events on this bucket
**Default value:** `true`
`noncurrent_version_transition_days` (`number`) optional
Specifies when noncurrent object versions transition to a different storage tier
**Default value:** `30`
`standard_transition_days` (`number`) optional
Number of days to persist in the standard storage tier before moving to the infrequent access tier
**Default value:** `30`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`config_bucket_arn`
Config bucket ARN
`config_bucket_domain_name`
Config bucket FQDN
`config_bucket_id`
Config bucket ID
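These outputs are typically consumed by whichever component enables the AWS Config recorder, so that recordings are delivered to this centralized bucket. A sketch using the Atmos `!terraform.output` YAML function; the `config` component name and its `s3_bucket_id` variable are assumptions, so check your component's actual inputs.

```yaml
components:
  terraform:
    # Hypothetical component that sets up the AWS Config recorder
    config:
      vars:
        enabled: true
        # Hypothetical variable; reads the bucket ID from the deployed config-bucket component
        s3_bucket_id: !terraform.output config-bucket config_bucket_id
```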
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`config_bucket` | 1.0.2 | [`cloudposse/config-storage/aws`](https://registry.terraform.io/modules/cloudposse/config-storage/aws/1.0.2) | n/a
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## datadog-child-organization
Terraform component to provision a Datadog child organization using the Datadog provider.
Datadog API/App keys and API URL are sourced via the `aws-datadog-credentials` component module; you only need to provide the child organization name and AWS region.
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that our Terraform infrastructure code remains stable and well-tested.
---
## Usage
**Stack Level**: Regional or Global
Example Atmos component configuration:
```yaml
components:
terraform:
aws-datadog-child-organization:
vars:
enabled: true
region: us-east-1
organization_name: your-child-organization-name
```
## Variables
### Required Variables
`organization_name` (`string`) required
Datadog organization name
`region` (`string`) required
AWS Region
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`api_key`
Information about Datadog API key
`application_key`
Datadog application key with its associated metadata
`description`
Description of the organization
`id`
Organization ID
`public_id`
Public ID of the organization
`settings`
Organization settings
`user`
Information about organization users
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog_child_organization` | 1.7.0 | [`cloudposse/platform/datadog//modules/child_organization`](https://registry.terraform.io/modules/cloudposse/platform/datadog/modules/child_organization/1.7.0) | n/a
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://github.com/cloudposse-terraform-components/aws-datadog-credentials/tree/v1.535.13/src/modules/datadog_keys) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## datadog-credentials
This component is responsible for provisioning SSM or ASM entries for Datadog API keys.
This component requires that the Datadog API and APP secret keys are available in the `var.datadog_secrets_source_store_account`
account, in AWS SSM Parameter Store, at the `/datadog/%v/datadog_api_key` and `/datadog/%v/datadog_app_key` paths (where `%v` is the
corresponding account name).
This component copies the keys from the source account (e.g. `auto`) to the destination account where it is
deployed. Copying the keys this way solves a few problems:
1. The keys are needed in each account where Datadog resources will be deployed.
1. The keys might need to differ per account, per tenant, or for any subset of accounts.
1. If the keys need to be rotated, they can be rotated from a single management account.
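To make the copy pattern concrete, here is a hypothetical sketch of the key layout (the `default` key name matches the component defaults and can be changed per account; the values shown are placeholders):

```yaml
# Hypothetical sketch of the source-to-destination key copy.
source:            # e.g. the `auto` account, where keys are seeded and rotated centrally
  /datadog/default/datadog_api_key: "<api key>"
  /datadog/default/datadog_app_key: "<app key>"
destination:       # the account where this component is deployed
  /datadog/datadog_api_key: "<api key>"   # copied from the source account
  /datadog/datadog_app_key: "<app key>"   # copied from the source account
```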
This component also includes a submodule that other components can use to quickly configure a Datadog provider.
See Datadog's [documentation about provisioning keys](https://docs.datadoghq.com/account_management/api-app-keys) for
more information.
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
**Stack Level**: Global
:::warning
This is subject to change from a **Global** to a **Regional** stack level, because the keys are needed
in each region where we deploy Datadog resources. Keeping regional copies avoids configuring extra AWS providers,
which would have to be dynamic, and Terraform does not support dynamically configured providers.
:::
This component should be deployed to every account where you want to provision Datadog resources. This is usually every
account except `root` and `identity`.
Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts whose
AWS metrics you want to track with Datadog. In this example we use the key paths `/datadog/%v/datadog_api_key` and
`/datadog/%v/datadog_app_key`, where `%v` is `default`; this can be changed through the `datadog_app_secret_key` and
`datadog_api_secret_key` variables. The output keys in the deployed account will be `/datadog/datadog_api_key` and
`/datadog/datadog_app_key`.
```yaml
components:
terraform:
datadog-configuration:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: datadog-configuration
datadog_secrets_store_type: SSM
datadog_secrets_source_store_account_stage: auto
datadog_secrets_source_store_account_region: "us-east-2"
```
Here is a snippet of using the `datadog_keys` submodule:
```terraform
module "datadog_configuration" {
source = "../datadog-configuration/modules/datadog_keys"
enabled = true
context = module.this.context
}
provider "datadog" {
api_key = module.datadog_configuration.datadog_api_key
app_key = module.datadog_configuration.datadog_app_key
api_url = module.datadog_configuration.datadog_api_url
validate = local.enabled
}
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`datadog_api_secret_key` (`string`) optional
The name of the Datadog API secret
**Default value:** `"default"`
The format string (%v will be replaced by the var.datadog_api_secret_key) for the key of the Datadog API secret in the source account
**Default value:** `"/datadog/%v/datadog_api_key"`
The format string (%v will be replaced by the var.datadog_api_secret_key) for the key of the Datadog API secret in the target account
**Default value:** `"/datadog/datadog_api_key"`
`datadog_app_secret_key` (`string`) optional
The name of the Datadog APP secret
**Default value:** `"default"`
The format string (%v will be replaced by the var.datadog_app_secret_key) for the key of the Datadog APP secret in the source account
**Default value:** `"/datadog/%v/datadog_app_key"`
The format string (%v will be replaced by the var.datadog_app_secret_key) for the key of the Datadog APP secret in the target account
**Default value:** `"/datadog/datadog_app_key"`
Tenant holding Secret Store for Datadog API and app keys.
**Default value:** `"core"`
`datadog_secrets_store_type` (`string`) optional
Secret Store type for Datadog API and app keys. Valid values: `SSM`, `ASM`
**Default value:** `"SSM"`
`datadog_site_url` (`string`) optional
The Datadog site URL; see https://docs.datadoghq.com/getting_started/site/
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
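As a sketch of how `descriptor_formats` is used (the component name and label values here are hypothetical), a stack configuration that emits a `stack` descriptor could look like this:

```yaml
components:
  terraform:
    example-component:   # hypothetical component name
      vars:
        namespace: eg
        stage: prod
        descriptor_formats:
          stack:
            format: "%v-%v"
            labels: ["namespace", "stage"]
```

With these values, the `descriptors` output map would contain an entry like `stack = "eg-prod"`, since the normalized `namespace` and `stage` labels are passed to `format()` in order.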
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`datadog_api_key_location`
The Datadog API key in the secrets store
`datadog_api_url`
The URL of the Datadog API
`datadog_app_key_location`
The Datadog APP key location in the secrets store
`datadog_secrets_store_type`
The type of the secrets store to use for Datadog API and APP keys
`datadog_site`
The Datadog site to use
`region`
The region where the keys will be created
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles`](https://github.com/cloudposse-terraform-components/aws-account-map/tree/v1.537.1/src/modules/iam-roles) | n/a
`iam_roles_datadog_secrets` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles`](https://github.com/cloudposse-terraform-components/aws-account-map/tree/v1.537.1/src/modules/iam-roles) | n/a
`store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_secretsmanager_secret.datadog_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret) (data source)
- [`aws_secretsmanager_secret.datadog_app_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret) (data source)
- [`aws_secretsmanager_secret_version.datadog_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret_version) (data source)
- [`aws_secretsmanager_secret_version.datadog_app_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret_version) (data source)
- [`aws_ssm_parameter.datadog_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.datadog_app_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## datadog_keys
A submodule that other components can use to quickly configure the Datadog provider.
## Usage
```hcl
module "datadog_configuration" {
source = "../datadog-configuration/modules/datadog_keys"
enabled = true
context = module.this.context
}
provider "datadog" {
api_key = module.datadog_configuration.datadog_api_key
app_key = module.datadog_configuration.datadog_app_key
api_url = module.datadog_configuration.datadog_api_url
validate = local.enabled
}
```
## Variables
### Required Variables
### Optional Variables
`global_environment_name` (`string`) optional
Global environment name
**Default value:** `"gbl"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`api_key_ssm_arn`
Datadog API Key SSM ARN
`datadog_api_key`
Datadog API Key
`datadog_api_key_location`
The Datadog API key in the secrets store
`datadog_api_url`
Datadog API URL
`datadog_app_key`
Datadog APP Key
`datadog_app_key_location`
The Datadog APP key location in the secrets store
`datadog_secrets_store_type`
The type of the secrets store to use for Datadog API and APP keys
`datadog_site`
Datadog Site
`datadog_tags`
The Context Tags in datadog tag format (list of strings formatted as 'key:value')
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`always` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`datadog_configuration` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles`](https://github.com/cloudposse-terraform-components/aws-account-map/tree/v1.537.1/src/modules/iam-roles) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils_example_complete` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.datadog_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.datadog_app_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## datadog-integration
This component is responsible for provisioning Datadog AWS integrations. It depends on the `datadog-configuration`
component to get the Datadog API keys.
See Datadog's [documentation about provisioning keys](https://docs.datadoghq.com/account_management/api-app-keys) for
more information.
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
**Stack Level**: Global
Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts whose
AWS metrics you want to track with Datadog.
```yaml
components:
terraform:
datadog-integration:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
```
## Variables
### Required Variables
An object (in the form `{"namespace1": true/false, "namespace2": true/false}`) that enables or disables metric collection for specific AWS namespaces for this AWS account only
**Default value:** `{ }`
Enable Datadog Cloud Security Posture Management scanning of your AWS account.
See [announcement](https://www.datadoghq.com/product/cloud-security-management/cloud-security-posture-management/) for details.
**Default value:** `null`
`datadog_aws_account_id` (`string`) optional
The AWS account ID Datadog's integration servers use for all integrations
**Default value:** `"464622532012"`
`excluded_regions` (`list(string)`) optional
An array of AWS regions to exclude from metrics collection
**Default value:** `[ ]`
`filter_tags` (`list(string)`) optional
An array of EC2 tags (in the form `key:value`) that defines a filter Datadog uses when collecting metrics from EC2. Wildcards such as `?` (for single characters) and `*` (for multiple characters) can also be used
**Default value:** `[ ]`
`host_tags` (`list(string)`) optional
An array of tags (in the form `key:value`) to add to all hosts and metrics reporting through this integration
**Default value:** `[ ]`
`included_regions` (`list(string)`) optional
An array of AWS regions to include in metrics collection
**Default value:** `[ ]`
`integrations` (`list(string)`) optional
List of AWS permission names to apply for different integrations (e.g. 'all', 'core')
**Default value:**
```hcl
[
"all"
]
```
`metrics_collection_enabled` (`bool`) optional
When enabled, a metric-by-metric crawl of the CloudWatch API pulls data and sends it
to Datadog. New metrics are pulled every ten minutes, on average.
**Default value:** `null`
`resource_collection_enabled` (`bool`) optional
Some Datadog products leverage information about how your AWS resources
(such as S3 Buckets, RDS snapshots, and CloudFront distributions) are configured.
When `resource_collection_enabled` is `true`, Datadog collects this information
by making read-only API calls into your AWS account.
**Default value:** `null`
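Putting several of these optional variables together, a hedged example stack configuration (the region, tag filters, and tag values are assumptions for illustration) might look like:

```yaml
components:
  terraform:
    datadog-integration:
      vars:
        enabled: true
        excluded_regions: ["us-west-1"]   # skip metrics collection in this region
        filter_tags: ["env:prod*"]        # only collect EC2 metrics from matching hosts
        host_tags: ["team:platform"]      # added to all hosts reporting through this integration
        integrations: ["all"]
        metrics_collection_enabled: true
```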
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`aws_account_id`
AWS Account ID of the IAM Role for the Datadog integration
`aws_role_name`
Name of the AWS IAM Role for the Datadog integration
`datadog_external_id`
Datadog integration external ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://github.com/cloudposse-terraform-components/aws-datadog-credentials/tree/v1.535.13/src/modules/datadog_keys) | n/a
`datadog_integration` | 2.1.1 | [`cloudposse/datadog-integration/aws`](https://registry.terraform.io/modules/cloudposse/datadog-integration/aws/2.1.1) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_regions.all`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/regions) (data source)
---
## datadog-lambda-forwarder
This component provisions all infrastructure required to deploy
[Datadog Lambda forwarders](https://github.com/DataDog/datadog-serverless-functions/tree/master/aws/logs_monitoring).
It depends on the `datadog-configuration` component to obtain the Datadog API keys.
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component:
```yaml
components:
terraform:
datadog-lambda-forwarder:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: datadog-lambda-forwarder
# Set `forwarder_rds_enabled` to `true` and configure `rds-enhanced-monitoring` Log Group when:
# 1. The account has RDS instances provisioned
# 2. RDS Enhanced Monitoring is enabled
# 3. CloudWatch Log Group `RDSOSMetrics` exists (it will be created by AWS automatically when RDS Enhanced Monitoring is enabled)
forwarder_rds_enabled: true
forwarder_log_enabled: true
forwarder_vpc_logs_enabled: true
cloudwatch_forwarder_log_groups:
rds-enhanced-monitoring:
name: "RDSOSMetrics"
filter_pattern: ""
eks-cluster:
# Use either `name` or `name_prefix` with `name_suffix`
# If `name_prefix` with `name_suffix` are used, the final `name` will be constructed using `name_prefix` + context + `name_suffix`,
# e.g. "/aws/eks/eg-ue2-prod-eks-cluster/cluster"
name_prefix: "/aws/eks/"
name_suffix: "eks-cluster/cluster"
filter_pattern: ""
transfer-sftp:
name: "/aws/transfer/s-xxxxxxxxxxxx"
filter_pattern: ""
```
Note for other regions: you need to deploy the `datadog-configuration` component in the respective region — the Datadog
configuration is moving to a regional implementation.
For example, if you usually deploy to `us-west-2` (and DD Configuration is `gbl`), deploy it to the new region and then
deploy the lambda forwarder.
```yaml
import:
- orgs/acme/plat/dev/_defaults
- mixins/region/us-east-1
- catalog/datadog/configuration
- catalog/datadog/lambda-forwarder
components:
terraform:
datadog-configuration:
vars:
datadog_secrets_store_type: SSM
datadog_secrets_source_store_account_stage: auto
datadog_secrets_source_store_account_region: "us-west-2"
datadog-lambda-forwarder:
vars:
datadog_configuration_environment: "use1"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`cloudwatch_forwarder_event_patterns` optional
Map of title to CloudWatch Event patterns to forward to Datadog. Event structure from here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html#CloudWatchEventsPatterns
Example:
```hcl
cloudwatch_forwarder_event_rules = {
"guardduty" = {
source = ["aws.guardduty"]
detail-type = ["GuardDuty Finding"]
}
"ec2-terminated" = {
source = ["aws.ec2"]
detail-type = ["EC2 Instance State-change Notification"]
detail = {
state = ["terminated"]
}
}
}
```
**Type:**
```hcl
map(object({
version = optional(list(string))
id = optional(list(string))
detail-type = optional(list(string))
source = optional(list(string))
account = optional(list(string))
time = optional(list(string))
region = optional(list(string))
resources = optional(list(string))
detail = optional(map(list(string)))
}))
```
**Default value:** `{ }`
`cloudwatch_forwarder_log_groups` optional
Map of CloudWatch Log Groups with a filter pattern that the Lambda forwarder will send logs from. For example: `{ mysql1 = { name = "/aws/rds/maincluster", filter_pattern = "" } }`
**Default value:** `{ }`
`context_tags` (`set(string)`) optional
List of context tags to add to each monitor
**Default value:**
```hcl
[
"namespace",
"tenant",
"environment",
"stage"
]
```
`context_tags_enabled` (`bool`) optional
Whether to add context tags to each monitor
**Default value:** `true`
`dd_api_key_kms_ciphertext_blob` (`string`) optional
CiphertextBlob stored in the `DD_KMS_API_KEY` environment variable and used by the Lambda function, along with the KMS key, to decrypt the Datadog API key
**Default value:** `""`
`dd_artifact_filename` (`string`) optional
The Datadog artifact filename minus extension
**Default value:** `"aws-dd-forwarder"`
`dd_forwarder_version` (`string`) optional
Version tag of Datadog lambdas to use. https://github.com/DataDog/datadog-serverless-functions/releases
**Default value:** `"3.116.0"`
`dd_module_name` (`string`) optional
The Datadog GitHub repository name
**Default value:** `"datadog-serverless-functions"`
`dd_tags_map` (`map(string)`) optional
A map of Datadog tags to apply to all logs forwarded to Datadog
**Default value:** `{ }`
`lambda_reserved_concurrent_executions` (`number`) optional
Amount of reserved concurrent executions for the Lambda function. A value of `0` disables the Lambda from being triggered and `-1` removes any concurrency limitations. Defaults to `-1` (unreserved concurrency)
**Default value:** `-1`
`lambda_runtime` (`string`) optional
Runtime environment for Datadog Lambda
**Default value:** `"python3.11"`
List of S3 events to trigger the Lambda notification
**Default value:** `[ ]`
`security_group_ids` (`list(string)`) optional
List of security group IDs to use when the Lambda Function runs in a VPC
**Default value:** `null`
`subnet_ids` (`list(string)`) optional
List of subnet IDs to use when deploying the Lambda Function in a VPC
**Default value:** `null`
`tracing_config_mode` (`string`) optional
Can be either PassThrough or Active. If PassThrough, Lambda will only trace the request from an upstream service if it contains a tracing header with 'sampled=1'. If Active, Lambda will respect any tracing header it receives from an upstream service
**Default value:** `"PassThrough"`
The name of the CloudWatch Log Group for VPC flow logs
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`lambda_forwarder_log_function_arn`
Datadog Lambda forwarder CloudWatch/S3 function ARN
`lambda_forwarder_log_function_name`
Datadog Lambda forwarder CloudWatch/S3 function name
`lambda_forwarder_rds_function_arn`
Datadog Lambda forwarder RDS Enhanced Monitoring function ARN
`lambda_forwarder_rds_function_name`
Datadog Lambda forwarder RDS Enhanced Monitoring function name
`lambda_forwarder_vpc_log_function_arn`
Datadog Lambda forwarder VPC Flow Logs function ARN
`lambda_forwarder_vpc_log_function_name`
Datadog Lambda forwarder VPC Flow Logs function name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
### Providers
- `datadog`, version: `>= 3.3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog-integration` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://github.com/cloudposse-terraform-components/aws-datadog-credentials/tree/v1.535.13/src/modules/datadog_keys) | n/a
`datadog_lambda_forwarder` | 1.10.0 | [`cloudposse/datadog-lambda-forwarder/aws`](https://registry.terraform.io/modules/cloudposse/datadog-lambda-forwarder/aws/1.10.0) | n/a
`iam_roles` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles`](https://github.com/cloudposse-terraform-components/aws-account-map/tree/v1.537.1/src/modules/iam-roles) | n/a
`log_group_prefix` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`datadog_integration_aws_lambda_arn.log_collector`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/integration_aws_lambda_arn) (resource)
- [`datadog_integration_aws_lambda_arn.rds_collector`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/integration_aws_lambda_arn) (resource)
- [`datadog_integration_aws_lambda_arn.vpc_logs_collector`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/integration_aws_lambda_arn) (resource)
- [`datadog_integration_aws_log_collection.main`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/integration_aws_log_collection) (resource)
## Data Sources
The following data sources are used by this module:
---
## datadog-logs-archive
This component provisions Datadog Log Archives. It creates a single log archive pipeline for each AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
Each log archive filters for the tag `env:$env` where `$env` is the environment/account name (e.g. `sbx`, `prd`, `tools`), as well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.
A second bucket is created for CloudTrail, and a CloudTrail is configured to monitor the log archive bucket and log activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added to the `s3_buckets` key for our `datadog-lambda-forwarder` component.
Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a duration of 7 days.
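For example, forwarding the CloudTrail bucket's logs via the `datadog-lambda-forwarder` component might look like the following stack snippet (the bucket ID shown is hypothetical — use the actual CloudTrail bucket ID output by this component):

```yaml
components:
  terraform:
    datadog-lambda-forwarder:
      vars:
        s3_buckets:
          # Hypothetical CloudTrail bucket ID created by datadog-logs-archive
          - "acme-ue1-prod-logs-archive-cloudtrail"
```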
### Prerequisites
- Datadog integration set up in the target environment
- Relies on the Datadog API and App keys added by our Datadog integration component
### Issues, Gotchas, Good-to-Knows
- Destroy/reprovision process
- Because of the protections for S3 buckets, destroying/replacing the bucket may require two passes or a manual bucket delete followed by Terraform cleanup. If the bucket has a full day or more of logs, deleting it manually first helps avoid Terraform timeouts.
- Two-step process to destroy via Terraform:
1) Set `s3_force_destroy` to `true` and apply
2) Set `enabled` to `false` and apply, or run `terraform destroy`
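Sketched as stack configuration, the first pass of the two-step destroy might look like:

```yaml
# Pass 1: permit deletion of non-empty buckets, then apply
components:
  terraform:
    datadog-logs-archive:
      vars:
        enabled: true
        s3_force_destroy: true
# Pass 2: set `enabled: false` and apply again, or run `terraform destroy`
```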
## CloudTrail KMS Encryption
By default, this component creates a KMS key to encrypt CloudTrail logs for compliance and security. The KMS encryption can be configured using these variables:
- `cloudtrail_enable_kms_encryption` (default: `true`) - Enable/disable KMS encryption for CloudTrail logs
- `cloudtrail_kms_key_arn` (default: `null`) - Provide an existing KMS key ARN to use instead of creating a new one
- `cloudtrail_create_kms_key` (default: `true`) - Create a new KMS key when `cloudtrail_kms_key_arn` is not provided
- `cloudtrail_kms_key_deletion_window_in_days` (default: `10`) - KMS key deletion window (7-30 days)
- `cloudtrail_kms_key_enable_rotation` (default: `true`) - Enable automatic KMS key rotation
The created KMS key includes the required policy statements for CloudTrail to encrypt logs and for authorized principals to decrypt them.
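A sketch combining these variables to encrypt with an existing key rather than creating a new one (the ARN below is a placeholder):

```yaml
components:
  terraform:
    datadog-logs-archive:
      vars:
        cloudtrail_enable_kms_encryption: true
        # Use an existing key instead of creating one (placeholder ARN)
        cloudtrail_kms_key_arn: "arn:aws:kms:us-east-1:111111111111:key/00000000-0000-0000-0000-000000000000"
        cloudtrail_create_kms_key: false
```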
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
Stack Level: Global
It's suggested to apply this component to all accounts from which Datadog receives logs.
Example Atmos snippet:
```yaml
components:
terraform:
datadog-logs-archive:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
# additional_query_tags:
# - "forwardername:*-dev-datadog-lambda-forwarder-logs"
# - "account:123456789012"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`access_log_bucket_enabled` (`bool`) optional
Whether to create a dedicated S3 bucket for CloudTrail bucket access logs
**Default value:** `false`
`access_log_bucket_name` (`string`) optional
Name of existing S3 bucket to use for CloudTrail bucket access logs. Only used when access_log_bucket_enabled is false
**Default value:** `""`
`additional_query_tags` (`list(any)`) optional
Additional tags to be used in the query for this archive
**Default value:** `[ ]`
`catchall_enabled` (`bool`) optional
Set to `true` to enable a catchall archive for logs unmatched by any other query. This should only be enabled in one environment/account
**Default value:** `false`
`cloudtrail_create_kms_key` (`bool`) optional
Create a new KMS key for CloudTrail encryption. Only used if cloudtrail_kms_key_arn is not provided and cloudtrail_enable_kms_encryption is true
**Default value:** `true`
`cloudtrail_enable_kms_encryption` (`bool`) optional
Enable KMS encryption for CloudTrail logs
**Default value:** `true`
`cloudtrail_kms_key_arn` (`string`) optional
ARN of an existing KMS key to use for CloudTrail log encryption. If not provided and cloudtrail_enable_kms_encryption is true, a new key will be created
**Default value:** `null`
`object_lock_days_archive` (`number`) optional
Object lock duration for archive buckets in days
**Default value:** `7`
`object_lock_days_cloudtrail` (`number`) optional
Object lock duration for cloudtrail buckets in days
**Default value:** `7`
`object_lock_mode_archive` (`string`) optional
Object lock mode for archive bucket. Possible values are COMPLIANCE or GOVERNANCE
**Default value:** `"COMPLIANCE"`
`object_lock_mode_cloudtrail` (`string`) optional
Object lock mode for cloudtrail bucket. Possible values are COMPLIANCE or GOVERNANCE
**Default value:** `"COMPLIANCE"`
`query_override` (`string`) optional
Override the query for the Datadog archive. If `null`, the query `env:{stage} OR account:{aws account id} OR {additional_query_tags}` is used
**Default value:** `null`
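For example, a hypothetical override that archives logs matching two specific tags instead of the default query (tag values are illustrative):

```yaml
components:
  terraform:
    datadog-logs-archive:
      vars:
        query_override: "env:prod OR service:payments"
```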
`s3_force_destroy` (`bool`) optional
Set to true to delete non-empty buckets when enabled is set to false
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`access_log_bucket_arn`
The ARN of the bucket used for CloudTrail bucket access logs
`access_log_bucket_domain_name`
The FQDN of the bucket used for CloudTrail bucket access logs
`access_log_bucket_id`
The ID (name) of the bucket used for CloudTrail bucket access logs
`archive_id`
The ID of the environment-specific log archive
`bucket_arn`
The ARN of the bucket used for log archive storage
`bucket_domain_name`
The FQDN of the bucket used for log archive storage
`bucket_id`
The ID (name) of the bucket used for log archive storage
`bucket_region`
The region of the bucket used for log archive storage
`catchall_id`
The ID of the catchall log archive
`cloudtrail_bucket_arn`
The ARN of the bucket used for access logging via cloudtrail
`cloudtrail_bucket_domain_name`
The FQDN of the bucket used for access logging via cloudtrail
`cloudtrail_bucket_id`
The ID (name) of the bucket used for access logging via cloudtrail
`cloudtrail_kms_key_alias`
The alias of the KMS key used for CloudTrail log encryption (only if created by this module)
`cloudtrail_kms_key_arn`
The ARN of the KMS key used for CloudTrail log encryption
`cloudtrail_kms_key_id`
The ID of the KMS key used for CloudTrail log encryption (only if created by this module)
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.19`
- `http`, version: `>= 2.1.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.19`
- `http`, version: `>= 2.1.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`archive_bucket` | 4.11.0 | [`cloudposse/s3-bucket/aws`](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/4.11.0) | n/a
`bucket_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`cloudtrail` | 0.24.0 | [`cloudposse/cloudtrail/aws`](https://registry.terraform.io/modules/cloudposse/cloudtrail/aws/0.24.0) | n/a
`cloudtrail_access_log_bucket` | 4.11.0 | [`cloudposse/s3-bucket/aws`](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/4.11.0) | n/a
`cloudtrail_access_log_bucket_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`cloudtrail_bucket_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`cloudtrail_s3_bucket` | 4.11.0 | [`cloudposse/s3-bucket/aws`](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/4.11.0) | n/a
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-datadog-credentials/src/modules/datadog_keys/v1.535.13) | n/a
`iam_roles` | v1.536.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-account-map/src/modules/iam-roles/v1.536.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_kms_alias.cloudtrail`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kms_alias) (resource)
- [`aws_kms_key.cloudtrail`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kms_key) (resource)
- [`datadog_logs_archive.catchall_archive`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/logs_archive) (resource)
- [`datadog_logs_archive.logs_archive`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/logs_archive) (resource)
- [`datadog_logs_archive_order.archive_order`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/logs_archive_order) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.cloudtrail_kms_key_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_ssm_parameter.datadog_aws_role_name`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`http_http.current_order`](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) (data source)
---
## datadog-monitor
This component provisions Datadog monitors and assigns Datadog roles to those monitors.
It depends on the `datadog-configuration` component to obtain Datadog API keys.
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component:
```yaml
components:
terraform:
datadog-monitor:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
local_datadog_monitors_config_paths:
- "catalog/monitors/dev/*.yaml"
```
## Conventions
- Treat Datadog like a separate cloud provider with integrations
([datadog-integration](https://docs.cloudposse.com/components/library/aws/datadog-integration)) into your accounts.
- Use the `catalog` convention to define a set of alerts. You can use ours or define your own:
https://github.com/cloudposse/terraform-datadog-platform/tree/master/catalog/monitors
- The monitors catalog for this component supports Datadog monitor exports. You can use
[the status page of a monitor to export it from 'settings'](https://docs.datadoghq.com/monitors/manage/status/#settings).
You can add the export to existing files or make new ones. Because the export is JSON formatted, it's also YAML
compatible. If you prefer, you can convert the export to YAML using your text editor or a CLI tool like `yq`.
## Adjust Thresholds per Stack
Since there are many parameters that may be adjusted for a given monitor, we define all monitors through YAML. By
convention, we define the default monitors that should apply to all environments, and then adjust the thresholds per
environment. This is accomplished using the `datadog-monitor` component variable `local_datadog_monitors_config_paths`,
which defines the path to the YAML configuration files. By passing a path for `dev` and `prod`, we can define
configurations that are different per environment.
For example, you might have the following settings defined for `prod` and `dev` stacks that override the defaults.
For the `dev` stack:
```yaml
components:
terraform:
datadog-monitor:
vars:
# Located in the components/terraform/datadog-monitor directory
local_datadog_monitors_config_paths:
- catalog/monitors/*.yaml
- catalog/monitors/dev/*.yaml # note this line
```
For the `prod` stack:
```yaml
components:
terraform:
datadog-monitor:
vars:
# Located in the components/terraform/datadog-monitor directory
local_datadog_monitors_config_paths:
- catalog/monitors/*.yaml
- catalog/monitors/prod/*.yaml # note this line
```
Behind the scenes (with `atmos`), we fetch all files matching these glob patterns, template them, and merge them by key. If
we peek into the `*.yaml` and `dev/*.yaml` files above, we see an example like this:
**components/terraform/datadog-monitor/catalog/monitors/elb.yaml**
```yaml
elb-lb-httpcode-5xx-notify:
name: "(ELB) {{ env }} HTTP 5XX client error detected"
type: query alert
query: |
avg(last_15m):max:aws.elb.httpcode_elb_5xx{${context_dd_tags}} by {env,host} > 20
message: |
[${ dd_env }] [ {{ env }} ] lb:[ {{host}} ]
{{#is_warning}}
Number of HTTP 5XX client error codes generated by the load balancer > {{warn_threshold}}%
{{/is_warning}}
{{#is_alert}}
Number of HTTP 5XX client error codes generated by the load balancer > {{threshold}}%
{{/is_alert}}
Check LB
escalation_message: ""
tags: {}
options:
renotify_interval: 60
notify_audit: false
require_full_window: true
include_tags: true
timeout_h: 0
evaluation_delay: 60
new_host_delay: 300
new_group_delay: 0
groupby_simple_monitor: false
renotify_occurrences: 0
renotify_statuses: []
validate: true
notify_no_data: false
no_data_timeframe: 5
priority: 3
threshold_windows: {}
thresholds:
critical: 50
warning: 20
priority: 3
restricted_roles: null
```
**components/terraform/datadog-monitor/catalog/monitors/dev/elb.yaml**
```yaml
elb-lb-httpcode-5xx-notify:
query: |
avg(last_15m):max:aws.elb.httpcode_elb_5xx{${context_dd_tags}} by {env,host} > 30
priority: 2
options:
thresholds:
critical: 30
warning: 10
```
## Key Notes
### Inheritance
The default YAML is applied to every stage it's deployed to. For `dev`, we override the thresholds and priority
for this monitor. Merging is done by monitor key, in this case `elb-lb-httpcode-5xx-notify`.
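To make the merge behavior concrete, here is a minimal Python sketch of a key-wise deep merge. This is illustrative only — the actual merging is performed by the `deepmerge` module used behind the scenes — and the dictionaries below are abridged from the `elb.yaml` examples above:

```python
# Illustrative sketch of key-wise deep merging; not the actual implementation.

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; scalars in `override` win."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Abridged defaults from catalog/monitors/elb.yaml
base = {
    "elb-lb-httpcode-5xx-notify": {
        "priority": 3,
        "options": {
            "renotify_interval": 60,
            "thresholds": {"critical": 50, "warning": 20},
        },
    }
}

# Abridged overrides from catalog/monitors/dev/elb.yaml
dev = {
    "elb-lb-httpcode-5xx-notify": {
        "priority": 2,
        "options": {"thresholds": {"critical": 30, "warning": 10}},
    }
}

merged = deep_merge(base, dev)
```

In the merged result, `priority` and the thresholds come from the `dev` override, while untouched keys such as `renotify_interval` survive from the defaults.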
### Templating
The `${ dd_env }` syntax is Terraform templating. While double braces (`{{ env }}`) refer to Datadog templating, `${ dd_env }`
is a template variable we pass into our monitors. In this example we use it to specify a grouping in the message. This
value is passed in and can be overridden via stacks.
We pass a value via:
```yaml
components:
terraform:
datadog-monitor:
vars:
# Located in the components/terraform/datadog-monitor directory
local_datadog_monitors_config_paths:
- catalog/monitors/*.yaml
- catalog/monitors/dev/*.yaml
# templatefile() is used for all yaml config paths with these variables.
datadog_monitors_config_parameters:
dd_env: "dev"
```
This allows us to further use inheritance from stack configuration to keep our monitors DRY but configurable.
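Conceptually, this works like Terraform's `templatefile()`: `${ ... }` tokens are substituted with values from `datadog_monitors_config_parameters`, while Datadog's double-brace `{{ ... }}` tokens pass through untouched. Here is a hedged Python sketch of that substitution (the real work is done by Terraform, not this code):

```python
import re

# Abridged message line from the elb.yaml example above.
raw = "[${ dd_env }] [ {{ env }} ] lb:[ {{host}} ]"

# Values supplied via datadog_monitors_config_parameters in the stack config.
params = {"dd_env": "dev"}

# Substitute Terraform-style ${ var } tokens; Datadog {{ ... }} templating
# contains no "$" and is left intact for Datadog to render at alert time.
rendered = re.sub(r"\$\{\s*(\w+)\s*\}", lambda m: params[m.group(1)], raw)
print(rendered)  # [dev] [ {{ env }} ] lb:[ {{host}} ]
```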
Another available option is to use our catalog as base monitors and then override them with your specific fine tuning.
```yaml
components:
terraform:
datadog-monitor:
vars:
local_datadog_monitors_config_paths:
- https://raw.githubusercontent.com/cloudposse/terraform-datadog-platform/0.27.0/catalog/monitors/ec2.yaml
- catalog/monitors/ec2.yaml
```
## Other Gotchas
An integration action that checks for `'source_type_name' equals 'Monitor Alert'` will also match synthetics.
If instead we check for `'event_type' equals 'query_alert_monitor'`, that is true only for monitors, because synthetics
are only picked up by an integration action when `event_type` is `synthetics_alert`.
This matters when we need to distinguish between monitors and synthetics in OpsGenie, which is the case when we want
to ensure clean messaging on OpsGenie incidents in Statuspage.
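A filter that keys off `event_type` makes the distinction cleanly. This is an illustrative Python sketch using the field names described above, not the actual integration-action syntax:

```python
# Distinguish monitor alerts from synthetics by event_type, not source_type_name.

def is_monitor_alert(event: dict) -> bool:
    # "query_alert_monitor" is only emitted for monitors; synthetics
    # arrive with event_type "synthetics_alert" instead.
    return event.get("event_type") == "query_alert_monitor"

events = [
    {"source_type_name": "Monitor Alert", "event_type": "query_alert_monitor"},
    {"source_type_name": "Monitor Alert", "event_type": "synthetics_alert"},
]

# source_type_name alone cannot tell these apart; event_type can.
monitor_alerts = [e for e in events if is_monitor_alert(e)]
```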
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`alert_tags` (`list(string)`) optional
List of alert tags to add to all alert messages, e.g. `["@opsgenie"]` or `["@devops", "@opsgenie"]`
**Default value:** `null`
`alert_tags_separator` (`string`) optional
Separator for the alert tags. All strings from the `alert_tags` variable will be joined into one string using the separator and then added to the alert message
**Default value:** `"\n"`
`remote_datadog_monitors_config_paths` (`list(string)`) optional
List of paths to remote Datadog monitor configurations
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`datadog_monitor_names`
Names of the created Datadog monitors
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-datadog-credentials/src/modules/datadog_keys/v1.535.13) | n/a
`datadog_monitors` | 1.7.0 | [`cloudposse/platform/datadog//modules/monitors`](https://registry.terraform.io/modules/cloudposse/platform/datadog/modules/monitors/1.7.0) | n/a
`datadog_monitors_merge` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`local_datadog_monitors_yaml_config` | 1.0.2 | [`cloudposse/config/yaml`](https://registry.terraform.io/modules/cloudposse/config/yaml/1.0.2) | n/a
`remote_datadog_monitors_yaml_config` | 1.0.2 | [`cloudposse/config/yaml`](https://registry.terraform.io/modules/cloudposse/config/yaml/1.0.2) | Convert all Datadog Monitors from YAML config to Terraform map with token replacement using `parameters`
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## datadog-private-location-ecs
This component creates a Datadog Private Location and deploys it to ECS (EC2 or Fargate).
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
**Note:** The app key required for this component needs admin-level permissions if you are using the default roles.
Admins have the permission to write to private locations, which is needed for this component.
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
# stacks/catalog/datadog/private-location.yaml
components:
terraform:
datadog-private-location:
metadata:
component: datadog-private-location-ecs
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: datadog-private-location
task:
task_memory: 512
task_cpu: 256
launch_type: FARGATE
# capacity_provider_strategies takes precedence over launch_type
capacity_provider_strategies:
- capacity_provider: FARGATE_SPOT
weight: 100
base: null
network_mode: awsvpc
desired_count: 1
ignore_changes_desired_count: true
ignore_changes_task_definition: false
use_alb_security_group: false
assign_public_ip: false
propagate_tags: SERVICE
wait_for_steady_state: true
circuit_breaker_deployment_enabled: true
circuit_breaker_rollback_enabled: true
containers:
datadog:
name: datadog-private-location
image: public.ecr.aws/datadog/synthetics-private-location-worker:latest
compatibilities:
- EC2
- FARGATE
- FARGATE_SPOT
log_configuration:
logDriver: awslogs
options: {}
port_mappings: []
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`alb_configuration` (`string`) optional
The ALB configuration to use, specifying which of the cluster's ALB configurations to select
**Default value:** `"default"`
`containers` (`any`) optional
Feed inputs into container definition module
**Default value:** `{ }`
`ecs_cluster_component` (`string`) optional
Component name used to lookup the ECS cluster remote state
**Default value:** `"ecs"`
The description of the private location.
**Default value:** `null`
`task` (`any`) optional
Feed inputs into ecs_alb_service_task module
**Default value:** `{ }`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ecs_cluster_arn`
Selected ECS cluster ARN
`lb_arn`
Selected LB ARN
`lb_listener_https`
Selected LB HTTPS Listener
`lb_sg_id`
Selected LB SG ID
`subnet_ids`
Selected subnet IDs
`vpc_id`
Selected VPC ID
`vpc_sg_id`
Selected VPC SG ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
### Providers
- `datadog`, version: `>= 3.3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`container_definition` | 0.61.2 | [`cloudposse/ecs-container-definition/aws`](https://registry.terraform.io/modules/cloudposse/ecs-container-definition/aws/0.61.2) | n/a
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-datadog-credentials/src/modules/datadog_keys/v1.535.13) | n/a
`ecs_alb_service_task` | 0.78.0 | [`cloudposse/ecs-alb-service-task/aws`](https://registry.terraform.io/modules/cloudposse/ecs-alb-service-task/aws/0.78.0) | n/a
`ecs_cluster` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-account-map/src/modules/iam-roles/v1.537.1) | n/a
`roles_to_principals` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/roles-to-principals`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-account-map/src/modules/roles-to-principals/v1.537.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`datadog_synthetics_private_location.private_location`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/synthetics_private_location) (resource)
## Data Sources
The following data sources are used by this module:
---
## datadog-synthetics
This component provides the ability to implement
[Datadog synthetic tests](https://docs.datadoghq.com/synthetics/guide/).
Synthetic tests allow you to observe how your systems and applications are performing using simulated requests and
actions from the AWS managed locations around the globe, and to monitor internal endpoints from
[Private Locations](https://docs.datadoghq.com/synthetics/private_locations).
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component:
### Stack Configuration
```yaml
components:
terraform:
datadog-synthetics:
metadata:
component: "datadog-synthetics"
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: "datadog-synthetics"
locations:
- "all"
# List of paths to Datadog synthetic test configurations
synthetics_paths:
- "catalog/synthetics/examples/*.yaml"
synthetics_private_location_component_name: "datadog-synthetics-private-location"
private_location_test_enabled: true
```
### Synthetics Configuration Examples
Below are examples of Datadog browser and API synthetic tests.
The synthetic tests are defined in YAML using either the
[Datadog Terraform provider](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/synthetics_test)
schema or the [Datadog Synthetics API](https://docs.datadoghq.com/api/latest/synthetics) schema. See the
`terraform-datadog-platform` Terraform module
[README](https://github.com/cloudposse/terraform-datadog-platform/blob/main/modules/synthetics/README.md) for more
details. We recommend using the API schema so you can more easily create and edit tests using the Datadog web UI and then
import them into this module by downloading the test using the Datadog REST API. (See the Datadog API documentation for
the appropriate `curl` commands to use.)
```yaml
# API schema
my-browser-test:
name: My Browser Test
status: live
type: browser
config:
request:
method: GET
headers: {}
url: https://example.com/login
setCookie: |-
DatadogTest=true
message: "My Browser Test Failed"
options:
device_ids:
- chrome.laptop_large
- edge.tablet
- firefox.mobile_small
ignoreServerCertificateError: false
disableCors: false
disableCsp: false
noScreenshot: false
tick_every: 86400
min_failure_duration: 0
min_location_failed: 1
retry:
count: 0
interval: 300
monitor_options:
renotify_interval: 0
ci:
executionRule: non_blocking
rumSettings:
isEnabled: false
enableProfiling: false
enableSecurityTesting: false
locations:
- aws:us-east-1
- aws:us-west-2
# Terraform schema
my-api-test:
name: "API Test"
message: "API Test Failed"
type: api
subtype: http
tags:
- "managed-by:Terraform"
status: "live"
request_definition:
url: "CHANGEME"
method: GET
request_headers:
Accept-Charset: "utf-8, iso-8859-1;q=0.5"
Accept: "text/json"
options_list:
tick_every: 1800
no_screenshot: false
follow_redirects: true
retry:
count: 2
interval: 10
monitor_options:
renotify_interval: 300
assertion:
- type: statusCode
operator: is
target: "200"
- type: body
operator: validatesJSONPath
targetjsonpath:
operator: is
targetvalue: true
jsonpath: foo.bar
```
These configuration examples are defined in the YAML files in the
[catalog/synthetics/examples](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/datadog-synthetics/catalog/synthetics/examples)
folder.
You can use different subfolders for your use-case. For example, you can have `dev` and `prod` subfolders to define
different synthetic tests for the `dev` and `prod` environments.
Then use the `synthetics_paths` variable to point the component to the synthetic test configuration files.
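For example, a `dev` stack could narrow the component to its own subfolder (the folder layout here is illustrative):

```yaml
components:
  terraform:
    datadog-synthetics:
      vars:
        synthetics_paths:
          - "catalog/synthetics/dev/*.yaml"
```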
The configuration files are processed and transformed in the following order:
- The `datadog-synthetics` component loads the YAML configuration files from the filesystem paths specified by the
`synthetics_paths` variable
- Then, in the
[synthetics](https://github.com/cloudposse/terraform-datadog-platform/blob/master/modules/synthetics/main.tf) module,
the YAML configuration files are merged and transformed from YAML into the
[Datadog Terraform provider](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/synthetics_test)
schema
- And finally, the Datadog Terraform provider uses the
[Datadog Synthetics API](https://docs.datadoghq.com/api/latest/synthetics) specifications to call the Datadog API and
provision the synthetic tests
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`synthetics_paths` (`list(string)`) required
List of paths to Datadog synthetic test configurations
### Optional Variables
`alert_tags` (`list(string)`) optional
List of alert tags to add to all alert messages, e.g. `["@opsgenie"]` or `["@devops", "@opsgenie"]`
**Default value:** `null`
`alert_tags_separator` (`string`) optional
Separator for the alert tags. All strings from the `alert_tags` variable will be joined into one string using the separator and then added to the alert message
**Default value:** `"\n"`
`config_parameters` (`map(any)`) optional
Map of parameter values to interpolate into Datadog Synthetic configurations
**Default value:** `{ }`
`context_tags` (`set(string)`) optional
List of context tags to add to each synthetic check
**Default value:**
```hcl
[
"namespace",
"tenant",
"environment",
"stage"
]
```
`context_tags_enabled` (`bool`) optional
Whether to add context tags to each synthetic check
**Default value:** `true`
`datadog_synthetics_globals` (`any`) optional
Map of keys to add to every monitor
**Default value:** `{ }`
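For instance (the keys shown are illustrative), settings in `datadog_synthetics_globals` are merged into every test definition, which is convenient for organization-wide defaults:

```yaml
datadog_synthetics_globals:
  locations:
    - "aws:us-east-1"
  options_list:
    tick_every: 300
```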
`locations` (`list(string)`) optional
Array of locations used to run synthetic tests
**Default value:** `[ ]`
`private_location_test_enabled` (`bool`) optional
Use private locations or the public locations provided by Datadog
**Default value:** `false`
`synthetics_private_location_component_name` (`string`) optional
The name of the Datadog synthetics private location component
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`datadog_synthetics_test_ids`
IDs of the created Datadog synthetic tests
`datadog_synthetics_test_maps`
Map (name: id) of the created Datadog synthetic tests
`datadog_synthetics_test_monitor_ids`
IDs of the monitors associated with the Datadog synthetics tests
`datadog_synthetics_test_names`
Names of the created Datadog synthetic tests
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-datadog-credentials/src/modules/datadog_keys/v1.535.13) | n/a
`datadog_synthetics` | 1.7.0 | [`cloudposse/platform/datadog//modules/synthetics`](https://registry.terraform.io/modules/cloudposse/platform/datadog/modules/synthetics/1.7.0) | n/a
`datadog_synthetics_merge` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`datadog_synthetics_private_location` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`datadog_synthetics_yaml_config` | 1.0.2 | [`cloudposse/config/yaml`](https://registry.terraform.io/modules/cloudposse/config/yaml/1.0.2) | Convert all Datadog synthetics from YAML config to Terraform map
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## datadog-synthetics-private-location
This component provisions a Datadog synthetics private location and deploys the private location agent to an EKS cluster.
Private locations allow you to monitor internal-facing applications or any private URLs that are not accessible from the public internet.
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
___
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to overwrite the catalog values.
```yaml
components:
terraform:
datadog-synthetics-private-location:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: "datadog-synthetics-private-location"
description: "Datadog Synthetics Private Location Agent"
kubernetes_namespace: "monitoring"
create_namespace: true
# https://github.com/DataDog/helm-charts/tree/main/charts/synthetics-private-location
repository: "https://helm.datadoghq.com"
chart: "synthetics-private-location"
chart_version: "0.15.15"
timeout: 180
wait: true
atomic: true
cleanup_on_fail: true
```
## Synthetics Private Location Config
```shell
docker run --rm datadog/synthetics-private-location-worker --help
```
```
The Datadog Synthetics Private Location Worker runs tests on privately accessible websites and brings results to Datadog
Access keys:
--accessKey Access Key for Datadog API authentication [string]
--secretAccessKey Secret Access Key for Datadog API authentication [string]
--datadogApiKey Datadog API key to send browser tests artifacts (e.g. screenshots) [string]
--privateKey Private Key used to decrypt test configurations [array]
--publicKey Public Key used by Datadog to encrypt test results. Composed of --publicKey.pem and --publicKey.fingerprint
Worker configuration:
--site Datadog site (datadoghq.com, us3.datadoghq.com, datadoghq.eu or ddog-gov.com) [string] [required] [default: "datadoghq.com"]
--concurrency Maximum number of tests executed in parallel [number] [default: 10]
--maxNumberMessagesToFetch Maximum number of tests that can be fetched at the same time [number] [default: 10]
--proxyDatadog Proxy URL used to send requests to Datadog [string] [default: none]
--dumpConfig Display non-secret worker configuration parameters [boolean]
--enableStatusProbes Enable the probes system for Kubernetes [boolean] [default: false]
--statusProbesPort The port for the probes server to listen on [number] [default: 8080]
--config Path to JSON config file [default: "/etc/datadog/synthetics-check-runner.json"]
Tests configuration:
--maxTimeout Maximum test execution duration, in milliseconds [number] [default: 60000]
--proxyTestRequests Proxy URL used to send test requests [string] [default: none]
--proxyIgnoreSSLErrors Discard SSL errors when using a proxy [boolean] [default: false]
--dnsUseHost Use local DNS config for API tests and HTTP steps in browser tests (currently ["192.168.65.5"]) [boolean] [default: true]
--dnsServer DNS server IPs used in given order for API tests and HTTP steps in browser tests (--dnsServer="1.0.0.1" --dnsServer="9.9.9.9") and after local DNS config, if --dnsUseHost is present [array] [default: ["8.8.8.8","1.1.1.1"]]
Network filtering:
--allowedIPRanges Grant access to IP ranges (has precedence over --blockedIPRanges) [default: none]
--blockedIPRanges Deny access to IP ranges (e.g. --blockedIPRanges.4="127.0.0.0/8" --blockedIPRanges.6="::1/128") [default: none]
--enableDefaultBlockedIpRanges Deny access to all reserved IP ranges, except for those explicitly set in --allowedIPRanges [boolean] [default: false]
--allowedDomainNames Grant access to domain names for API tests (has precedence over --blockedDomainNames, e.g. --allowedDomainNames="*.example.com") [array] [default: none]
--blockedDomainNames Deny access to domain names for API tests (e.g. --blockedDomainNames="example.org" --blockedDomainNames="*.com") [array] [default: none]
Options:
--enableIPv6 Use IPv6 to perform tests. (Warning: IPv6 in Docker is only supported with Linux host) [boolean] [default: false]
--version Show version number [boolean]
-f, --logFormat Format log output [choices: "pretty", "pretty-compact", "json"] [default: "pretty"]
-h, --help Show help [boolean]
Volumes:
/etc/datadog/certs/ .pem certificates present in this directory will be imported and trusted as certificate authorities for API and browser tests
Environment variables:
Command options can also be set via environment variables (DATADOG_API_KEY="...", DATADOG_WORKER_CONCURRENCY="15", DATADOG_DNS_USE_HOST="true")
For options that accept multiple arguments, JSON string array notation should be used (DATADOG_TESTS_DNS_SERVER='["8.8.8.8", "1.1.1.1"]')
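As a sketch of that environment-variable style (the values are illustrative), the worker flags above can be supplied through Docker `--env` options, with JSON string array notation for multi-value settings:

```shell
# Configure the worker through environment variables;
# --dumpConfig displays the resulting non-secret configuration
docker run --rm \
  --env DATADOG_SITE="datadoghq.com" \
  --env DATADOG_WORKER_CONCURRENCY="15" \
  --env DATADOG_TESTS_DNS_SERVER='["8.8.8.8", "1.1.1.1"]' \
  datadog/synthetics-private-location-worker --dumpConfig
```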
Supported environment variables:
DATADOG_ACCESS_KEY,
DATADOG_API_KEY,
DATADOG_PRIVATE_KEY,
DATADOG_PUBLIC_KEY_FINGERPRINT,
DATADOG_PUBLIC_KEY_PEM,
DATADOG_SECRET_ACCESS_KEY,
DATADOG_SITE,
DATADOG_WORKER_CONCURRENCY,
DATADOG_WORKER_LOG_FORMAT,
DATADOG_WORKER_MAX_NUMBER_MESSAGES_TO_FETCH,
DATADOG_WORKER_PROXY,
DATADOG_TESTS_DNS_SERVER,
DATADOG_TESTS_DNS_USE_HOST,
DATADOG_TESTS_PROXY,
DATADOG_TESTS_PROXY_IGNORE_SSL_ERRORS,
DATADOG_TESTS_TIMEOUT,
DATADOG_ALLOWED_IP_RANGES_4,
DATADOG_ALLOWED_IP_RANGES_6,
DATADOG_BLOCKED_IP_RANGES_4,
DATADOG_BLOCKED_IP_RANGES_6,
DATADOG_ENABLE_DEFAULT_WINDOWS_FIREWALL_RULES,
DATADOG_ALLOWED_DOMAIN_NAMES,
DATADOG_BLOCKED_DOMAIN_NAMES,
DATADOG_WORKER_ENABLE_STATUS_PROBES,
DATADOG_WORKER_STATUS_PROBES_PORT
```
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended
`kubernetes_namespace` (`string`) required
Kubernetes namespace to install the release into
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used
**Default value:** `true`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`description` (`string`) optional
Release description attribute (visible in the history)
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`private_location_tags` (`set(string)`) optional
List of static tags to associate with the synthetics private location
**Default value:** `[ ]`
`repository` (`string`) optional
Repository URL where to locate the requested chart
**Default value:** `null`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
`synthetics_private_location_id`
Synthetics private location ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
- `helm`, version: `>= 2.3.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
- `local`, version: `>= 1.3`
- `template`, version: `>= 2.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-datadog-credentials/src/modules/datadog_keys/v1.535.13) | n/a
`datadog_synthetics_private_location` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-account-map/src/modules/iam-roles/v1.537.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`datadog_synthetics_private_location.this`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/synthetics_private_location) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## delegated-administrator
Description of this component.
## Usage
**Stack Level**: Regional or Global
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
foo:
vars:
enabled: true
```
## Variables
### Required Variables
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
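For instance, a `descriptor_formats` value like the following (the `stack` descriptor name is illustrative) would add a `stack` key to the `descriptors` output, rendered from the tenant, environment, and stage labels:

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}
```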
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
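As a sketch of how these context inputs combine, instantiating `cloudposse/label/null` directly (the label values are illustrative) shows how the ID elements are joined by the delimiter in `label_order`:

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "eg"
  environment = "uw2"
  stage       = "prod"
  name        = "app"
  delimiter   = "-"

  # With the default label_order, the module renders `id` as "eg-uw2-prod-app"
}
```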
## Outputs
`mock`
Mock output example for the Cloud Posse Terraform component template
---
## endpoint
This component provisions DMS endpoints.
## Variables
### Required Variables
`endpoint_type` (`string`) required
Type of endpoint. Valid values are `source`, `target`
`engine_name` (`string`) required
Type of engine for the endpoint. Valid values are `aurora`, `aurora-postgresql`, `azuredb`, `db2`, `docdb`, `dynamodb`, `elasticsearch`, `kafka`, `kinesis`, `mariadb`, `mongodb`, `mysql`, `opensearch`, `oracle`, `postgres`, `redshift`, `s3`, `sqlserver`, `sybase`
`region` (`string`) required
AWS Region
### Optional Variables
`certificate_arn` (`string`) optional
Certificate ARN
**Default value:** `null`
`database_name` (`string`) optional
Name of the endpoint database
**Default value:** `null`
`elasticsearch_settings` (`map(any)`) optional
Configuration block for OpenSearch settings
**Default value:** `null`
`extra_connection_attributes` (`string`) optional
Additional attributes associated with the connection to the source database
**Default value:** `""`
`kafka_settings` (`map(any)`) optional
Configuration block for Kafka settings
**Default value:** `null`
`kinesis_settings` (`map(any)`) optional
Configuration block for Kinesis settings
**Default value:** `null`
`kms_key_arn` (`string`) optional
(Required when engine_name is `mongodb`, optional otherwise). ARN for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key
**Default value:** `null`
`mongodb_settings` (`map(any)`) optional
Configuration block for MongoDB settings
**Default value:** `null`
`password` (`string`) optional
Password to be used to login to the endpoint database
**Default value:** `""`
`password_path` (`string`) optional
If set, the path in AWS SSM Parameter Store to fetch the password for the DMS admin user
**Default value:** `""`
`port` (`number`) optional
Port used by the endpoint database
**Default value:** `null`
`redshift_settings` (`map(any)`) optional
Configuration block for Redshift settings
**Default value:** `null`
`s3_settings` (`map(any)`) optional
Configuration block for S3 settings
**Default value:** `null`
`secrets_manager_access_role_arn` (`string`) optional
ARN of the IAM role that specifies AWS DMS as the trusted entity and has the required permissions to access the value in SecretsManagerSecret
**Default value:** `null`
`secrets_manager_arn` (`string`) optional
Full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the endpoint connection details. Supported only for engine_name as aurora, aurora-postgresql, mariadb, mongodb, mysql, oracle, postgres, redshift or sqlserver
**Default value:** `null`
`server_name` (`string`) optional
Host name of the database server
**Default value:** `null`
`service_access_role` (`string`) optional
ARN used by the service access IAM role for DynamoDB endpoints
**Default value:** `null`
`ssl_mode` (`string`) optional
The SSL mode to use for the connection. Can be one of `none`, `require`, `verify-ca`, `verify-full`
**Default value:** `"none"`
`username` (`string`) optional
User name to be used to login to the endpoint database
**Default value:** `""`
`username_path` (`string`) optional
If set, the path in AWS SSM Parameter Store to fetch the username for the DMS admin user
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`dms_endpoint_arn`
DMS endpoint ARN
`dms_endpoint_id`
DMS endpoint ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `aws`, version: `>= 4.26.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.26.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dms_endpoint` | 2.0.0 | [`cloudposse/dms/aws//modules/dms-endpoint`](https://registry.terraform.io/modules/cloudposse/dms/aws/modules/dms-endpoint/2.0.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.username`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
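A minimal Atmos stack snippet for a source endpoint might look like the following sketch (the component instance name, server name, database details, and SSM parameter paths are illustrative):

```yaml
components:
  terraform:
    dms/endpoint/source-example:
      metadata:
        component: dms/endpoint
      vars:
        enabled: true
        name: source-example
        endpoint_type: source
        engine_name: postgres
        server_name: db.example.com
        database_name: app
        port: 5432
        username_path: /dms/source-example/admin/username
        password_path: /dms/source-example/admin/password
        ssl_mode: require
```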
---
## iam
This component provisions IAM roles required for DMS.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
```yaml
components:
terraform:
dms/iam:
metadata:
component: dms/iam
settings:
spacelift:
workspace_enabled: true
autodeploy: false
vars:
enabled: true
name: dms
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`dms_cloudwatch_logs_role_arn`
DMS CloudWatch Logs role ARN
`dms_redshift_s3_role_arn`
DMS Redshift S3 role ARN
`dms_vpc_management_role_arn`
DMS VPC management role ARN
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `aws`, version: `>= 4.26.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dms_iam` | 2.0.0 | [`cloudposse/dms/aws//modules/dms-iam`](https://registry.terraform.io/modules/cloudposse/dms/aws/modules/dms-iam/2.0.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## replication-instance
This component provisions DMS replication instances.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
```yaml
components:
terraform:
dms/replication-instance/defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
autodeploy: false
vars:
enabled: true
allocated_storage: 50
apply_immediately: true
auto_minor_version_upgrade: true
allow_major_version_upgrade: false
availability_zone: null
engine_version: "3.4"
multi_az: false
preferred_maintenance_window: "sun:10:30-sun:14:30"
publicly_accessible: false
dms-replication-instance-t2-small:
metadata:
component: dms/replication-instance
inherits:
- dms/replication-instance/defaults
vars:
# Replication instance name must start with a letter, only contain alphanumeric characters and hyphens
name: "t2-small"
replication_instance_class: "dms.t2.small"
allocated_storage: 50
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`allocated_storage` (`number`) optional
The amount of storage (in gigabytes) to be initially allocated for the replication instance. Default: 50, Min: 5, Max: 6144
**Default value:** `50`
`allow_major_version_upgrade` (`bool`) optional
Indicates that major version upgrades are allowed
**Default value:** `false`
`apply_immediately` (`bool`) optional
Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource
**Default value:** `true`
`auto_minor_version_upgrade` (`bool`) optional
Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window
**Default value:** `true`
`availability_zone` (`any`) optional
The EC2 Availability Zone that the replication instance will be created in
**Default value:** `null`
`engine_version` (`string`) optional
The engine version number of the replication instance
**Default value:** `"3.5.4"`
`multi_az` (`bool`) optional
Specifies if the replication instance is a multi-az deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to true
**Default value:** `false`
`preferred_maintenance_window` (`string`) optional
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC)
**Default value:** `"sun:10:30-sun:14:30"`
`publicly_accessible` (`bool`) optional
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address
**Default value:** `false`
`replication_instance_class` (`string`) optional
The compute and memory capacity of the replication instance as specified by the replication instance class
**Default value:** `"dms.t3.small"`
A convenience that adds to the rules a rule that allows all egress.
If this is false and no egress rules are specified via `rules` or `rule-matrix`, then no egress will be allowed.
**Default value:** `true`
Set `true` to enable terraform `create_before_destroy` behavior on the created security group.
We only recommend setting this `false` if you are importing an existing security group
that you do not want replaced and therefore need full control over its name.
Note that changing this value will always cause the security group to be replaced.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
---
## replication-task
This component provisions DMS replication tasks.
## Variables
### Required Variables
`source_endpoint_component_name` (`string`) required
DMS source endpoint component name (used to get the ARN of the DMS source endpoint)
`table_mappings_file` (`string`) required
Path to the JSON file that contains the table mappings. See https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html for more details
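For reference, a minimal table-mappings file that selects all tables in a single schema might look like the following (the schema name and rule name are illustrative; see the AWS documentation linked above for the full rule syntax):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-public-schema",
      "object-locator": {
        "schema-name": "public",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```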
DMS target endpoint component name (used to get the ARN of the DMS target endpoint)
### Optional Variables
`cdc_start_position` (`string`) optional
Indicates when you want a change data capture (CDC) operation to start. The value can be in date, checkpoint, or LSN/SCN format depending on the source engine. Conflicts with `cdc_start_time`
**Default value:** `null`
`cdc_start_time` (`string`) optional
The Unix timestamp integer for the start of the Change Data Capture (CDC) operation. Conflicts with `cdc_start_position`
**Default value:** `null`
`migration_type` (`string`) optional
The migration type. Can be one of `full-load`, `cdc`, `full-load-and-cdc`
**Default value:** `"full-load-and-cdc"`
Path to the JSON file that contains the task settings. See https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html for more details
**Default value:** `null`
`start_replication_task` (`bool`) optional
If set to `true`, the created replication tasks will be started automatically
**Default value:** `true`
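Putting the task variables above together, a hypothetical Atmos stack configuration might look like this (the table-mappings file path and values shown are illustrative, not prescriptive):

```yaml
components:
  terraform:
    dms-replication-task:
      vars:
        migration_type: full-load-and-cdc
        # hypothetical path, relative to the component directory
        table_mappings_file: config/table-mappings.json
        start_replication_task: true
```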
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
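For example, a hypothetical descriptor named `stack` that renders the stage and name joined by a slash could be configured like this (the descriptor name and format string are illustrative):

```hcl
descriptor_formats = {
  stack = {
    format = "%v/%v"
    labels = ["stage", "name"]
  }
}
```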
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`dms_replication_task_arn`
DMS replication task ARN
`dms_replication_task_id`
DMS replication task ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.2.0`
- `aws`, version: `>= 4.26.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dms_endpoint_source` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`dms_endpoint_target` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`dms_replication_instance` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`dms_replication_task` | 2.0.0 | [`cloudposse/dms/aws//modules/dms-replication-task`](https://registry.terraform.io/modules/cloudposse/dms/aws/modules/dms-replication-task/2.0.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## dns-delegated
This component provisions a delegated DNS zone for managing subdomains delegated from a primary DNS account.
The primary DNS zone must already exist via the dns-primary component. Use dns-delegated when you need a
subdomain (e.g. prod.example.com) managed in a different account from the primary root zone (e.g. example.com).
If you are deploying a root zone (e.g. example.com) and only a single account needs to manage or update the
zone, use the dns-primary component instead. See “Why not use dns-delegated for all vanity domains?” for details.
This component also provisions a wildcard ACM certificate for the delegated subdomain. Deploy it globally (once per
account) rather than regionally; see “Why should the dns-delegated component be deployed globally rather than
regionally?” for rationale.
Note: After delegating a subdomain (e.g. prod.example.com) to an account, that account can create deeper
subdomains (e.g. api.use1.prod.example.com) without additional delegation, but additional TLS certificates may be
required because a wildcard certificate only matches a single level. Use the acm component for additional certs.
### Limitations
Switching a hosted zone from public to private can cause issues because the provider will try to perform an update
instead of a ForceNew. It is not possible to toggle between public and private. If changing from public to private
and downtime is acceptable, delete records and the hosted zone, destroy the Terraform component, and re-deploy with
new settings.
If downtime is acceptable (workaround):
1. Delete anything using ACMs connected to previous hosted zones
2. Delete ACMs
3. Delete entries in the public hosted zone
4. Delete the hosted zone
5. Use atmos to destroy dns-delegated to remove the public hosted zone
6. Use atmos to deploy dns-delegated for the private hosted zone
7. Re-deploy dependent components (aurora-postgres, msk, external-dns, echo-server, etc.) to the new hosted zone
If downtime is not acceptable (workaround):
1. Create a new virtual component of dns-delegated with the correct private inputs
2. Deploy the new dns-delegated-private component
3. Re-deploy dependent components to the new hosted zone
### Caveats
- Do not create an NS delegation for a subdomain within a zone that is not authoritative for that subdomain (e.g. if
a parent subdomain is already delegated). Route 53 Public DNS allows conflicting delegations, which can cause
inconsistent resolution depending on the resolver’s strategy (see RFC7816 “QName Minimization”). Verify proper
resolution with multiple resolvers (e.g. 8.8.8.8 and 1.1.1.1).
## Usage
**Stack Level**: Global
Use this component in global stacks for any accounts where you host services that need DNS records on a delegated
subdomain of the root domain.
Public hosted zone example: devplatform.example.net is created and the example.net zone in the primary DNS account
contains a record delegating DNS to the new hosted zone. This also creates an ACM record.
```yaml
components:
  terraform:
    dns-delegated:
      vars:
        zone_config:
          - subdomain: devplatform
            zone_name: example.net
        request_acm_certificate: true
        dns_private_zone_enabled: false
        # dns_soa_config configures the SOA record for the zone:
        #   - awsdns-hostmaster.amazon.com. ; AWS default value for administrator email address
        #   - 1 ; serial number, not used by AWS
        #   - 7200 ; refresh time in seconds for secondary DNS servers to refresh SOA record
        #   - 900 ; retry time in seconds for secondary DNS servers to retry failed SOA record update
        #   - 1209600 ; expire time in seconds (1209600 is 2 weeks) for secondary DNS servers to remove SOA record if they cannot refresh it
        #   - 60 ; nxdomain TTL, or time in seconds for secondary DNS servers to cache negative responses
        # See SOA Record Documentation for more information.
        dns_soa_config: "awsdns-hostmaster.amazon.com. 1 7200 900 1209600 60"
```
Private hosted zone example: devplatform.example.net is created and the example.net zone in the primary DNS account
contains a record delegating DNS to the new hosted zone. This creates an ACM record using a Private CA.
```yaml
components:
  terraform:
    dns-delegated:
      vars:
        zone_config:
          - subdomain: devplatform
            zone_name: example.net
        request_acm_certificate: true
        dns_private_zone_enabled: true
        vpc_region_abbreviation_type: short
        vpc_primary_environment_name: use2
        certificate_authority_component_name: private-ca-subordinate
        certificate_authority_stage_name: pca
        certificate_authority_environment_name: use2
        certificate_authority_component_key: subordinate
```
## Variables
### Required Variables
Enable or disable AWS Shield Advanced protection for Route53 Zones. If set to 'true', a subscription to AWS Shield Advanced must exist in this account.
**Default value:** `false`
Use this component key e.g. `root` or `mgmt` to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
Use this component name to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
`certificate_authority_enabled` (`bool`) optional
Whether to use the certificate authority or not
**Default value:** `false`
Use this environment name to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
Use this stage name to read from the remote state to get the certificate_authority_arn if using an authority type of SUBORDINATE
**Default value:** `null`
`dns_private_zone_enabled` (`bool`) optional
Whether to set the zone to public or private
**Default value:** `false`
`dns_soa_config` (`string`) optional
Root domain name DNS SOA record:
- awsdns-hostmaster.amazon.com. ; AWS default value for administrator email address
- 1 ; serial number, not used by AWS
- 7200 ; refresh time in seconds for secondary DNS servers to refresh SOA record
- 900 ; retry time in seconds for secondary DNS servers to retry failed SOA record update
- 1209600 ; expire time in seconds (1209600 is 2 weeks) for secondary DNS servers to remove SOA record if they cannot refresh it
- 60 ; nxdomain TTL, or time in seconds for secondary DNS servers to cache negative responses
See [SOA Record Documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html) for more information.
**Default value:** `"awsdns-hostmaster.amazon.com. 1 7200 900 1209600 60"`
`request_acm_certificate` (`bool`) optional
Whether or not to create an ACM certificate
**Default value:** `true`
`vpc_component_name` (`string`) optional
The name of a VPC component
**Default value:** `"vpc"`
Type of VPC abbreviation (either `fixed` or `short`) to use in names. See https://github.com/cloudposse/terraform-aws-utils for details.
**Default value:** `"fixed"`
The names of the environments where secondary VPCs are deployed
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`acm_ssm_parameter`
The SSM parameter for the ACM cert.
`certificate`
The ACM certificate information.
`default_dns_zone_id`
Default root DNS zone ID for the cluster
`default_domain_name`
Default root domain name (e.g. dev.example.net) for the cluster
`route53_hosted_zone_protections`
List of AWS Shield Advanced Protections for Route53 Hosted Zones.
`zones`
Subdomain and zone config
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`acm` | 0.18.0 | [`cloudposse/acm-request-certificate/aws`](https://registry.terraform.io/modules/cloudposse/acm-request-certificate/aws/0.18.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`private_ca` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_route53_record.root_ns`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
- [`aws_route53_record.soa`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
- [`aws_route53_zone.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_zone) (resource)
- [`aws_route53_zone.private`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_zone) (resource)
- [`aws_route53_zone_association.secondary`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_zone_association) (resource)
- [`aws_shield_protection.shield_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/shield_protection) (resource)
- [`aws_ssm_parameter.acm_arn`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_route53_zone.root_zone`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/route53_zone) (data source)
---
## dns-primary
This component is responsible for provisioning the primary DNS zones into an AWS account. By convention, we typically
provision the primary DNS zones in the `dns` account. The primary account for branded zones (e.g. `example.com`),
however, would be the `prod` account, while a staging zone (e.g. `example.qa`) might live in the `staging` account.
The zones from the primary DNS zone are then expected to be delegated to other accounts via
[the `dns-delegated` component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/dns-delegated).
Additionally, external records can be created on the primary DNS zones via the `record_config` variable.
## Architecture
### Summary
The `dns` account gets a single `dns-primary` component deployed. Every other account that needs DNS entries gets a
single `dns-delegated` component, chaining off the domains in the `dns` account. Optionally, accounts can have a single
`dns-primary` component of their own, to have apex domains (which Cloud Posse calls "vanity domains"). Typically, these
domains are configured with CNAME (or apex alias) records to point to service domain entries.
### Details
The purpose of the `dns` account is to host root domains shared by several accounts (with each account being delegated
its own subdomain) and to be the owner of domain registrations purchased from Amazon.
The purpose of the `dns-primary` component is to provision AWS Route53 zones for the root domains. These zones, once
provisioned, must be manually configured into the Domain Name Registrar's records as name servers. A single component
can provision multiple domains and, optionally, associated ACM (SSL) certificates in a single account.
Cloud Posse's architecture expects root domains shared by several accounts to be provisioned in the `dns` account with
`dns-primary` and delegated to other accounts using the `dns-delegated` component, with each account getting its own
subdomain corresponding to a Route 53 zone in the delegated account. Cloud Posse's architecture requires at least one
such domain, called "the service domain", be provisioned. The service domain is not customer facing, and is provisioned
to allow fully automated construction of host names without any concerns about how they look. Although they are not
secret, the public will never see them.
Root domains used by a single account are provisioned with the `dns-primary` component directly in that account. Cloud
Posse calls these "vanity domains". These can be whatever marketing, PR, or other stakeholders want them to be.
After a domain is provisioned in the `dns` account, the `dns-delegated` component can provision one or more subdomains
for each account, and, optionally, associated ACM certificates. For the service domain, Cloud Posse recommends using the
account name as the delegated subdomain (either directly, e.g. "plat-dev", or as multiple subdomains, e.g. "dev.plat")
because that allows `dns-delegated` to automatically provision any required host name in that zone.
There is no automated support for `dns-primary` to provision root domains outside of the `dns` account that are to be
shared by multiple accounts, and such usage is not recommended. If you must, `dns-primary` can provision a subdomain of
a root domain that is provisioned in another account (not `dns`). In this case, the delegation of the subdomain must be
done manually by entering the name servers into the parent domain's records (instead of in the Registrar's records).
The architecture does not support other configurations, or non-standard component names.
## Usage
**Stack Level**: Global
Here's an example snippet for how to use this component. This component should only be applied once as the DNS zones it
creates are global. This is typically done via the DNS stack (e.g. `gbl-dns.yaml`).
```yaml
components:
  terraform:
    dns-primary:
      vars:
        domain_names:
          - example.net
        record_config:
          - root_zone: example.net
            name: ""
            type: A
            ttl: 60
            records:
              - 53.229.170.215
          # using a period at the end of a name
          - root_zone: example.net
            name: www.
            type: CNAME
            ttl: 60
            records:
              - example.net
          # using numbers as a name requires quotes
          - root_zone: example.net
            name: "123456."
            type: CNAME
            ttl: 60
            records:
              - example.net
          # strings that are very long, e.g. a DKIM key
          - root_zone: example.net
            name: service._domainkey.
            type: CNAME
            ttl: 60
            records:
              - !!str |-
                YourVeryLongStringGoesHere
```
:::tip
Use the [acm](https://docs.cloudposse.com/components/library/aws/acm) component for more advanced certificate
requirements.
:::
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`alias_record_config` optional
DNS Alias Record config
**Type:**
```hcl
list(object({
  root_zone              = string
  name                   = string
  type                   = string
  zone_id                = string
  record                 = string
  evaluate_target_health = bool
}))
```
**Default value:** `[ ]`
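As a minimal sketch, `alias_record_config` could point an apex record at a CloudFront distribution. The distribution domain below is a placeholder; `Z2FDTNDATAQYW2` is CloudFront's fixed Route 53 hosted zone ID:

```yaml
alias_record_config:
  - root_zone: example.net
    name: ""
    type: A
    zone_id: Z2FDTNDATAQYW2
    record: d111111abcdef8.cloudfront.net
    evaluate_target_health: false
```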
`dns_soa_config` (`string`) optional
Root domain name DNS SOA record:
- awsdns-hostmaster.amazon.com. ; AWS default value for administrator email address
- 1 ; serial number, not used by AWS
- 7200 ; refresh time in seconds for secondary DNS servers to refresh SOA record
- 900 ; retry time in seconds for secondary DNS servers to retry failed SOA record update
- 1209600 ; expire time in seconds (1209600 is 2 weeks) for secondary DNS servers to remove SOA record if they cannot refresh it
- 60 ; nxdomain TTL, or time in seconds for secondary DNS servers to cache negative responses
See [SOA Record Documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html) for more information.
**Default value:** `"awsdns-hostmaster.amazon.com. 1 7200 900 1209600 60"`
`domain_names` (`list(string)`) optional
Root domain name list, e.g. `["example.net"]`
**Default value:** `null`
`record_config` optional
DNS Record config
**Type:**
```hcl
list(object({
  root_zone = string
  name      = string
  type      = string
  ttl       = string
  records   = list(string)
}))
```
**Default value:** `[ ]`
`request_acm_certificate` (`bool`) optional
Whether or not to request an ACM certificate for each domain
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`.
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
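As an illustration, here is a single hypothetical descriptor named `stack` that joins two labels (the descriptor name and format are illustrative; only the `format`/`labels` shape is prescribed by the module):

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v"             # Terraform format string passed to format()
    labels = ["tenant", "stage"] # label values, normalized as they appear in `id`
  }
}
```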
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`acms`
ACM certificates for domains
`zones`
DNS zones
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`acm` | 0.18.0 | [`cloudposse/acm-request-certificate/aws`](https://registry.terraform.io/modules/cloudposse/acm-request-certificate/aws/0.18.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_route53_record.aliasrec`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
- [`aws_route53_record.dnsrec`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
- [`aws_route53_record.soa`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
- [`aws_route53_zone.root`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_zone) (resource)
## Data Sources
The following data sources are used by this module:
---
## documentdb
This component is responsible for provisioning DocumentDB clusters.
## Usage
**Stack Level**: Regional
Here is an example snippet for how to use this component:
```yaml
components:
  terraform:
    documentdb:
      backend:
        s3:
          workspace_key_prefix: documentdb
      vars:
        enabled: true
        cluster_size: 3
        engine: docdb
        engine_version: 3.6.0
        cluster_family: docdb3.6
        retention_period: 35
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region.
### Optional Variables
`apply_immediately` (`bool`) optional
Specifies whether any cluster modifications are applied immediately, or during the next maintenance window
**Default value:** `true`
`auto_minor_version_upgrade` (`bool`) optional
Specifies whether any minor engine upgrades will be applied automatically to the DB instance during the maintenance window or not
**Default value:** `true`
`cluster_family` (`string`) optional
The family of the DocumentDB cluster parameter group. For more details, see https://docs.aws.amazon.com/documentdb/latest/developerguide/db-cluster-parameter-group-create.html
**Default value:** `"docdb3.6"`
`cluster_parameters` optional
List of DB parameters to apply
**Type:**
```hcl
list(object({
  apply_method = string
  name         = string
  value        = string
}))
```
**Default value:** `[ ]`
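For example, `cluster_parameters` can toggle a cluster-level setting such as `tls` (a sketch; verify parameter names and valid values against the DocumentDB cluster parameter documentation):

```yaml
vars:
  cluster_parameters:
    # tls is a static parameter, so it must be applied on reboot
    - name: tls
      value: disabled
      apply_method: pending-reboot
```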
`cluster_size` (`number`) optional
Number of DB instances to create in the cluster
**Default value:** `3`
`db_port` (`number`) optional
DocumentDB port
**Default value:** `27017`
`deletion_protection_enabled` (`bool`) optional
A value that indicates whether the DB cluster has deletion protection enabled
**Default value:** `false`
Whether to add the Security Group managed by the EKS cluster in the same regional stack to the ingress allowlist of the DocumentDB cluster.
**Default value:** `true`
`enable_performance_insights` (`bool`) optional
Specifies whether to enable Performance Insights for the DB Instance.
**Default value:** `false`
List of log types to export to cloudwatch. The following log types are supported: `audit`, `error`, `general`, `slowquery`
**Default value:** `[ ]`
`encryption_enabled` (`bool`) optional
Specifies whether the DB cluster is encrypted
**Default value:** `true`
`engine` (`string`) optional
The name of the database engine to be used for this DB cluster. Defaults to `docdb`. Valid values: `docdb`
**Default value:** `"docdb"`
`engine_version` (`string`) optional
The version number of the database engine to use
**Default value:** `"3.6.0"`
`instance_class` (`string`) optional
The instance class to use. For more details, see https://docs.aws.amazon.com/documentdb/latest/developerguide/db-instance-classes.html#db-instance-class-specs
**Default value:** `"db.r4.large"`
`manage_master_user_password` (`bool`) optional
Whether to manage the master user password using AWS Secrets Manager.
**Default value:** `null`
`master_password` (`string`) optional
(Required unless a snapshot_identifier is provided) Password for the master DB user. Note that this may show up in logs, and it will be stored in the state file. Please refer to the DocumentDB Naming Constraints
**Default value:** `null`
`master_username` (`string`) optional
(Required unless a snapshot_identifier is provided) Username for the master DB user
**Default value:** `"admin1"`
`preferred_backup_window` (`string`) optional
Daily time range during which the backups happen
**Default value:** `"07:00-09:00"`
The window to perform maintenance in. Syntax: `ddd:hh24:mi-ddd:hh24:mi`.
**Default value:** `"Mon:22:00-Mon:23:00"`
`retention_period` (`number`) optional
Number of days to retain backups for
**Default value:** `5`
`skip_final_snapshot` (`bool`) optional
Determines whether a final DB snapshot is created before the DB cluster is deleted
**Default value:** `true`
`snapshot_identifier` (`string`) optional
Specifies whether or not to create this cluster from a snapshot. You can use either the name or ARN when specifying a DB cluster snapshot, or the ARN when specifying a DB snapshot
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`.
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
Amazon Resource Name (ARN) of the cluster
`cluster_name`
Cluster Identifier
`endpoint`
Endpoint of the DocumentDB cluster
`master_host`
DB master hostname
`master_username`
Username for the master DB user
`reader_endpoint`
A read-only endpoint of the DocumentDB cluster, automatically load-balanced across replicas
`replicas_host`
DB replicas hostname
`security_group_arn`
ARN of the DocumentDB cluster Security Group
`security_group_id`
ID of the DocumentDB cluster Security Group
`security_group_name`
Name of the DocumentDB cluster Security Group
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 5.29.0, < 6.0.0`
- `random`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 5.29.0, < 6.0.0`
- `random`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`documentdb_cluster` | 0.30.2 | [`cloudposse/documentdb-cluster/aws`](https://registry.terraform.io/modules/cloudposse/documentdb-cluster/aws/0.30.2) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.master_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.master_username`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`random_password.master_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
## Data Sources
The following data sources are used by this module:
---
## dynamodb
This component is responsible for provisioning a DynamoDB table.
## Usage
**Stack Level**: Regional
Here is an example snippet for how to use this component:
```yaml
components:
  terraform:
    dynamodb:
      backend:
        s3:
          workspace_key_prefix: dynamodb
      vars:
        enabled: true
        hash_key: HashKey
        range_key: RangeKey
        billing_mode: PAY_PER_REQUEST
        autoscaler_enabled: false
        encryption_enabled: true
        point_in_time_recovery_enabled: true
        streams_enabled: false
        ttl_enabled: false
```
## Variables
### Required Variables
`hash_key` (`string`) required
DynamoDB table Hash Key
`region` (`string`) required
AWS Region.
### Optional Variables
`autoscale_max_read_capacity` (`number`) optional
DynamoDB autoscaling max read capacity
**Default value:** `20`
Additional DynamoDB attributes in the form of a list of mapped values
**Type:**
```hcl
list(object({
  name = string
  type = string
}))
```
**Default value:** `[ ]`
Additional global secondary indexes in the form of a list of mapped values
**Type:**
```hcl
list(object({
  hash_key           = string
  name               = string
  non_key_attributes = list(string)
  projection_type    = string
  range_key          = string
  read_capacity      = number
  write_capacity     = number
}))
```
**Default value:** `[ ]`
`hash_key_type` (`string`) optional
Hash Key type, which must be a scalar type: `S`, `N`, or `B` for String, Number or Binary data, respectively.
**Default value:** `"S"`
`import_table` optional
Import Amazon S3 data into a new table.
**Type:**
```hcl
object({
  # Valid values are GZIP, ZSTD and NONE
  input_compression_type = optional(string, null)
  # Valid values are CSV, DYNAMODB_JSON, and ION.
  input_format = string
  input_format_options = optional(object({
    csv = object({
      delimiter   = string
      header_list = list(string)
    })
  }), null)
  s3_bucket_source = object({
    bucket       = string
    bucket_owner = optional(string)
    key_prefix   = optional(string)
  })
})
```
**Default value:** `null`
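A minimal sketch of `import_table` loading gzipped CSV data from S3 (the bucket name and key prefix are hypothetical):

```yaml
vars:
  import_table:
    input_format: CSV
    input_compression_type: GZIP
    input_format_options:
      csv:
        delimiter: ","
        header_list:
          - HashKey
          - RangeKey
    s3_bucket_source:
      bucket: example-dynamodb-import # hypothetical
      key_prefix: exports/
```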
`local_secondary_index_map` optional
Additional local secondary indexes in the form of a list of mapped values
**Type:**
```hcl
list(object({
  name               = string
  non_key_attributes = list(string)
  projection_type    = string
  range_key          = string
}))
```
**Default value:** `[ ]`
The ARN of the CMK that should be used for the AWS KMS encryption. This attribute should only be specified if the key is different from the default DynamoDB CMK, alias/aws/dynamodb.
**Default value:** `null`
`stream_view_type` (`string`) optional
When an item in the table is modified, what information is written to the stream
**Default value:** `""`
Set to false to disable DynamoDB table TTL
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`.
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`global_secondary_index_names`
DynamoDB global secondary index names
`hash_key`
DynamoDB table hash key
`local_secondary_index_names`
DynamoDB local secondary index names
`range_key`
DynamoDB table range key
`table_arn`
DynamoDB table ARN
`table_id`
DynamoDB table ID
`table_name`
DynamoDB table name
`table_stream_arn`
DynamoDB table stream ARN
`table_stream_label`
DynamoDB table stream label
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dynamodb_table` | 0.37.0 | [`cloudposse/dynamodb/aws`](https://registry.terraform.io/modules/cloudposse/dynamodb/aws/0.37.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## ec2-client-vpn
This component is responsible for provisioning VPN Client Endpoints.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component. This component should only be applied once, as the resources it creates are regional. This is typically done via the corp stack (e.g. `uw2-corp.yaml`), because a VPC endpoint requires a VPC, and the network stack does not have one.
```yaml
components:
  terraform:
    ec2-client-vpn:
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        client_cidr: 10.100.0.0/10
        logging_stream_name: client_vpn
        logging_enabled: true
        retention_in_days: 7
        organization_name: acme
        split_tunnel: true
        availability_zones:
          - us-west-2a
          - us-west-2b
          - us-west-2c
        associated_security_group_ids: []
        additional_routes:
          - destination_cidr_block: 0.0.0.0/0
            description: Internet Route
        authorization_rules:
          - name: Internet Rule
            authorize_all_groups: true
            description: Allows routing to the internet
            target_network_cidr: 0.0.0.0/0
```
### Deploying
NOTE: This module uses the `aws_ec2_client_vpn_route` resource, which throws an error if too many API calls come from a single host. If you hit one of the errors below, re-run the Terraform command; it usually takes 3 deploys (or destroys) to complete.
Error on create (see https://github.com/hashicorp/terraform-provider-aws/issues/19750):
```
ConcurrentMutationLimitExceeded: Cannot initiate another change for this endpoint at this time. Please try again later.
```
Error on destroy (see https://github.com/hashicorp/terraform-provider-aws/issues/16645):
```
timeout while waiting for resource to be gone (last state: 'deleting', timeout: 1m0s)
```
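Because both errors are transient, one option is to wrap the deploy in a small retry loop rather than re-running it by hand (a sketch; the `atmos` command in the trailing comment is an assumption about your workflow):

```shell
# Re-run a command until it succeeds, up to a fixed number of attempts.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i of $attempts failed; retrying..." >&2
    i=$((i + 1))
    sleep 5
  done
  return 1
}

# Hypothetical usage:
# retry 3 atmos terraform deploy ec2-client-vpn -s <stack>
```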
### Testing
NOTE: The `GoogleIDPMetadata-cloudposse.com.xml` in this repo is equivalent to the one in the `sso` component and is used for testing. This component can only specify a single SAML document. The customer SAML XML should be placed in this directory side-by-side with the Cloud Posse SAML XML.
Prior to testing, the component needs to be deployed, and the AWS client app needs to be set up by the IdP admin; otherwise the following steps will result in an error similar to `app_not_configured_for_user`.
1. Deploy the component in a regional account with a VPC like `ue2-corp`.
1. Copy the contents of `client_configuration` into a file called `client_configuration.ovpn`
1. Download AWS client VPN `brew install --cask aws-vpn-client`
1. Launch the VPN
1. File > Manage Profiles to open the Manage Profiles window
1. Click Add Profile to open the Add Profile window
1. Set the display name e.g. `--`
1. Click the folder icon and find the file that was saved in a previous step
1. Click Add Profile to save the profile
1. Click Done to close the Manage Profiles window
1. Under "Ready to connect.", choose the profile, and click Connect
A browser will launch and allow you to connect to the VPN.
1. Make a note of where this component is deployed
1. Ensure that the resource to connect to is in a VPC that is connected by the transit gateway
1. Ensure that the resource to connect to contains a security group with a rule that allows ingress from where the
client vpn is deployed (e.g. `ue2-corp`)
1. Use `nmap` to test if the port is `open`. If the port is `filtered` then it's not open.
```console
nmap -p <port> <hostname>
```
Successful tests have been seen with MSK and RDS.
## Variables
### Required Variables
`authorization_rules` required
List of objects describing the authorization rules for the Client VPN. Each Target Network CIDR range given will be used to create an additional route attached to the Client VPN endpoint with the same Description.
**Type:**
```hcl
list(object({
  name                 = string
  access_group_id      = string
  authorize_all_groups = bool
  description          = string
  target_network_cidr  = string
}))
```
`client_cidr` (`string`) required
Network CIDR to use for clients
`logging_stream_name` (`string`) required
Name of the stream used for logging
`organization_name` (`string`) required
Name of organization to use in private certificate
`region` (`string`) required
AWS Region. VPN endpoints are region-specific; this identifies the region.
### Optional Variables
`associated_security_group_ids` (`list(string)`) optional
List of security groups to attach to the Client VPN network associations
**Default value:** `[ ]`
`authentication_type` (`string`) optional
One of `certificate-authentication` or `federated-authentication`
**Default value:** `"certificate-authentication"`
`ca_common_name` (`string`) optional
Unique Common Name for CA self-signed certificate
**Default value:** `null`
`dns_servers` (`list(string)`) optional
Information about the DNS servers to be used for DNS resolution. A Client VPN endpoint can have up to two DNS servers. If no DNS server is specified, the DNS address of the VPC that is to be associated with Client VPN endpoint is used as the DNS server.
**Default value:** `[ ]`
`export_client_certificate` (`bool`) optional
Flag to determine whether to export the client certificate with the VPN configuration
**Default value:** `true`
`logging_enabled` (`bool`) optional
Enables or disables Client VPN CloudWatch logging.
**Default value:** `false`
`retention_in_days` (`number`) optional
Number of days you want to retain log events in the log group
**Default value:** `30`
`root_common_name` (`string`) optional
Unique Common Name for Root self-signed certificate
**Default value:** `null`
`saml_metadata_document` (`string`) optional
Optional SAML metadata document. Must include this or `saml_provider_arn`
**Default value:** `null`
`saml_provider_arn` (`string`) optional
Optional SAML provider ARN. Must include this or `saml_metadata_document`
**Default value:** `null`
`server_common_name` (`string`) optional
Unique Common Name for Server self-signed certificate
**Default value:** `null`
`session_timeout_hours` (`string`) optional
The maximum session duration time in hours. Valid values: 8, 10, 12, 24. Default is 24 hours.
**Default value:** `"24"`
`split_tunnel` (`bool`) optional
Indicates whether split-tunnel is enabled on VPN endpoint. Default value is false.
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
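For example, a hypothetical descriptor that joins the normalized namespace and stage labels could be configured like this (the descriptor name and format are illustrative):

```hcl
descriptor_formats = {
  # Produces e.g. "acme-prod" in the `descriptors` output map
  account_name = {
    format = "%v-%v"
    labels = ["namespace", "stage"]
  }
}
```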
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`client_configuration`
VPN Client Configuration file (.ovpn) contents that can be imported into AWS client vpn
`full_client_configuration`
Client configuration including client certificate and private key for mutual authentication
`vpn_endpoint_arn`
The ARN of the Client VPN Endpoint Connection.
`vpn_endpoint_dns_name`
The DNS Name of the Client VPN Endpoint Connection.
`vpn_endpoint_id`
The ID of the Client VPN Endpoint Connection.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `awsutils`, version: `>= 0.11.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`ec2_client_vpn` | 2.0.0 | [`cloudposse/ec2-client-vpn/aws`](https://registry.terraform.io/modules/cloudposse/ec2-client-vpn/aws/2.0.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## ec2-instance
This component is responsible for provisioning a single EC2 instance.
## Usage
**Stack Level**: Regional
The typical stack configuration for this component is as follows:
```yaml
components:
terraform:
ec2-instance:
vars:
enabled: true
name: ec2
```
## Variables
### Required Variables
`region` (`string`) required
AWS region
### Optional Variables
`ami_filters` optional
A list of AMI filters for finding the latest AMI
**Type:**
```hcl
list(object({
name = string
values = list(string)
}))
```
**Default value:**
```hcl
[
{
"name": "architecture",
"values": [
"x86_64"
]
},
{
"name": "virtualization-type",
"values": [
"hvm"
]
}
]
```
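For instance, to select ARM (Graviton) images instead of the default x86_64 filter, `ami_filters` could be overridden in the stack vars; this is a sketch, and should be paired with a matching ARM instance type:

```yaml
components:
  terraform:
    ec2-instance:
      vars:
        instance_type: t4g.micro
        ami_filters:
          - name: architecture
            values: ["arm64"]
          - name: virtualization-type
            values: ["hvm"]
```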
`ami_name_regex` (`string`) optional
The regex used to match the latest AMI to be used for the EC2 instance.
**Default value:** `"^amzn2-ami-hvm.*"`
`ami_owner` (`string`) optional
The owner of the AMI used for the EC2 instance.
**Default value:** `"amazon"`
`instance_type` (`string`) optional
The instance family to use for the EC2 instance
**Default value:** `"t3a.micro"`
`security_group_rules` (`list(any)`) optional
A list of maps of Security Group rules.
Each map's values correspond to the arguments of the `aws_security_group_rule` resource.
To get more info see [security_group_rule](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule).
**Default value:**
```hcl
[
{
"cidr_blocks": [
"0.0.0.0/0"
],
"from_port": 0,
"protocol": "-1",
"to_port": 65535,
"type": "egress"
}
]
```
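To allow inbound SSH from an internal network in addition to the default egress rule, the list can be extended like this (the ingress CIDR is a placeholder; restrict it to your own network):

```yaml
security_group_rules:
  - type: egress
    from_port: 0
    to_port: 65535
    protocol: "-1"
    cidr_blocks: ["0.0.0.0/0"]
  - type: ingress
    from_port: 22
    to_port: 22
    protocol: tcp
    cidr_blocks: ["10.0.0.0/8"]  # placeholder CIDR
```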
`user_data` (`string`) optional
User data to be included with this EC2 instance
**Default value:** `"echo \"hello user data\""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`instance_id`
Instance ID
`private_ip`
Private IP of the instance
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `template`, version: `>= 2.2`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `template`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`ec2_instance` | 2.0.0 | [`cloudposse/ec2-instance/aws`](https://registry.terraform.io/modules/cloudposse/ec2-instance/aws/2.0.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_ami.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami) (data source)
- [`template_file.userdata`](https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/file) (data source)
---
## ecr
This component is responsible for provisioning repositories, lifecycle rules, and permissions for streamlined ECR usage.
This utilizes
[the roles-to-principals submodule](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/account-map/modules/roles-to-principals)
to assign accounts to various roles. It is also compatible with the
[GitHub Actions IAM Role mixin](https://github.com/cloudposse-terraform-components/mixins/blob/main/src/mixins/github-actions-iam-role/README-github-action-iam-role.md).
:::warning Older versions of the `eks-iam` component
Older versions of our reference architecture have an `eks-iam` component that needs to be updated to provide sufficient
IAM roles to allow pods to pull from ECR repos.
:::
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component. This component is normally only applied once as the resources
it creates are globally accessible, but you may want to create ECRs in multiple regions for redundancy. This is
typically provisioned via the stack for the "artifact" account (typically `auto`, `artifact`, or `corp`) in the primary
region.
```yaml
components:
terraform:
ecr:
vars:
ecr_user_enabled: false
enable_lifecycle_policy: true
max_image_count: 500
scan_images_on_push: true
protected_tags:
- prod
# Tag mutability supports: MUTABLE, IMMUTABLE, IMMUTABLE_WITH_EXCLUSION, MUTABLE_WITH_EXCLUSION
image_tag_mutability: IMMUTABLE_WITH_EXCLUSION
# When using *_WITH_EXCLUSION, specify exclusions to allow certain tags to be mutable
image_tag_mutability_exclusion_filter:
- filter: "latest"
filter_type: "WILDCARD"
- filter: "dev-"
filter_type: "WILDCARD"
images:
- infrastructure
- microservice-a
- microservice-b
- microservice-c
read_write_account_role_map:
identity:
- admin
- cicd
automation:
- admin
read_only_account_role_map:
corp: ["*"]
dev: ["*"]
prod: ["*"]
stage: ["*"]
```
## Variables
### Required Variables
`enable_lifecycle_policy` (`bool`) required
Enable/disable image lifecycle policy
`images` (`list(string)`) required
List of image names (ECR repo names) to create repos for
`max_image_count` (`number`) required
Max number of images to store. Old ones will be deleted to make room for new ones.
### Optional Variables
`ecr_user_enabled` (`bool`) optional
Enable/disable the provisioning of the ECR user (for CI/CD systems that don't support assuming IAM roles to access ECR, e.g. Codefresh)
**Default value:** `false`
`image_tag_mutability` (`string`) optional
The tag mutability setting for the repository. Must be one of: `MUTABLE`, `IMMUTABLE`, `IMMUTABLE_WITH_EXCLUSION`, or `MUTABLE_WITH_EXCLUSION`
**Default value:** `"MUTABLE"`
`image_tag_mutability_exclusion_filter` optional
List of exclusion filters for image tag mutability. Each filter object must contain 'filter' and 'filter_type' attributes. Requires AWS provider >= 6.8.0
**Type:**
```hcl
list(object({
filter = string
filter_type = optional(string, "WILDCARD")
}))
```
**Default value:** `[ ]`
`principals_lambda` (`list(string)`) optional
Principal account IDs of Lambdas allowed to consume ECR
**Default value:** `[ ]`
`protected_tags` (`list(string)`) optional
Tags to refrain from deleting
**Default value:** `[ ]`
`protected_tags_keep_count` (`number`) optional
Number of Image versions to keep for protected tags
**Default value:** `999999`
`pull_through_cache_rules` optional
Map of pull through cache rules to configure
**Type:**
```hcl
map(object({
registry = string
secret = optional(string, "")
}))
```
**Default value:** `{ }`
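As a sketch, pull-through cache rules for ECR Public and Docker Hub might look like the following; the secret name is hypothetical, and AWS requires pull-through cache credentials to be stored in a Secrets Manager secret whose name starts with `ecr-pullthroughcache/`:

```yaml
pull_through_cache_rules:
  ecr-public:
    registry: public.ecr.aws
  docker-hub:
    registry: registry-1.docker.io
    secret: ecr-pullthroughcache/docker-hub  # hypothetical secret name
```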
`scan_images_on_push` (`bool`) optional
Indicates whether images are scanned after being pushed to the repository
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ecr_repo_arn_map`
Map of image names to ARNs
`ecr_repo_url_map`
Map of image names to URLs
`ecr_user_arn`
ECR user ARN
`ecr_user_name`
ECR user name
`ecr_user_unique_id`
ECR user unique ID assigned by AWS
`repository_host`
ECR repository name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 6.8.0, < 7.0.0`
### Providers
- `aws`, version: `>= 6.8.0, < 7.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`ecr` | 1.0.1 | [`cloudposse/ecr/aws`](https://registry.terraform.io/modules/cloudposse/ecr/aws/1.0.1) | n/a
`full_access` | v1.537.1 | `github.com/cloudposse-terraform-components/aws-account-map//src/modules/roles-to-principals` | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`readonly_access` | v1.537.1 | `github.com/cloudposse-terraform-components/aws-account-map//src/modules/roles-to-principals` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ecr_pull_through_cache_rule.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecr_pull_through_cache_rule) (resource)
- [`aws_ecr_registry_policy.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecr_registry_policy) (resource)
- [`aws_iam_policy.ecr_user`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_user.ecr`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user) (resource)
- [`aws_iam_user_policy_attachment.ecr_user`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user_policy_attachment) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.ecr_user`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_secretsmanager_secret.cache_credentials`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret) (data source)
---
## ecs
This component is responsible for provisioning an ECS Cluster and associated load balancer.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
The following will create:
- an ECS cluster
- a load balancer with an ACM cert for `example.com`
- a Route 53 record on `*.example.com` pointing to the load balancer
```yaml
components:
terraform:
ecs:
settings:
spacelift:
workspace_enabled: true
vars:
name: ecs
enabled: true
acm_certificate_domain: example.com
route53_record_name: "*"
          # Records will be created in each zone
zone_names:
- example.com
capacity_providers_fargate: true
capacity_providers_fargate_spot: true
capacity_providers_ec2:
default:
instance_type: t3.medium
max_size: 2
alb_configuration:
public:
internal_enabled: false
# resolves to *.public-platform.....
route53_record_name: "*.public-platform"
additional_certs:
- "my-vanity-domain.com"
private:
internal_enabled: true
route53_record_name: "*.private-platform"
additional_certs:
- "my-vanity-domain.com"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`acm_certificate_domain` (`string`) optional
Domain to get the ACM cert to use on the ALB.
**Default value:** `null`
`capacity_providers_fargate_spot` (`bool`) optional
Use the FARGATE_SPOT capacity provider
**Default value:** `false`
`container_insights_mode` (`string`) optional
Container insights mode. Valid values: 'enhanced', 'enabled', 'disabled'. NOTE: `enhanced` is more costly, but as described by AWS, it 'provides detailed health and performance metrics at task and container level in addition to aggregated metrics at cluster and service level. Enables easier drill downs for faster problem isolation and troubleshooting.' (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-container-insights.html)
**Default value:** `"enabled"`
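For example, to opt into the more detailed (and more costly) enhanced insights, you could set the mode in the stack vars; this is a minimal sketch layered onto the usage example above:

```yaml
components:
  terraform:
    ecs:
      vars:
        # "enhanced" adds task- and container-level metrics on top of the
        # aggregated cluster- and service-level metrics of "enabled"
        container_insights_mode: enhanced
```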
`dns_delegated_environment_name` (`string`) optional
Use this environment name to read from the remote state to get the dns_delegated zone ID
**Default value:** `"gbl"`
`dns_delegated_stage_name` (`string`) optional
Use this stage name to read from the remote state to get the dns_delegated zone ID
**Default value:** `null`
`internal_enabled` (`bool`) optional
Whether to create an internal load balancer for services in this cluster
**Default value:** `false`
`maintenance_page_path` (`string`) optional
The path from this directory to the text/html page to use as the maintenance page. Must be within 1024 characters
**Default value:** `"templates/503_example.html"`
`route53_enabled` (`bool`) optional
Whether or not to create a route53 record for the ALB
**Default value:** `true`
`route53_record_name` (`string`) optional
The route53 record name
**Default value:** `"*"`
`vpc_component_name` (`string`) optional
The name of a VPC component
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`alb`
ALB outputs
`cluster_arn`
ECS cluster ARN
`cluster_name`
ECS Cluster Name
`private_subnet_ids`
Private subnet ids
`public_subnet_ids`
Public subnet ids
`records`
Record names
`security_group_id`
Security group id
`vpc_id`
VPC ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>=6.0.0`
### Providers
- `aws`, version: `>=6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`alb` | 2.4.0 | [`cloudposse/alb/aws`](https://registry.terraform.io/modules/cloudposse/alb/aws/2.4.0) | n/a
`cluster` | 2.0.0 | [`cloudposse/ecs-cluster/aws`](https://registry.terraform.io/modules/cloudposse/ecs-cluster/aws/2.0.0) | n/a
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`target_group_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | This is used due to the short limit on target group names i.e. 32 characters
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_lb_listener_certificate.additional_certs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_listener_certificate) (resource)
- [`aws_route53_record.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record) (resource)
- [`aws_security_group.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) (resource)
- [`aws_security_group_rule.egress`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) (resource)
- [`aws_security_group_rule.ingress_cidr`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) (resource)
- [`aws_security_group_rule.ingress_security_groups`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_acm_certificate.additional_certs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/acm_certificate) (data source)
- [`aws_acm_certificate.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/acm_certificate) (data source)
---
## ecs-adot-collector
This component deploys the AWS Distro for OpenTelemetry (ADOT) collector as an ECS service.
It collects metrics from ECS tasks and forwards them to Amazon Managed Prometheus for visualization in Grafana.
## Usage
**Stack Level**: Regional
This component is the ECS counterpart to the EKS Prometheus scraper/Promtail setup for Grafana monitoring.
It runs the ADOT collector as a Fargate task that:
- Scrapes Prometheus metrics from ECS services via service discovery
- Collects ECS container metrics
- Forwards all metrics to Amazon Managed Prometheus
### Prerequisites
- An ECS cluster deployed via the `ecs` component
- Amazon Managed Prometheus workspace deployed via the `managed-prometheus/workspace` component
- VPC with private subnets
### Example Configuration
```yaml
components:
  terraform:
    ecs-adot-collector:
      vars:
        enabled: true
        name: ecs-adot-collector
        # ADOT collector image
        adot_image: "public.ecr.aws/aws-observability/aws-otel-collector:latest"
        # Task resources
        task_cpu: 256
        task_memory: 512
        desired_count: 1
        # Logging
        log_retention_days: 30
        # Prometheus scraping configuration
        scrape_interval: "30s"
        # ECS service discovery - discover and scrape all ECS tasks
        ecs_service_discovery_enabled: true
        ecs_service_discovery_port: 9090
        # Network configuration
        assign_public_ip: false
        # Dependencies - looked up from current stack
        prometheus_workspace_endpoint: !terraform.state prometheus workspace_endpoint
        ecs_cluster_name: !terraform.state ecs/cluster cluster_name
        vpc_id: !terraform.state vpc vpc_id
        subnet_ids: !terraform.state vpc private_subnet_ids
```
### Custom Scrape Configurations
You can add additional scrape targets beyond ECS service discovery:
```yaml
vars:
  scrape_configs:
    - job_name: "custom-app"
      targets:
        - "app.internal:9090"
      metrics_path: "/metrics"
      scrape_interval: "15s"
```
## Variables
### Required Variables
`ecs_cluster_name` (`string`) required
The name of the ECS cluster to deploy the ADOT collector to
Additional security group IDs to attach to the ADOT collector task
**Default value:** `[ ]`
`task_cpu` (`number`) optional
CPU units for the ADOT collector task
**Default value:** `256`
`task_memory` (`number`) optional
Memory (MiB) for the ADOT collector task
**Default value:** `512`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cloudwatch_log_group_name`
The name of the CloudWatch log group for ADOT collector logs
`ecs_service_arn`
The ARN of the ECS service running the ADOT collector
`ecs_service_name`
The name of the ECS service running the ADOT collector
`id`
The ID of this component deployment
`security_group_id`
The ID of the security group for the ADOT collector
`task_definition_arn`
The ARN of the ADOT collector task definition
`task_execution_role_arn`
The ARN of the IAM role used for ECS task execution
`task_role_arn`
The ARN of the IAM role used by the ADOT collector task
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudwatch_log_group.adot`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group) (resource)
- [`aws_ecs_service.adot`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service) (resource)
- [`aws_ecs_task_definition.adot`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition) (resource)
- [`aws_iam_role.task`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role.task_execution`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role_policy.ecs_service_discovery`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) (resource)
- [`aws_iam_role_policy_attachment.prometheus_remote_write`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_iam_role_policy_attachment.task_execution`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_security_group.adot`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) (resource)
## Data Sources
This module does not use any data sources.
---
## ecs-service
This component is responsible for creating an ECS service.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
# stacks/catalog/ecs-service/defaults.yaml
components:
  terraform:
    ecs-service/defaults:
      metadata:
        component: ecs-service
        type: abstract
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        public_lb_enabled: false
        ecr_stage_name: mgmt-automation
        task:
          launch_type: FARGATE
          network_mode: awsvpc
          desired_count: 1
          ignore_changes_desired_count: true
          ignore_changes_task_definition: false
          assign_public_ip: false
          propagate_tags: SERVICE
          wait_for_steady_state: true
          circuit_breaker_deployment_enabled: true
          circuit_breaker_rollback_enabled: true
```
This will launch Google's `echoserver` using an external image from GCR.
NOTE: Usage of `image` instead of `ecr_image`.
```yaml
# stacks/catalog/ecs-service/echoserver.yaml
import:
  - catalog/ecs-service/defaults
components:
  terraform:
    ecs/platform/echoserver/service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/defaults
      vars:
        enabled: true
        name: echoserver
        public_lb_enabled: false
        cluster_attributes: [platform]
        ## Example task_exec_iam_policy
        # task_exec_iam_policy:
        #   - policy_id: "EcsServiceEchoServer"
        #     statements:
        #       - sid: "EcsServiceEchoServer"
        #         effect: "Allow"
        #         actions:
        #           - "kms:Decrypt"
        #         resources:
        #           - "*"
        containers:
          service:
            name: "echoserver"
            image: gcr.io/google_containers/echoserver:1.10
            port_mappings:
              - containerPort: 8080
                hostPort: 8080
                protocol: tcp
        task:
          desired_count: 1
          task_memory: 512
          task_cpu: 256
```
This will launch a `kong` service using an ECR image from the `mgmt-automation` account.
NOTE: Usage of `ecr_image` instead of `image`.
```yaml
import:
  - catalog/ecs-service/defaults
components:
  terraform:
    ecs/b2b/kong/service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/defaults
      vars:
        name: kong
        public_lb_enabled: true
        cluster_attributes: [b2b]
        containers:
          service:
            name: "kong-gateway"
            ecr_image: kong:latest
            map_environment:
              KONG_DECLARATIVE_CONFIG: /home/kong/production.yml
            port_mappings:
              - containerPort: 8000
                hostPort: 8000
                protocol: tcp
        task:
          desired_count: 1
          task_memory: 512
          task_cpu: 256
```
This will launch an `httpd` service using an external image from Docker Hub.
NOTE: Usage of `image` instead of `ecr_image`.
```yaml
# stacks/catalog/ecs-service/httpd.yaml
import:
  - catalog/ecs-service/defaults
components:
  terraform:
    ecs/platform/httpd/service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/defaults
      vars:
        enabled: true
        name: httpd
        public_lb_enabled: true
        cluster_attributes: [platform]
        containers:
          service:
            name: "Hello"
            image: httpd:2.4
            port_mappings:
              - containerPort: 80
                hostPort: 80
                protocol: tcp
            command:
              - '/bin/sh -c "echo ''<html><head><title>Amazon ECS Sample App</title></head><body><h1>Amazon ECS Sample App</h1><h2>Congratulations!</h2><p>Your application is now running on a container in Amazon ECS.</p></body></html>'' > /usr/local/apache2/htdocs/index.html && httpd-foreground"'
            entrypoint: ["sh", "-c"]
        task:
          desired_count: 1
          task_memory: 512
          task_cpu: 256
```
#### Other Domains
This component supports alternate service names for your ECS Service through a couple of variables:
- `vanity_domain` & `vanity_alias` - This will create a route to the service in the listener rules of the ALB. This will
also create a Route 53 alias record in the hosted zone in this account. The hosted zone is looked up by the
`vanity_domain` input.
- `additional_targets` - This will create a route to the service in the listener rules of the ALB. This will not create
a Route 53 alias record.
Examples:
```yaml
ecs/platform/service/echo-server:
  vars:
    vanity_domain: "dev-acme.com"
    vanity_alias:
      - "echo-server.dev-acme.com"
    additional_targets:
      - "echo.acme.com"
```
This then creates the following listener rules:
```text
HTTP Host Header is
echo-server.public-platform.use2.dev.plat.service-discovery.com
OR echo-server.dev-acme.com
OR echo.acme.com
```
It will also create a record in Route 53 pointing `echo-server.dev-acme.com` to the ALB, so
`echo-server.dev-acme.com` should resolve.
We can then create a pointer to this service in the `acme.com` hosted zone.
```yaml
dns-primary:
  vars:
    domain_names:
      - acme.com
    record_config:
      - root_zone: acme.com
        name: echo.
        type: CNAME
        ttl: 60
        records:
          - echo-server.dev-acme.com
```
This will create a CNAME record in the `acme.com` hosted zone that points `echo.acme.com` to `echo-server.dev-acme.com`.
### EFS
This ECS service supports EFS. You can use either `efs_volumes` or `efs_component_volumes` in your task
definition.
The example below shows both: `efs_component_volumes`, which remotely looks up the EFS component and uses its
`efs_id` to mount the volume, and `efs_volumes`, which takes a file system ID directly.
```yaml
components:
  terraform:
    ecs-services/my-service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-services/defaults
      vars:
        containers:
          service:
            name: app
            image: my-image:latest
            log_configuration:
              logDriver: awslogs
              options: {}
            port_mappings:
              - containerPort: 8080
                hostPort: 8080
                protocol: tcp
            mount_points:
              - containerPath: "/var/lib/"
                sourceVolume: "my-volume-mount"
        task:
          efs_component_volumes:
            - name: "my-volume-mount"
              host_path: null
              efs_volume_configuration:
                - component: efs/my-volume-mount
                  root_directory: "/var/lib/"
                  transit_encryption: "ENABLED"
                  transit_encryption_port: 2999
                  authorization_config: []
          efs_volumes:
            - name: "my-volume-mount-2"
              host_path: null
              efs_volume_configuration:
                - file_system_id: "fs-1234"
                  root_directory: "/var/lib/"
                  transit_encryption: "ENABLED"
                  transit_encryption_port: 2998
                  authorization_config: []
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`additional_lb_target_groups` optional
Additional load balancer target group configurations for registering multiple container ports.
This allows you to register sidecar containers to separate target groups.
Each entry requires:
- container_name: Name of the container to register
- container_port: Port on the container to register
- target_group_arn: ARN of the target group. Each additional port must specify a unique target group ARN
**Type:**
```hcl
list(object({
  container_name   = string
  container_port   = number
  target_group_arn = string
}))
```
**Default value:** `[ ]`
A list of additional security group IDs to add to the service
**Default value:** `[ ]`
`additional_targets` (`list(string)`) optional
Additional target routes to add to the ALB that point to this service. The only difference between this and `var.vanity_alias` is `var.vanity_alias` will create an alias record in Route 53 in the hosted zone in this account as well. `var.additional_targets` only adds the listener route to this service's target group.
**Default value:** `[ ]`
`alb_configuration` (`string`) optional
The configuration to use for the ALB, specifying which cluster alb configuration to use
**Default value:** `"default"`
`alb_name` (`string`) optional
The name of the ALB this service should attach to
**Default value:** `null`
`autoscaling_dimension` (`string`) optional
The dimension to use to decide to autoscale
**Default value:** `"cpu"`
`autoscaling_enabled` (`bool`) optional
Should this service autoscale using SNS alarms
**Default value:** `true`
`chamber_service` (`string`) optional
SSM parameter service name for use with chamber. This is used in chamber_format where /$chamber_service/$name/$container_name/$parameter would be the default.
**Default value:** `"ecs-service"`
`cluster_attributes` (`list(string)`) optional
The attributes of the cluster name e.g. if the full name is `namespace-tenant-environment-dev-ecs-b2b` then the `cluster_name` is `ecs` and this value should be `b2b`.
**Default value:** `[ ]`
`containers` optional
Inputs for the container definition module.
`user`: The user to run as inside the container. Can be any of these formats: user, user:group, uid, uid:gid, user:gid, uid:group. The default (null) will use the container's configured `USER` directive or root if not set.
**Type:**
```hcl
map(object({
  name                     = string
  ecr_image                = optional(string)
  image                    = optional(string)
  memory                   = optional(number)
  memory_reservation       = optional(number)
  cpu                      = optional(number)
  essential                = optional(bool, true)
  readonly_root_filesystem = optional(bool, null)
  privileged               = optional(bool, null)
  user                     = optional(string, null)
  container_depends_on = optional(list(object({
    containerName = string
    condition     = string # START, COMPLETE, SUCCESS, HEALTHY
  })), null)
  port_mappings = optional(list(object({
    containerPort = number
    hostPort      = optional(number)
    protocol      = optional(string)
    name          = optional(string)
    appProtocol   = optional(string)
  })), [])
  command    = optional(list(string), null)
  entrypoint = optional(list(string), null)
  healthcheck = optional(object({
    command     = list(string)
    interval    = number
    retries     = number
    startPeriod = number
    timeout     = number
  }), null)
  ulimits = optional(list(object({
    name      = string
    softLimit = number
    hardLimit = number
  })), null)
  log_configuration = optional(object({
    logDriver = string
    options   = optional(map(string), {})
  }))
  docker_labels   = optional(map(string), null)
  map_environment = optional(map(string), {})
  map_secrets     = optional(map(string), {})
  volumes_from = optional(list(object({
    sourceContainer = string
    readOnly        = bool
  })), null)
  mount_points = optional(list(object({
    sourceVolume  = optional(string)
    containerPath = optional(string)
    readOnly      = optional(bool)
  })), [])
}))
```
**Default value:** `{ }`
The minimum percentage of CPU utilization average
**Default value:** `20`
`custom_security_group_rules` optional
The list of custom security group rules to add to the service security group
**Type:**
```hcl
list(object({
  type                     = string
  from_port                = number
  to_port                  = number
  protocol                 = string
  cidr_blocks              = optional(list(string))
  description              = optional(string)
  source_security_group_id = optional(string)
  prefix_list_ids          = optional(list(string))
  security_group_id        = optional(string)
}))
```
**Default value:** `[ ]`
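As a sketch, a rule allowing ingress on a hypothetical metrics port from an internal CIDR could look like the following; the port, CIDR, and description are illustrative, not defaults:

```yaml
vars:
  custom_security_group_rules:
    - type: ingress
      from_port: 9090
      to_port: 9090
      protocol: tcp
      cidr_blocks: ["10.0.0.0/8"]
      description: "Allow metrics scrapes from the internal network"
```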
`datadog_agent_sidecar_enabled` (`bool`) optional
Enable the Datadog Agent Sidecar
**Default value:** `false`
Datadog logs can be sent via cloudwatch logs (and lambda) or firelens, set this to true to enable firelens via a sidecar container for fluentbit
**Default value:** `false`
A list of the GitHub repositories that are allowed to assume this role from GitHub Actions. For example,
["cloudposse/infra-live"]. Can contain "*" as wildcard.
If org part of repo name is omitted, "cloudposse" will be assumed.
**Default value:** `[ ]`
The number of consecutive health check successes required before healthy
**Default value:** `2`
`health_check_interval` (`number`) optional
The duration in seconds in between health checks
**Default value:** `15`
`health_check_matcher` (`string`) optional
The HTTP response codes to indicate a healthy check
**Default value:** `"200-404"`
`health_check_path` (`string`) optional
The destination for the health check request
**Default value:** `"/health"`
`health_check_port` (`string`) optional
The port to use to connect with the target. Valid values are either ports 1-65535, or `traffic-port`. Defaults to `traffic-port`
**Default value:** `"traffic-port"`
`health_check_protocol` (`string`) optional
The protocol to use to connect with the target. Defaults to HTTP. Not applicable when target_type is lambda
**Default value:** `"HTTP"`
`health_check_timeout` (`number`) optional
The amount of time to wait in seconds before failing a health check request
**Default value:** `10`
The number of consecutive health check failures required before unhealthy
**Default value:** `2`
`http_protocol` (`string`) optional
Which http protocol to use in outputs and SSM url params. This value is ignored if a load balancer is not used. If it is `null`, the redirect value from the ALB determines the protocol.
**Default value:** `null`
`iam_policy_enabled` (`bool`) optional
If set to true will create IAM policy in AWS
**Default value:** `false`
`iam_policy_statements` (`any`) optional
Map of IAM policy statements to use in the policy. This can be used with or instead of the `var.iam_source_json_url`.
**Default value:** `{ }`
`kinesis_enabled` (`bool`) optional
Enable Kinesis
**Default value:** `false`
`kms_alias_name_ssm` (`string`) optional
KMS alias name for SSM
**Default value:** `"alias/aws/ssm"`
`kms_key_alias` (`string`) optional
ID of KMS key
**Default value:** `"default"`
`lb_catch_all` (`bool`) optional
Should this service act as catch all for all subdomain hosts of the vanity domain
**Default value:** `false`
`logs` (`any`) optional
Feed inputs into cloudwatch logs module
**Default value:** `{ }`
The minimum percentage of Memory utilization average
**Default value:** `20`
`nlb_name` (`string`) optional
The name of the NLB this service should attach to
**Default value:** `null`
`port` (`number`) optional
The port for the created ALB target group. Defaults to 80
**Default value:** `80`
`protocol` (`string`) optional
The protocol for the created ALB target group. Defaults to HTTP
**Default value:** `"HTTP"`
`rds_name` (`any`) optional
The name of the RDS database this service should allow access to
**Default value:** `null`
`retention_period` (`number`) optional
Length of time data records are accessible after they are added to the stream
**Default value:** `48`
`s3_mirror_name` (`string`) optional
The name of the S3 mirror component
**Default value:** `null`
`scale_down_step_adjustments` optional
List of step adjustments for scale down policy
**Type:**
```hcl
list(object({
metric_interval_lower_bound = optional(number)
metric_interval_upper_bound = optional(number)
scaling_adjustment = number
}))
```
**Default value:**
```hcl
[
{
"metric_interval_lower_bound": null,
"metric_interval_upper_bound": 0,
"scaling_adjustment": -1
}
]
```
`scale_up_step_adjustments` optional
List of step adjustments for scale up policy
**Type:**
```hcl
list(object({
metric_interval_lower_bound = optional(number)
metric_interval_upper_bound = optional(number)
scaling_adjustment = number
}))
```
**Default value:**
```hcl
[
{
"metric_interval_lower_bound": 0,
"metric_interval_upper_bound": null,
"scaling_adjustment": 1
}
]
```
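The step bounds are offsets relative to the scaling alarm's threshold. As a sketch (the bound values here are assumptions for illustration), a more aggressive scale-up policy could add a second step that adds more tasks the further utilization exceeds the threshold:

```hcl
scale_up_step_adjustments = [
  {
    # from threshold to threshold + 10: add 1 task
    metric_interval_lower_bound = 0
    metric_interval_upper_bound = 10
    scaling_adjustment          = 1
  },
  {
    # above threshold + 10: add 3 tasks
    metric_interval_lower_bound = 10
    metric_interval_upper_bound = null
    scaling_adjustment          = 3
  }
]
```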
`service_connect_configurations` optional
The list of Service Connect configurations.
See `service_connect_configuration` docs https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service#service_connect_configuration
**Type:**
```hcl
list(object({
enabled = bool
namespace = optional(string, null)
log_configuration = optional(object({
log_driver = string
options = optional(map(string), null)
secret_option = optional(list(object({
name = string
value_from = string
})), [])
}), null)
service = optional(list(object({
client_alias = list(object({
dns_name = string
port = number
}))
discovery_name = optional(string, null)
ingress_port_override = optional(number, null)
port_name = string
})), [])
}))
```
**Default value:** `[ ]`
`service_registries` optional
The list of Service Registries.
See `service_registries` docs https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service#service_registries
**Type:**
```hcl
list(object({
namespace = string
registry_arn = optional(string)
port = optional(number)
container_name = optional(string)
container_port = optional(number)
}))
```
**Default value:** `[ ]`
`shard_count` (`number`) optional
Number of shards that the stream will use
**Default value:** `1`
`shard_level_metrics` (`list(string)`) optional
List of shard-level CloudWatch metrics which can be enabled for the stream
**Default value:**
```hcl
[
"IncomingBytes",
"IncomingRecords",
"IteratorAgeMilliseconds",
"OutgoingBytes",
"OutgoingRecords",
"ReadProvisionedThroughputExceeded",
"WriteProvisionedThroughputExceeded"
]
```
`ssm_enabled` (`bool`) optional
If `true` create SSM keys for the database user and password.
**Default value:** `false`
`ssm_key_format` (`string`) optional
SSM path format. The values will be used in the following order: `var.ssm_key_prefix`, `var.name`, `var.ssm_key_*`
**Default value:** `"/%v/%v/%v"`
`ssm_key_prefix` (`string`) optional
SSM path prefix. Omit the leading forward slash `/`.
**Default value:** `"ecs-service"`
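For example (the service and key names here are assumptions), with the default `ssm_key_format` and `ssm_key_prefix`, a key named `db_password` for a service named `myapp` would resolve to:

```hcl
# format("/%v/%v/%v", "ecs-service", "myapp", "db_password")
# => "/ecs-service/myapp/db_password"
```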
`stickiness_cookie_duration` (`number`) optional
The time period, in seconds, during which requests from a client should be routed to the same target. After this time period expires, the load balancer-generated cookie is considered stale. The range is 1 second to 1 week (604800 seconds). The default value is 1 day (86400 seconds)
**Default value:** `86400`
`stickiness_enabled` (`bool`) optional
Boolean to enable / disable `stickiness`. Default is `true`
**Default value:** `true`
`stickiness_type` (`string`) optional
The type of sticky sessions. The only current possible value is `lb_cookie`
**Default value:** `"lb_cookie"`
`stream_mode` (`string`) optional
Stream mode details for the Kinesis stream
**Default value:** `"PROVISIONED"`
A map of name to IAM Policy ARNs to attach to the generated task execution role.
The names are arbitrary, but must be known at plan time. The purpose of the name
is so that changes to one ARN do not cause a ripple effect on the other ARNs.
If you cannot provide unique names known at plan time, use `task_exec_policy_arns` instead.
**Default value:** `{ }`
`task_iam_role_component` (`string`) optional
A component that outputs an iam_role module as 'role' for adding to the service as a whole.
**Default value:** `null`
`task_policy_arns` (`list(string)`) optional
The IAM policy ARNs to attach to the ECS task IAM role
**Default value:**
```hcl
[
"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
"arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess"
]
```
A component that outputs security_group_id for adding to the service as a whole.
**Default value:** `null`
`unauthenticated_paths` (`list(string)`) optional
Unauthenticated path pattern to match
**Default value:** `[ ]`
`unauthenticated_priority` (`string`) optional
The priority for the rules without authentication, between 1 and 50000 (1 being highest priority). Must be different from `authenticated_priority` since a listener can't have multiple rules with the same priority
**Default value:** `0`
`use_lb` (`bool`) optional
Whether to use a load balancer for the service
**Default value:** `false`
`use_rds_client_sg` (`bool`) optional
Use the RDS client security group
**Default value:** `false`
`vanity_alias` (`list(string)`) optional
The vanity aliases to use for the public LB.
**Default value:** `[ ]`
`vanity_domain` (`string`) optional
Whether to use the vanity domain alias for the service
**Default value:** `null`
`vpc_component_name` (`string`) optional
The name of a VPC component
**Default value:** `"vpc"`
`zone_component` (`string`) optional
The component name to look up service domain remote-state on
**Default value:** `"dns-delegated"`
`zone_component_output` (`string`) optional
A JSON query to use to get the zone domain from the remote state.
**Default value:** `".default_domain_name"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
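A hypothetical descriptor (the descriptor name and label choices here are illustrative, not defaults) that renders `tenant` and `stage` could look like:

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v"
    labels = ["tenant", "stage"]
  }
}
# With tenant = "acme" and stage = "prod", the `descriptors`
# output would contain stack = "acme-prod".
```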
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ecs_cluster_arn`
Selected ECS cluster ARN
`environment_map`
Environment variables to pass to the container. This is a map of key/value pairs, where the key is `containerName,variableName`
The `format()` string to use to generate the hostname via `format(var.hostname_template, var.tenant, var.stage, var.environment)`.
Typically something like `"echo.%[3]v.%[2]v.example.com"`.
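For example (the label values here are assumptions), with `tenant = "acme"`, `stage = "prod"`, and `environment = "use2"`:

```hcl
# format("echo.%[3]v.%[2]v.example.com", "acme", "prod", "use2")
# => "echo.use2.prod.example.com"
```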
A list of Security Group rule objects to add to the created security group, in addition to the ones
this module normally creates. (To suppress the module's rules, set `create_security_group` to false
and supply your own security group via `associated_security_group_ids`.)
The keys and values of the objects are fully compatible with the `aws_security_group_rule` resource, except
for `security_group_id` which will be ignored, and the optional "key" which, if provided, must be unique and known at "plan" time.
For more info, see https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule.
**Default value:** `[ ]`
`efs_backup_policy_enabled` (`bool`) optional
If `true`, automatic backups will be enabled.
**Default value:** `false`
`eks_component_names` (`set(string)`) optional
The names of the eks components
**Default value:**
```hcl
[
"eks/cluster"
]
```
`eks_security_group_enabled` (`bool`) optional
Use the eks default security group
**Default value:** `false`
`performance_mode` (`string`) optional
The file system performance mode. Can be either `generalPurpose` or `maxIO`
**Default value:** `"generalPurpose"`
The throughput, measured in MiB/s, that you want to provision for the file system. Only applicable with `throughput_mode` set to provisioned
**Default value:** `0`
`throughput_mode` (`string`) optional
Throughput mode for the file system. Defaults to bursting. Valid values: `bursting`, `provisioned`. When using `provisioned`, also set `provisioned_throughput_in_mibps`
**Default value:** `"bursting"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`efs_arn`
EFS ARN
`efs_dns_name`
EFS DNS name
`efs_host`
DNS hostname for the EFS
`efs_id`
EFS ID
`efs_mount_target_dns_names`
List of EFS mount target DNS names
`efs_mount_target_ids`
List of EFS mount target IDs (one per Availability Zone)
`efs_mount_target_ips`
List of EFS mount target IPs (one per Availability Zone)
`efs_network_interface_ids`
List of mount target network interface IDs
`security_group_arn`
EFS Security Group ARN
`security_group_id`
EFS Security Group ID
`security_group_name`
EFS Security Group name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`efs` | 1.3.0 | [`cloudposse/efs/aws`](https://registry.terraform.io/modules/cloudposse/efs/aws/1.3.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`gbl_dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kms_key_efs` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.kms_key_efs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## actions-runner-controller
This component creates a Helm release for
[actions-runner-controller](https://github.com/actions-runner-controller/actions-runner-controller) on an EKS cluster.
## Usage
**Stack Level**: Regional
Once the catalog file is created, the file can be imported as follows.
```yaml
import:
- catalog/eks/actions-runner-controller
...
```
The default catalog values, e.g. `stacks/catalog/eks/actions-runner-controller.yaml`:
```yaml
components:
terraform:
eks/actions-runner-controller:
vars:
enabled: true
name: "actions-runner" # avoids hitting name length limit on IAM role
chart: "actions-runner-controller"
chart_repository: "https://actions-runner-controller.github.io/actions-runner-controller"
chart_version: "0.23.7"
kubernetes_namespace: "actions-runner-system"
create_namespace: true
kubeconfig_exec_auth_api_version: "client.authentication.k8s.io/v1beta1"
# helm_manifest_experiment_enabled feature causes inconsistent final plans with charts that have CRDs
# see https://github.com/hashicorp/terraform-provider-helm/issues/711#issuecomment-836192991
helm_manifest_experiment_enabled: false
ssm_github_secret_path: "/github_runners/controller_github_app_secret"
github_app_id: "REPLACE_ME_GH_APP_ID"
github_app_installation_id: "REPLACE_ME_GH_INSTALLATION_ID"
# use to enable docker config json secret, which can login to dockerhub for your GHA Runners
docker_config_json_enabled: true
# The content of this param should look like:
# {
# "auths": {
# "https://index.docker.io/v1/": {
# "username": "your_username",
#           "password": "your_password",
# "email": "your_email",
# "auth": "$(echo "your_username:your_password" | base64)"
# }
# }
# } | base64
ssm_docker_config_json_path: "/github_runners/docker/config-json"
# ssm_github_webhook_secret_token_path: "/github_runners/github_webhook_secret_token"
# The webhook based autoscaler is much more efficient than the polling based autoscaler
webhook:
enabled: true
hostname_template: "gha-webhook.%[3]v.%[2]v.%[1]v.acme.com"
eks_component_name: "eks/cluster"
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 250m
memory: 128Mi
runners:
infra-runner:
node_selector:
kubernetes.io/os: "linux"
kubernetes.io/arch: "amd64"
type: "repository" # can be either 'organization' or 'repository'
dind_enabled: true # If `true`, a Docker daemon will be started in the runner Pod.
# To run Docker in Docker (dind), change image to summerwind/actions-runner-dind
# If not running Docker, change image to summerwind/actions-runner to use a smaller image
image: summerwind/actions-runner-dind
# `scope` is org name for Organization runners, repo name for Repository runners
scope: "org/infra"
min_replicas: 0 # Default, overridden by scheduled_overrides below
max_replicas: 20
# Scheduled overrides. See https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#scheduled-overrides
# Order is important. The earlier entry is prioritized higher than later entries. So you usually define
# one-time overrides at the top of your list, then yearly, monthly, weekly, and lastly daily overrides.
scheduled_overrides:
# Override the daily override on the weekends
- start_time: "2024-07-06T00:00:00-08:00" # Start of Saturday morning Pacific Standard Time
end_time: "2024-07-07T23:59:59-07:00" # End of Sunday night Pacific Daylight Time
min_replicas: 0
recurrence_rule:
frequency: "Weekly"
# Keep a warm pool of runners during normal working hours
- start_time: "2024-07-01T09:00:00-08:00" # 9am Pacific Standard Time (10am PDT), start of workday
end_time: "2024-07-01T17:00:00-07:00" # 5pm Pacific Daylight Time (4pm PST), end of workday
min_replicas: 2
recurrence_rule:
frequency: "Daily"
scale_down_delay_seconds: 100
resources:
limits:
cpu: 200m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi
webhook_driven_scaling_enabled: true
# max_duration is the duration after which a job will be considered completed,
# (and the runner killed) even if the webhook has not received a "job completed" event.
# This is to ensure that if an event is missed, it does not leave the runner running forever.
# Set it long enough to cover the longest job you expect to run and then some.
# See https://github.com/actions/actions-runner-controller/blob/9afd93065fa8b1f87296f0dcdf0c2753a0548cb7/docs/automatically-scaling-runners.md?plain=1#L264-L268
max_duration: "90m"
# Pull-driven scaling is obsolete and should not be used.
pull_driven_scaling_enabled: false
# Labels are not case-sensitive to GitHub, but *are* case-sensitive
# to the webhook based autoscaler, which requires exact matches
# between the `runs-on:` label in the workflow and the runner labels.
labels:
- "Linux"
- "linux"
- "Ubuntu"
- "ubuntu"
- "X64"
- "x64"
- "x86_64"
- "amd64"
- "AMD64"
- "core-auto"
- "common"
# Uncomment this additional runner if you want to run a second
# runner pool for `arm64` architecture
#infra-runner-arm64:
# node_selector:
# kubernetes.io/os: "linux"
# kubernetes.io/arch: "arm64"
# # Add the corresponding taint to the Kubernetes nodes running `arm64` architecture
# # to prevent Kubernetes pods without node selectors from being scheduled on them.
# tolerations:
# - key: "kubernetes.io/arch"
# operator: "Equal"
# value: "arm64"
# effect: "NoSchedule"
# type: "repository" # can be either 'organization' or 'repository'
# dind_enabled: false # If `true`, a Docker sidecar container will be deployed
# # To run Docker in Docker (dind), change image to summerwind/actions-runner-dind
# # If not running Docker, change image to summerwind/actions-runner to use a smaller image
# image: summerwind/actions-runner-dind
# # `scope` is org name for Organization runners, repo name for Repository runners
# scope: "org/infra"
# group: "ArmRunners"
# # Tell Karpenter not to evict this pod while it is running a job.
# # If we do not set this, Karpenter will feel free to terminate the runner while it is running a job,
# # as part of its consolidation efforts, even when using "on demand" instances.
# running_pod_annotations:
# karpenter.sh/do-not-disrupt: "true"
# min_replicas: 0 # Set to 0 so that no ARM instance is running idle, set to 1 for faster startups
# max_replicas: 20
# scale_down_delay_seconds: 100
# resources:
# limits:
# cpu: 200m
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 128Mi
# webhook_driven_scaling_enabled: true
# max_duration: "90m"
# pull_driven_scaling_enabled: false
# # Labels are not case-sensitive to GitHub, but *are* case-sensitive
# # to the webhook based autoscaler, which requires exact matches
# # between the `runs-on:` label in the workflow and the runner labels.
# # Leave "common" off the list so that "common" jobs are always
# # scheduled on the amd64 runners. This is because the webhook
# # based autoscaler will not scale a runner pool if the
# # `runs-on:` labels in the workflow match more than one pool.
# labels:
# - "Linux"
# - "linux"
# - "Ubuntu"
# - "ubuntu"
# - "amd64"
# - "AMD64"
# - "core-auto"
```
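Because the webhook-based autoscaler requires an exact match between the `runs-on:` labels in a workflow and the labels of a single runner pool, a workflow targeting the pool above might look like this (the repository, job, and step names are illustrative):

```yaml
name: build
on: [push]
jobs:
  build:
    # Must match exactly one runner pool's labels, e.g. "core-auto" above
    runs-on: ["self-hosted", "core-auto"]
    steps:
      - uses: actions/checkout@v4
      - run: make build
```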
### Generating Required Secrets
AWS SSM is used to store and retrieve secrets.
Decide on the SSM path for the GitHub secret (PAT or Application private key) and GitHub webhook secret.
Since the secret is automatically scoped by AWS to the account and region where the secret is stored, we recommend the
secret be stored at `/github_runners/controller_github_app_secret` unless you plan on running multiple instances of the
controller. If you plan on running multiple instances of the controller, and want to give them different access
(otherwise they could share the same secret), then you can add a path component to the SSM path. For example
`/github_runners/cicd/controller_github_app_secret`.
```yaml
ssm_github_secret_path: "/github_runners/controller_github_app_secret"
```
The preferred way to authenticate is by _creating_ and _installing_ a GitHub App. This is the recommended approach as it
allows for much more restricted access than using a personal access token, at least until
[fine-grained personal access token permissions](https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/)
are generally available. Follow the instructions
[here](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/docs/detailed-docs.md#deploying-using-github-app-authentication)
to create and install the GitHub App.
At the creation stage, you will be asked to generate a private key. This is the private key that will be used to
authenticate the Action Runner Controller. Download the file and store the contents in SSM using the following command,
adjusting the profile and file name. The profile should be the `admin` role in the account to which you are deploying
the runner controller. The file name should be the name of the private key file you downloaded.
```bash
AWS_PROFILE=acme-mgmt-use2-auto-admin chamber write github_runners controller_github_app_secret -- "$(cat APP_NAME.DATE.private-key.pem)"
```
You can verify the file was correctly written to SSM by matching the private key fingerprint reported by GitHub with:
```bash
AWS_PROFILE=acme-mgmt-use2-auto-admin chamber read -q github_runners controller_github_app_secret | openssl rsa -in - -pubout -outform DER | openssl sha256 -binary | openssl base64
```
At this stage, record the Application ID and the private key fingerprint in your secrets manager (e.g. 1Password). You
will need the Application ID to configure the runner controller, and the fingerprint to verify the private key.
Proceed to install the GitHub App in the organization or repository you want to use the runner controller for, and
record the Installation ID (the final numeric part of the URL, as explained in the instructions linked above) in your
secrets manager. You will need the Installation ID to configure the runner controller.
In your stack configuration, set the following variables, making sure to quote the values so they are treated as
strings, not numbers.
```yaml
github_app_id: "12345"
github_app_installation_id: "12345"
```
Alternatively (obsolete):
- A PAT with the scope outlined in
[this document](https://github.com/actions-runner-controller/actions-runner-controller#deploying-using-pat-authentication).
Save this to the path specified by `ssm_github_secret_path` using the following command, adjusting the AWS_PROFILE to
refer to the `admin` role in the account to which you are deploying the runner controller:
```bash
AWS_PROFILE=acme-mgmt-use2-auto-admin chamber write github_runners controller_github_app_secret -- ""
```
2. If using the Webhook Driven autoscaling (recommended), generate a random string to use as the Secret when creating
the webhook in GitHub.
Generate the string using 1Password (no special characters, length 45) or by running
```bash
dd if=/dev/random bs=1 count=33 2>/dev/null | base64
```
Store this key in AWS SSM under the path specified by `ssm_github_webhook_secret_token_path`:
```yaml
ssm_github_webhook_secret_token_path: "/github_runners/github_webhook_secret"
```
### Dockerhub Authentication
Authenticating with Dockerhub is optional, but doing so increases the number of image pulls allowed from your runners,
which improves stability.
To get started, set `docker_config_json_enabled` to `true` and `ssm_docker_config_json_path` to the SSM path where the
credentials are stored, for example `github_runners/docker`.
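Putting those two settings together in your stack configuration:

```yaml
docker_config_json_enabled: true
ssm_docker_config_json_path: "github_runners/docker"
```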
To create the credentials file, fill out a JSON file locally with the following content:
```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "your_username",
      "password": "your_password",
      "email": "your_email",
      "auth": "$(echo -n 'your_username:your_password' | base64)"
    }
  }
}
```
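The `auth` value is the base64 encoding of `your_username:your_password` (a single colon, no surrounding spaces). A quick way to compute it in a shell, using `printf` so that no trailing newline is encoded:

```bash
# base64 of "username:password" with no trailing newline
printf '%s' 'your_username:your_password' | base64
```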
Then write the file to SSM with the following Atmos Workflow:
```yaml
save/docker-config-json:
  description: Prompt for uploading Docker Config JSON to the AWS SSM Parameter Store
  steps:
    - type: shell
      command: |-
        echo "Please enter the Docker Config JSON file path"
        echo "See https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry for information on how to create the file"
        read -p "Docker Config JSON file path: " -r DOCKER_CONFIG_JSON_FILE_PATH
        if [ -z "$DOCKER_CONFIG_JSON_FILE_PATH" ]
        then
          echo 'Input cannot be blank, please try again!'
          exit 1
        fi
        set -e
        DOCKER_CONFIG_JSON=$(<"$DOCKER_CONFIG_JSON_FILE_PATH")
        ENCODED_DOCKER_CONFIG_JSON=$(echo "$DOCKER_CONFIG_JSON" | base64 -w 0)
        export AWS_PROFILE=acme-core-gbl-auto-admin
        chamber write github_runners/docker config-json -- "$ENCODED_DOCKER_CONFIG_JSON"
        echo 'Saved Docker Config JSON to the AWS SSM Parameter Store'
```
Don't forget to update the AWS Profile in the script.
### Using Runner Groups
GitHub supports grouping runners into distinct
[Runner Groups](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups),
which allow you to have different access controls for different runners. Read the linked documentation about creating
and configuring Runner Groups, which you must do through the GitHub Web UI. If you choose to create Runner Groups, you
can assign one or more Runner pools (from the `runners` map) to groups (only one group per runner pool) by including
`group: ` in the runner configuration. We recommend including it immediately after `scope`.
### Using Webhook Driven Autoscaling (recommended)
We recommend using Webhook Driven Autoscaling until GitHub's own autoscaling solution is as capable as the Summerwind
solution this component deploys. See
[this discussion](https://github.com/actions/actions-runner-controller/discussions/3340) for some perspective on why the
Summerwind solution is currently (summer 2024) considered superior.
To use the Webhook Driven Autoscaling, in addition to setting `webhook_driven_scaling_enabled` to `true`, you must also
install the GitHub organization-level webhook after deploying the component (specifically, the webhook server). The URL
for the webhook is determined by the `webhook.hostname_template` and where it is deployed. Recommended URL is
`https://gha-webhook.[environment].[stage].[tenant].[service-discovery-domain]`.
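For example, a sketch of the corresponding stack configuration (the domain is a placeholder; `hostname_template` is a `format()` string that receives tenant, stage, and environment, as described under the `webhook` variable below):

```yaml
webhook:
  enabled: true
  hostname_template: "gha-webhook.%[3]v.%[2]v.%[1]v.example.com"
```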
As a GitHub organization admin, go to `https://github.com/organizations/[organization]/settings/hooks`, and then:
- Click "Add webhook" and create a new webhook with the following settings:
- Payload URL: copy from Terraform output `webhook_payload_url`
- Content type: `application/json`
- Secret: the webhook secret token you stored in SSM above
- Which events would you like to trigger this webhook:
- Select "Let me select individual events"
- Uncheck everything ("Pushes" is likely the only thing already selected)
- Check "Workflow jobs"
- Ensure that "Active" is checked (should be checked by default)
- Click "Add webhook" at the bottom of the settings page
After the webhook is created, select "edit" for the webhook and go to the "Recent Deliveries" tab and verify that there
is a delivery (of a "ping" event) with a green check mark. If not, verify all the settings and consult the logs of the
`actions-runner-controller-github-webhook-server` pod.
### Configuring Webhook Driven Autoscaling
The `HorizontalRunnerAutoscaler` `scaleUpTriggers.duration` (see
[Webhook Driven Scaling documentation](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#webhook-driven-scaling)) is
controlled by the `max_duration` setting for each Runner. The purpose of this timeout is to ensure, in case a job
cancellation or termination event gets missed, that the resulting idle runner eventually gets terminated.
#### How the Autoscaler Determines the Desired Runner Pool Size
When a job is queued, a `capacityReservation` is created for it. The HRA (Horizontal Runner Autoscaler) sums up all the
capacity reservations to calculate the desired size of the runner pool, subject to the limits of `minReplicas` and
`maxReplicas`. The idea is that a `capacityReservation` is deleted when a job is completed or canceled, and the pool
size will be equal to `jobsStarted - jobsFinished`. However, it can happen that a job will finish without the HRA being
successfully notified about it, so as a safety measure, the `capacityReservation` will expire after a configurable
amount of time, at which point it will be deleted without regard to the job being finished. This ensures that eventually
an idle runner pool will scale down to `minReplicas`.
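A sketch of that arithmetic (illustrative only, not the controller's actual implementation):

```bash
# Desired pool size = outstanding capacity reservations, clamped to [minReplicas, maxReplicas]
desired_replicas() {
  local reservations=$1 min=$2 max=$3
  local desired=$reservations
  if [ "$desired" -lt "$min" ]; then desired=$min; fi
  if [ "$desired" -gt "$max" ]; then desired=$max; fi
  echo "$desired"
}

# 5 jobs started, 2 finished => 3 outstanding reservations
desired_replicas 3 0 20 # prints 3
```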
If it happens that the capacity reservation expires before the job is finished, the Horizontal Runner Autoscaler (HRA)
will scale down the pool by 2 instead of 1: once because the capacity reservation expired, and once because the job
finished. This will also cause starvation of waiting jobs, because the next in line will have its timeout timer started
but will not actually start running because no runner is available. And if `minReplicas` is set to zero, the pool will
scale down to zero before finishing all the jobs, leaving some waiting indefinitely. This is why it is important to set
the `max_duration` to a time long enough to cover the full time a job may have to wait between the time it is queued and
the time it finishes, assuming that the HRA scales up the pool by 1 and runs the job on the new runner.
:::tip
If there are more jobs queued than there are runners allowed by `maxReplicas`, the timeout timer does not start on the
capacity reservation until enough reservations ahead of it are removed for it to be considered as representing an
active job. Although there are some edge cases regarding `max_duration` that seem not to be covered properly (see
[actions-runner-controller issue #2466](https://github.com/actions/actions-runner-controller/issues/2466)), they only
merit adding a few extra minutes to the timeout.
:::
### Recommended `max_duration` Duration
#### Consequences of Too Short of a `max_duration` Duration
If you set `max_duration` to too short a duration, the Horizontal Runner Autoscaler will cancel capacity reservations
for jobs that have not yet finished, and the pool will become too small. This will be most serious if you have set
`minReplicas = 0` because in this case, jobs will be left in the queue indefinitely. With a higher value of
`minReplicas`, the pool will eventually make it through all the queued jobs, but not as quickly as intended due to the
incorrectly reduced capacity.
#### Consequences of Too Long of a `max_duration` Duration
If the Horizontal Runner Autoscaler misses a scale-down event (which can happen because events do not have delivery
guarantees), a runner may be left running idly for as long as the `max_duration` duration. The only problem with this is
the added expense of leaving the idle runner running.
#### Recommendation
As a result, we recommend setting `max_duration` to a period long enough to cover:
- The time it takes for the HRA to scale up the pool and make a new runner available
- The time it takes for the runner to pick up the job from GitHub
- The time it takes for the job to start running on the new runner
- The maximum time a job might take
Because the consequences of expiring a capacity reservation before the job is finished can be severe, we recommend
setting `max_duration` to a period at least 30 minutes longer than you expect the longest job to take. Remember, when
everything works properly, the HRA will scale down the pool as jobs finish, so there is little cost to setting a long
duration, and the cost looks even smaller by comparison to the cost of having too short a duration.
For lightly used runner pools expecting only short jobs, you can set `max_duration` to `"30m"`. As a rule of thumb, we
recommend setting `maxReplicas` high enough that jobs never wait on the queue more than an hour.
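For example, if your longest job takes about 45 minutes, the guidance above works out to something like this (the runner name is hypothetical):

```yaml
runners:
  default-runner:
    webhook_driven_scaling_enabled: true
    max_duration: "90m" # ~45m longest job + scale-up/pickup margin + 30m+ buffer
```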
### Interaction with Karpenter or other EKS autoscaling solutions
Kubernetes cluster autoscaling solutions generally expect that a Pod runs a service that can be terminated on one Node
and restarted on another with only a short duration needed to finish processing any in-flight requests. When the cluster
is resized, the cluster autoscaler will do just that. However, GitHub Action Runner Jobs do not fit this model. If a Pod
is terminated in the middle of a job, the job is lost. The likelihood of this happening is increased by the fact that
the Action Runner Controller Autoscaler is expanding and contracting the size of the Runner Pool on a regular basis,
causing the cluster autoscaler to more frequently want to scale up or scale down the EKS cluster, and, consequently, to
move Pods around.
To handle these kinds of situations, Karpenter respects an annotation on the Pod:
```yaml
spec:
template:
metadata:
annotations:
karpenter.sh/do-not-disrupt: "true"
```
When you set this annotation on the Pod, Karpenter will not evict it. The Pod will stay on its Node, and that Node
will not be considered for eviction. This is good because the Pod will not be terminated in the middle of a job.
However, it also means the Node will not be considered for termination, so it will not be removed from the cluster,
and the cluster will not shrink in size when you would like it to.
Since the Runner Pods terminate at the end of the job, this is not a problem for the Pods actually running jobs.
However, if you have set `minReplicas > 0`, then you have some Pods that are just idling, waiting for jobs to be
assigned to them. These Pods are exactly the kind of Pods you want terminated and moved when the cluster is
underutilized. Therefore, when you set `minReplicas > 0`, you should **NOT** set `karpenter.sh/do-not-disrupt: "true"` on
the Pod via the `pod_annotations` attribute of the `runners` input. (**But wait**, _there is good news_!)
We have [requested a feature](https://github.com/actions/actions-runner-controller/issues/2562) that will allow you to
set `karpenter.sh/do-not-disrupt: "true"` and `minReplicas > 0` at the same time by only annotating Pods running jobs.
Meanwhile, **we have implemented this for you** using a job startup hook. This hook will set annotations on the Pod when
the job starts. When the job finishes, the Pod will be deleted by the controller, so the annotations will not need to be
removed. Configure annotations that apply only to Pods running jobs in the `running_pod_annotations` attribute of the
`runners` input.
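Concretely, for a pool with `min_replicas > 0`, this lets you keep idle runners evictable while protecting busy ones (the runner name is hypothetical):

```yaml
runners:
  default-runner:
    min_replicas: 1
    # Applied only to Pods once they are running a job; idle runners remain evictable.
    running_pod_annotations:
      karpenter.sh/do-not-disrupt: "true"
```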
### Updating CRDs
When updating the chart or application version of `actions-runner-controller`, it is possible you will need to install
new CRDs. Such a requirement should be indicated in the `actions-runner-controller` release notes and may require some
adjustment to our custom chart or configuration.
This component uses `helm` to manage the deployment, and `helm` will not auto-update CRDs. If new CRDs are needed,
install them manually via a command like
```bash
kubectl create -f https://raw.githubusercontent.com/actions-runner-controller/actions-runner-controller/master/charts/actions-runner-controller/crds/actions.summerwind.dev_horizontalrunnerautoscalers.yaml
```
### Useful Reference
Consult [actions-runner-controller](https://github.com/actions-runner-controller/actions-runner-controller)
documentation for further details.
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be a local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format if you are running Terraform on a system where the repository has been added via `helm repo add`, but this is not recommended.
`chart_repository` (`string`) required
Repository URL where to locate the requested chart.
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region.
`resources` required
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
`runners` required
Map of Action Runner configurations, with the key being the name of the runner. Please note that the name must be in
kebab-case.
For example:
```hcl
organization_runner = {
  type = "organization" # can be either 'organization' or 'repository'
  dind_enabled = true # A Docker daemon will be started in the runner Pod
  image = "summerwind/actions-runner-dind" # If dind_enabled = false, set this to "summerwind/actions-runner"
  scope = "ACME" # org name for Organization runners, repo name for Repository runners
  group = "core-automation" # Optional. Assigns the runners to a runner group, for access control.
  scale_down_delay_seconds = 300
  min_replicas = 1
  max_replicas = 5
  labels = [
    "Ubuntu",
    "core-automation",
  ]
}
```
**Type:**
```hcl
map(object({
type = string
scope = string
group = optional(string, null)
image = optional(string, "summerwind/actions-runner-dind")
auto_update_enabled = optional(bool, true)
dind_enabled = optional(bool, true)
node_selector = optional(map(string), {})
pod_annotations = optional(map(string), {})
# running_pod_annotations are only applied to the pods once they start running a job
running_pod_annotations = optional(map(string), {})
# affinity is too complex to model. Whatever you assigned affinity will be copied
# to the runner Pod spec.
affinity = optional(any)
tolerations = optional(list(object({
key = string
operator = string
value = optional(string, null)
effect = string
})), [])
scale_down_delay_seconds = optional(number, 300)
min_replicas = number
max_replicas = number
# Scheduled overrides. See https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#scheduled-overrides
# Order is important. The earlier entry is prioritized higher than later entries. So you usually define
# one-time overrides at the top of your list, then yearly, monthly, weekly, and lastly daily overrides.
scheduled_overrides = optional(list(object({
start_time = string # ISO 8601 format, eg, "2021-06-01T00:00:00+09:00"
end_time = string # ISO 8601 format, eg, "2021-06-01T00:00:00+09:00"
min_replicas = optional(number)
max_replicas = optional(number)
recurrence_rule = optional(object({
frequency = string # One of Daily, Weekly, Monthly, Yearly
until_time = optional(string) # ISO 8601 format time after which the schedule will no longer apply
}))
})), [])
busy_metrics = optional(object({
scale_up_threshold = string
scale_down_threshold = string
scale_up_adjustment = optional(string)
scale_down_adjustment = optional(string)
scale_up_factor = optional(string)
scale_down_factor = optional(string)
}))
webhook_driven_scaling_enabled = optional(bool, true)
# max_duration is the duration after which a job will be considered completed,
# even if the webhook has not received a "job completed" event.
# This is to ensure that if an event is missed, it does not leave the runner running forever.
# Set it long enough to cover the longest job you expect to run and then some.
# See https://github.com/actions/actions-runner-controller/blob/9afd93065fa8b1f87296f0dcdf0c2753a0548cb7/docs/automatically-scaling-runners.md?plain=1#L264-L268
# Defaults to 1 hour programmatically (to be able to detect if both max_duration and webhook_startup_timeout are set).
max_duration = optional(string)
# The name `webhook_startup_timeout` was misleading and has been deprecated.
# It has been renamed `max_duration`.
webhook_startup_timeout = optional(string)
# Adjust the time (in seconds) to wait for the Docker in Docker daemon to become responsive.
wait_for_docker_seconds = optional(string, "")
pull_driven_scaling_enabled = optional(bool, false)
labels = optional(list(string), [])
# If not null, `docker_storage` specifies the size (as `go` string) of
# an ephemeral (default storage class) Persistent Volume to allocate for the Docker daemon.
# Takes precedence over `tmpfs_enabled` for the Docker daemon storage.
docker_storage = optional(string, null)
# storage is deprecated in favor of docker_storage, since it is only storage for the Docker daemon
storage = optional(string, null)
# If `pvc_enabled` is true, a Persistent Volume Claim will be created for the runner
# and mounted at /home/runner/work/shared. This is useful for sharing data between runners.
pvc_enabled = optional(bool, false)
# If `tmpfs_enabled` is `true`, both the runner and the docker daemon will use a tmpfs volume,
# meaning that all data will be stored in RAM rather than on disk, bypassing disk I/O limitations,
# but what would have been disk usage is now additional memory usage. You must specify memory
# requests and limits when using tmpfs or else the Pod will likely crash the Node.
tmpfs_enabled = optional(bool)
resources = optional(object({
limits = optional(object({
cpu = optional(string, "1")
memory = optional(string, "1Gi")
# ephemeral-storage is the Kubernetes name, but `ephemeral_storage` is the gomplate name,
# so allow either. If both are specified, `ephemeral-storage` takes precedence.
ephemeral-storage = optional(string)
ephemeral_storage = optional(string, "10Gi")
}), {})
requests = optional(object({
cpu = optional(string, "500m")
memory = optional(string, "256Mi")
# ephemeral-storage is the Kubernetes name, but `ephemeral_storage` is the gomplate name,
# so allow either. If both are specified, `ephemeral-storage` takes precedence.
ephemeral-storage = optional(string)
ephemeral_storage = optional(string, "1Gi")
}), {})
}), {})
}))
```
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`context_tags_enabled` (`bool`) optional
Whether or not to include all context tags as labels for each runner
**Default value:** `false`
`controller_replica_count` (`number`) optional
The number of replicas of the runner-controller to run.
**Default value:** `2`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `null`
`docker_config_json_enabled` (`bool`) optional
Whether the Docker config JSON is enabled
**Default value:** `false`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`existing_kubernetes_secret_name` (`string`) optional
If you are going to create the Kubernetes Secret the runner-controller will use
by some means (such as SOPS) outside of this component, set the name of the secret
here and it will be used. In this case, this component will not create a secret
and you can leave the secret-related inputs with their default (empty) values.
The same secret will be used by both the runner-controller and the webhook-server.
**Default value:** `""`
`github_app_id` (`string`) optional
The ID of the GitHub App to use for the runner controller.
**Default value:** `""`
`github_app_installation_id` (`string`) optional
The "Installation ID" of the GitHub App to use for the runner controller.
**Default value:** `""`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`rbac_enabled` (`bool`) optional
Service Account for pods.
**Default value:** `true`
`s3_bucket_arns` (`list(string)`) optional
List of ARNs of S3 Buckets to which the runners will have read-write access.
**Default value:** `[ ]`
`ssm_docker_config_json_path` (`string`) optional
SSM path to the Docker config JSON
**Default value:** `null`
`ssm_github_secret_path` (`string`) optional
The path in SSM to the GitHub app private key file contents or GitHub PAT token.
**Default value:** `""`
`ssm_github_webhook_secret_token_path` (`string`) optional
The path in SSM to the GitHub Webhook Secret token.
**Default value:** `""`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `null`
`webhook` optional
Configuration for the GitHub Webhook Server.
`hostname_template` is the `format()` string to use to generate the hostname via `format(var.hostname_template, var.tenant, var.stage, var.environment)`.
Typically something like `"echo.%[3]v.%[2]v.example.com"`.
`queue_limit` is the maximum number of webhook events that can be queued up for processing by the autoscaler.
When the queue gets full, webhook events will be dropped (status 500).
**Type:**
```hcl
object({
enabled = bool
hostname_template = string
queue_limit = optional(number, 1000)
})
```
**Default value:**
```hcl
{
"enabled": false,
"hostname_template": null,
"queue_limit": 1000
}
```
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
`metadata_action_runner_releases`
Block statuses of the deployed actions-runner chart releases
`webhook_payload_url`
Payload URL for GitHub webhook
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`actions_runner` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`actions_runner_controller` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_ssm_parameter.docker_config_json`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.github_token`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.github_webhook_secret_token`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## addon
This component is responsible for installing and managing addons for EKS clusters.
You may want to use this component rather than `var.addons` with `eks/cluster` to deploy addons that require additional
prerequisites or configuration before they can be installed. For example, if you need to install a priority class before
installing an addon, you can use this component to install the priority class first.
## Usage
**Stack Level**: Regional
For example, to install the CloudWatch Observability addon for EKS:
```yaml
components:
terraform:
# https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Observability-EKS-addon.html
eks/addon/cloudwatch:
metadata:
component: eks/addon
vars:
addon_name: amazon-cloudwatch-observability
addon_version: "v2.5.0-eksbuild.1"
kubernetes_namespace: amazon-cloudwatch # this namespace is defined by the addon
resolve_conflicts_on_create: OVERWRITE
resolve_conflicts_on_update: OVERWRITE
priority_class_enabled: true
additional_policy_arns:
- "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
```
## Variables
### Required Variables
### Optional Variables
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
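Taken together, the authentication variables above form a precedence chain: `kubeconfig_file_enabled` disables both exec auth and data-source auth, and `kube_exec_auth_enabled` overrides `kube_data_auth_enabled`. A minimal illustrative sketch forcing kubeconfig-file authentication (the file path is hypothetical):

```yaml
components:
  terraform:
    eks/addon:
      vars:
        kubeconfig_file_enabled: true
        kubeconfig_file: /path/to/kubeconfig
```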
`kubernetes_namespace` (`string`) optional
The Kubernetes namespace for the EKS addon
**Default value:** `"kube-system"`
`priority_class_enabled` (`bool`) optional
Whether to enable the priority class for the EKS addon
**Default value:** `false`
`resolve_conflicts_on_create` (`string`) optional
How to resolve conflicts on addon creation
**Default value:** `null`
`resolve_conflicts_on_update` (`string`) optional
How to resolve conflicts on addon update
**Default value:** `null`
`update_timeout` (`string`) optional
The timeout to update the EKS addon
**Default value:** `"15m"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
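For example, resources that take tags as a list of maps (such as EC2 Auto Scaling Groups) often require extra per-tag settings. This illustrative sketch adds `propagate_at_launch` to every generated tag map (the key and value here are examples, not defaults):

```hcl
additional_tag_map = {
  # Added to each map in tags_as_list_of_maps, alongside the tag key and value
  propagate_at_launch = "true"
}
```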
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`addon_arn`
The Amazon Resource Name (ARN) of the EKS addon
`addon_version`
The version of the EKS addon
`priority_class_name`
The name of the Kubernetes priority class (if enabled)
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks_iam_role` | 2.2.1 | [`cloudposse/eks-iam-role/aws`](https://registry.terraform.io/modules/cloudposse/eks-iam-role/aws/2.2.1) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_eks_addon.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon) (resource)
- [`aws_iam_role_policy_attachment.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`kubernetes_priority_class.this`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/priority_class) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## alb-controller
This component creates a Helm release for
[alb-controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) on an EKS cluster.
[alb-controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) is a Kubernetes addon that, in the
context of AWS, provisions and manages ALBs and NLBs based on Service and Ingress annotations. This module also can (and
is recommended to) provision a default IngressClass.
### Special note about upgrading
When upgrading the chart version, check to see if the IAM policy for the service account needs to be updated. If it
does, update the policy in the `distributed-iam-policy.tf` file. Probably the easiest way to check if it needs updating
is to simply download the policy from
https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json and
compare it to the policy in `distributed-iam-policy.tf`.
## Usage
**Stack Level**: Regional
Once the catalog file is created, the file can be imported as follows.
```yaml
import:
- catalog/eks/alb-controller
...
```
The default catalog values (e.g. `stacks/catalog/eks/alb-controller.yaml`):
```yaml
components:
terraform:
eks/alb-controller:
vars:
chart: aws-load-balancer-controller
chart_repository: https://aws.github.io/eks-charts
# IMPORTANT: When updating the chart version, check to see if the IAM policy for the service account
# needs to be updated, and if it does, update the policy in the `distributed-iam-policy.tf` file.
chart_version: "1.7.1"
create_namespace: true
kubernetes_namespace: alb-controller
# this feature causes inconsistent final plans
# see https://github.com/hashicorp/terraform-provider-helm/issues/711#issuecomment-836192991
helm_manifest_experiment_enabled: false
default_ingress_class_name: default
default_ingress_group: common
default_ingress_ip_address_type: ipv4
default_ingress_scheme: internet-facing
# You can use `chart_values` to set any other chart options. Treat `chart_values` as the root of the doc.
#
# # For example
# ---
# chart_values:
# enableShield: false
chart_values: {}
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
```
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
`chart_repository` (`string`) required
Repository URL where to locate the requested chart.
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region.
`resources` required
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `null`
`default_ingress_ip_address_type` (`string`) optional
IP address type for default ingress, one of `ipv4` or `dualstack`.
**Default value:** `"ipv4"`
`default_ingress_load_balancer_attributes` (`list(object({ key = string, value = string }))`) optional
A list of load balancer attributes to apply to the default ingress load balancer.
See [Load Balancer Attributes](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#load-balancer-attributes).
**Default value:** `[ ]`
`default_ingress_scheme` (`string`) optional
Scheme for default ingress, one of `internet-facing` or `internal`.
**Default value:** `"internet-facing"`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`rbac_enabled` (`bool`) optional
Whether to enable RBAC and create a Service Account for pods.
**Default value:** `true`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`vpc_component_name` (`string`) optional
The name of the vpc component
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`alb_controller` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## alb-controller-ingress-class
This component deploys a Kubernetes `IngressClass` resource for the AWS Load Balancer Controller. This is not often
needed, as the default IngressClass deployed by the `eks/alb-controller` component is sufficient for most use cases, and
when it is not, a service can deploy its own IngressClass. This is for the rare case where you want to deploy an
additional IngressClass deploying an additional ALB that you nevertheless want to be shared by some services, with none
of them explicitly owning it.
## Usage
**Stack Level**: Regional
```yaml
components:
terraform:
eks/alb-controller-ingress-class:
vars:
class_name: special
group: special
ip_address_type: ipv4
scheme: internet-facing
```
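Services can then reference the class by name in their Ingress resources. A hedged sketch of a consuming Ingress (the service name and rule are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
spec:
  # References the IngressClass created by this component
  ingressClassName: special
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```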
## Variables
### Required Variables
`region` (`string`) required
AWS Region.
### Optional Variables
`additional_tags` (`map(string)`) optional
Additional tags to apply to the ingress load balancer.
**Default value:** `{ }`
`class_name` (`string`) optional
Class name for default ingress
**Default value:** `"default"`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`group` (`string`) optional
Group name for default ingress
**Default value:** `"common"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`ip_address_type` (`string`) optional
IP address type for default ingress, one of `ipv4` or `dualstack`.
**Default value:** `"dualstack"`
`is_default` (`bool`) optional
Set `true` to make this the default IngressClass. There should only be one default per cluster.
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`load_balancer_attributes` (`list(object({ key = string, value = string }))`) optional
A list of load balancer attributes to apply to the default ingress load balancer.
See [Load Balancer Attributes](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#load-balancer-attributes).
**Default value:** `[ ]`
`scheme` (`string`) optional
Scheme for default ingress, one of `internet-facing` or `internal`.
**Default value:** `"internet-facing"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_ingress_class_v1.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress_class_v1) (resource)
- [`kubernetes_manifest.alb_controller_class_params`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/manifest) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## alb-controller-ingress-group
This component provisions a Kubernetes Service that creates an AWS Application Load Balancer (ALB)
for a specific IngressGroup (https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#ingressgroup).
An IngressGroup is a feature of the AWS Load Balancer Controller
(https://github.com/kubernetes-sigs/aws-load-balancer-controller) which allows multiple Kubernetes Ingresses to
share the same Application Load Balancer.
## Usage
**Stack Level**: Regional
Once the catalog file is created, the file can be imported as follows.
```yaml
import:
- catalog/eks/alb-controller-ingress-group
```
The default catalog values (e.g. `stacks/catalog/eks/alb-controller-ingress-group.yaml`) will create a Kubernetes
Service in the `default` namespace with an IngressGroup
(https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#ingressgroup)
named `alb-controller-ingress-group`.
```yaml
components:
terraform:
eks/alb-controller-ingress-group:
metadata:
component: eks/alb-controller-ingress-group
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
# change the name of the Ingress Group
name: alb-controller-ingress-group
```
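Services can then attach to the shared ALB by setting the group annotation on their own Ingress resources. A hypothetical example (the app name, namespace, and port are illustrative placeholders):

```yaml
# Hypothetical Ingress joining the shared IngressGroup created above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  namespace: example
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: alb-controller-ingress-group
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```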
## Variables
### Required Variables
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region
### Optional Variables
`additional_annotations` (`map(any)`) optional
Additional annotations to add to the Kubernetes ingress
**Default value:** `{ }`
`alb_access_logs_enabled` (`bool`) optional
Whether or not to enable access logs for the ALB
**Default value:** `false`
`global_accelerator_component_name` (`string`) optional
The name of the `global_accelerator` component
**Default value:** `"global-accelerator"`
`global_accelerator_enabled` (`bool`) optional
Whether or not Global Accelerator Endpoint Group should be provisioned for the load balancer
**Default value:** `false`
`host` (`string`) optional
Hostname override. When set, this takes precedence over dns_delegated lookup.
**Default value:** `""`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_service_enabled` (`bool`) optional
Whether or not to enable a default kubernetes service
**Default value:** `false`
`kubernetes_service_path` (`string`) optional
The kubernetes default service's path if enabled
**Default value:** `"/*"`
`kubernetes_service_port` (`number`) optional
The kubernetes default service's port if enabled
**Default value:** `8080`
`tls_enabled` (`bool`) optional
Whether to enable TLS on the ingress. Requires an ACM certificate for the host.
**Default value:** `true`
`waf_component_name` (`string`) optional
The name of the `waf` component
**Default value:** `"waf"`
`waf_enabled` (`bool`) optional
Whether or not WAF ACL annotation should be provisioned for the load balancer
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`annotations`
The annotations of the Ingress
`group_name`
The value of `alb.ingress.kubernetes.io/group.name` of the Ingress
`host`
The name of the host used by the Ingress
`ingress_class`
The value of the `kubernetes.io/ingress.class` annotation of the Kubernetes Ingress
`load_balancer_name`
The name of the load balancer created by the Ingress
`load_balancer_scheme`
The value of the `alb.ingress.kubernetes.io/scheme` annotation of the Kubernetes Ingress
`message_body_length`
The length of the message body to ensure it's lower than the maximum limit
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`global_accelerator` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`load_balancer_name` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`waf` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_globalaccelerator_endpoint_group.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/globalaccelerator_endpoint_group) (resource)
- [`kubernetes_ingress_v1.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress_v1) (resource)
- [`kubernetes_namespace.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
- [`kubernetes_service.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_lb.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lb) (data source)
---
## argocd
This component provisions [Argo CD](https://argoproj.github.io/cd/), a declarative GitOps continuous delivery tool for Kubernetes.
Note: Argo CD CRDs must be installed separately from this component/Helm release.
## Usage
### Install Argo CD CRDs
Install the Argo CD CRDs prior to deploying this component:
```shell
kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=<version-tag>"
# e.g., for version v2.4.9:
kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=v2.4.9"
```
### Preparing AppProject repos
First, make sure you have a GitHub repo ready to go. We have a component for this, called the `argocd-repo` component. It
will create a GitHub repo and add some secrets and code owners. Most importantly, it configures an
`applicationset.yaml` that includes all the details Helm needs to create the ArgoCD CRDs. These CRDs let ArgoCD know how to
fulfill changes to its repo.
```yaml
components:
terraform:
argocd-repo-defaults:
metadata:
type: abstract
vars:
enabled: true
github_user: acme_admin
github_user_email: infra@acme.com
github_organization: ACME
github_codeowner_teams:
- "@ACME/acme-admins"
- "@ACME/CloudPosse"
- "@ACME/developers"
gitignore_entries:
- "**/.DS_Store"
- ".DS_Store"
- "**/.vscode"
- "./vscode"
- ".idea/"
- ".vscode/"
permissions:
- team_slug: acme-admins
permission: admin
- team_slug: CloudPosse
permission: admin
- team_slug: developers
permission: push
```
### Injecting infrastructure details into applications
Second, your application repos can use values to best configure their Helm releases. We have an `eks/platform`
component for exposing various infrastructure outputs. It performs remote-state lookups and stores the results in SSM. We
demonstrate how to pull the platform SSM parameters later. Here's an example `eks/platform` config:
```yaml
components:
terraform:
eks/platform:
metadata:
type: abstract
component: eks/platform
backend:
s3:
workspace_key_prefix: platform
deps:
- catalog/eks/cluster
- catalog/eks/alb-controller-ingress-group
- catalog/acm
vars:
enabled: true
name: "platform"
eks_component_name: eks/cluster
ssm_platform_path: /platform/%s/%s
references:
default_alb_ingress_group:
component: eks/alb-controller-ingress-group
output: .group_name
default_ingress_domain:
component: dns-delegated
environment: gbl
output: "[.zones[].name][-1]"
eks/platform/acm:
metadata:
component: eks/platform
inherits:
- eks/platform
vars:
eks_component_name: eks/cluster
references:
default_ingress_domain:
component: acm
environment: use2
output: .domain_name
eks/platform/dev:
metadata:
component: eks/platform
inherits:
- eks/platform
vars:
platform_environment: dev
acm/qa2:
settings:
spacelift:
workspace_enabled: true
metadata:
component: acm
vars:
enabled: true
name: acm-qa2
tags:
Team: sre
Service: acm
process_domain_validation_options: true
validation_method: DNS
dns_private_zone_enabled: false
certificate_authority_enabled: false
```
In the previous sample, we create platform settings for a `dev` platform and a `qa2` platform. Note that these are
arbitrary titles used to separate the SSM parameters so that if, say, a particular hostname is needed, we can
safely select the right one using a moniker such as `qa2`. The titles are otherwise meaningless and do not need to align
with any particular stage or tenant.
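To illustrate how the stored parameters are named, the `ssm_platform_path` format string from the config above expands with the platform name and a reference key. This is a sketch under that assumption; the key `default_ingress_domain` comes from the `references` block, and the resulting parameter would be read back with `aws ssm get-parameter`:

```shell
# Sketch: how ssm_platform_path ("/platform/%s/%s") expands into a parameter name.
ssm_platform_path="/platform/%s/%s"
# The first %s is the platform name, the second the reference key.
printf "${ssm_platform_path}\n" dev default_ingress_domain
# -> /platform/dev/default_ingress_domain
```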
### ArgoCD on SAML / AWS Identity Center (formerly aws-sso)
Here's an example snippet for how to use this component:
```yaml
components:
terraform:
eks/argocd:
settings:
spacelift:
workspace_enabled: true
depends_on:
- argocd-applicationset
- tenant-gbl-corp-argocd-deploy-non-prod
vars:
enabled: true
alb_group_name: argocd
alb_name: argocd
alb_logs_prefix: argocd
certificate_issuer: selfsigning-issuer
github_organization: MyOrg
oidc_enabled: false
saml_enabled: true
ssm_store_account: corp
ssm_store_account_region: us-west-2
argocd_repo_name: argocd-deploy-non-prod
argocd_rbac_policies:
- "p, role:org-admin, applications, *, */*, allow"
- "p, role:org-admin, clusters, get, *, allow"
- "p, role:org-admin, repositories, get, *, allow"
- "p, role:org-admin, repositories, create, *, allow"
- "p, role:org-admin, repositories, update, *, allow"
- "p, role:org-admin, repositories, delete, *, allow"
# Note: the IDs for AWS Identity Center groups will change if you alter/replace them:
argocd_rbac_groups:
- group: deadbeef-dead-beef-dead-beefdeadbeef
role: admin
- group: badca7sb-add0-65ba-dca7-sbadd065badc
role: reader
chart_values:
global:
logging:
format: json
level: warn
sso-saml/aws-sso:
settings:
spacelift:
workspace_enabled: true
metadata:
component: sso-saml-provider
vars:
enabled: true
ssm_path_prefix: "/sso/saml/aws-sso"
usernameAttr: email
emailAttr: email
groupsAttr: groups
```
Note: if you set up `sso-saml-provider`, you will need to restart Dex on your EKS cluster manually by deleting its pod (it will be recreated automatically):
```bash
kubectl delete pod <dex-server-pod-name> -n argocd
```
The configuration above will work for AWS Identity Center if you have the following attributes in a
[Custom SAML 2.0 application](https://docs.aws.amazon.com/singlesignon/latest/userguide/samlapps.html):
| attribute name | value | type |
| :------------- | :-------------- | :---------- |
| Subject | $\{user:subject\} | persistent |
| email | $\{user:email\} | unspecified |
| groups | $\{user:groups\} | unspecified |
You will also need to assign AWS Identity Center groups to your Custom SAML 2.0 application. Make a note of each group
and replace the IDs in the `argocd_rbac_groups` var accordingly.
### Google Workspace OIDC
To use Google OIDC:
```yaml
oidc_enabled: true
saml_enabled: false
oidc_providers:
google:
uses_dex: true
type: google
id: google
name: Google
serviceAccountAccess:
enabled: true
key: googleAuth.json
value: /sso/oidc/google/serviceaccount
admin_email: an_actual_user@acme.com
config:
# This filters emails when signing in with Google to only this domain. helpful for picking the right one.
hostedDomains:
- acme.com
clientID: /sso/saml/google/clientid
clientSecret: /sso/saml/google/clientsecret
```
### Working with ArgoCD and GitHub
Here's a simple GitHub action that will trigger a deployment in ArgoCD:
```yaml
# NOTE: Example will show dev, and qa2
name: argocd-deploy
on:
push:
branches:
- main
jobs:
ci:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v2.1.0
with:
aws-region: us-east-2
role-to-assume: arn:aws:iam::123456789012:role/github-action-worker
- name: Build
shell: bash
run: docker build -t some.docker.repo/acme/app . && docker push some.docker.repo/acme/app
- name: Checkout Argo Configuration
uses: actions/checkout@v3
with:
repository: acme/argocd-deploy-non-prod
ref: main
path: argocd-deploy-non-prod
- name: Deploy to dev
shell: bash
run: |
echo Rendering helmfile:
helmfile \
--namespace acme-app \
--environment dev \
--file deploy/app/release.yaml \
--state-values-file <(aws ssm get-parameter --name /platform/dev),<(docker image inspect some.docker.repo/acme/app) \
template > argocd-deploy-non-prod/plat/use2-dev/apps/my-preview-acme-app/manifests/resources.yaml
echo Updating sha for app:
yq e '' -i argocd-deploy-non-prod/plat/use2-dev/apps/my-preview-acme-app/config.yaml
echo Committing new helmfile
pushd argocd-deploy-non-prod
git add --all
git commit --message 'Updating acme-app'
git push
popd
```
In the above example, we make a few assumptions:
- You've already made the app in ArgoCD by creating a YAML file in your non-prod ArgoCD repo at the path
`plat/use2-dev/apps/my-preview-acme-app/config.yaml` with contents:
```yaml
app_repository: acme/app
app_commit: deadbeefdeadbeef
app_hostname: https://some.app.endpoint/landing_page
name: my-feature-branch.acme-app
namespace: my-feature-branch
manifests: plat/use2-dev/apps/my-preview-acme-app/manifests
```
- You have set up `ecr` with permissions for GitHub to push Docker images to it.
- You already have your `ApplicationSet` and `AppProject` CRDs in `plat/use2-dev/argocd/applicationset.yaml`, which
  should be generated by our `argocd-repo` component.
- Your app has a [helmfile template](https://helmfile.readthedocs.io/en/latest/#templating) in `deploy/app/release.yaml`.
- That helmfile template can accept the `eks/platform` config, which is pulled from SSM at the path configured in
  `eks/platform/defaults`.
- The helmfile template can update container resources using the output of `docker image inspect`.
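The helmfile template assumed above might look roughly like the following. This is a minimal sketch, not a definitive implementation: the chart path and value keys such as `default_ingress_domain` are illustrative, with the state values supplied via `--state-values-file` as shown in the workflow:

```yaml
# deploy/app/release.yaml (sketch). State values come from the SSM parameters
# and the `docker image inspect` output passed via --state-values-file.
environments:
  dev:
releases:
  - name: acme-app
    namespace: acme-app
    chart: ./chart
    values:
      - image:
          repository: some.docker.repo/acme/app
        ingress:
          host: 'app.{{ .Values.default_ingress_domain }}'
```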
### Notifications
Here's a configuration for letting ArgoCD send notifications back to GitHub:
1. [Create GitHub PAT](https://docs.github.com/en/enterprise-server@3.6/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token)
with scope `repo:status`
2. Save the PAT to SSM `/argocd/notifications/notifiers/common/github-token`
3. Use this atmos stack configuration
```yaml
components:
  terraform:
    eks/argocd/notifications:
      metadata:
        component: eks/argocd
      vars:
        github_default_notifications_enabled: true
```
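Step 2 above can be done with the AWS CLI, for example (run with credentials for the account and region where ArgoCD reads its SSM parameters; `$GITHUB_PAT` holds the token you created):

```shell
# Store the GitHub PAT as a SecureString so ArgoCD notifications can read it
aws ssm put-parameter \
  --name /argocd/notifications/notifiers/common/github-token \
  --type SecureString \
  --value "$GITHUB_PAT"
```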
### Webhook
Here's a configuration for letting GitHub notify ArgoCD on commit:
1. [Create GitHub PAT](https://docs.github.com/en/enterprise-server@3.6/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token)
with scope `admin:repo_hook`
2. Save the PAT to SSM `/argocd/github/api_key`
3. Use this atmos stack configuration
```yaml
components:
  terraform:
    eks/argocd/notifications:
      metadata:
        component: eks/argocd
      vars:
        github_webhook_enabled: true
```
#### Creating Webhooks with `github-webhook`
If you are creating webhooks for ArgoCD deployment repos in multiple GitHub Organizations, you cannot use the same
Terraform GitHub provider for all of them. Instead, we can use Atmos to deploy multiple components. To do this, disable
the webhook creation in this component and deploy the webhook with the `github-webhook` component as follows:
```yaml
components:
  terraform:
    eks/argocd:
      metadata:
        component: eks/argocd
      inherits:
        - eks/argocd/defaults
      vars:
        github_webhook_enabled: true # create webhook value; required for argo-cd chart
        create_github_webhook: false # created with github-webhook
        argocd_repositories:
          "argocd-deploy-non-prod/org1": # this is the name of the `argocd-repo` component for "org1"
            environment: ue2
            stage: auto
            tenant: core
          "argocd-deploy-non-prod/org2":
            environment: ue2
            stage: auto
            tenant: core
    webhook/org1/argocd:
      metadata:
        component: github-webhook
      vars:
        github_organization: org1
        github_repository: argocd-deploy-non-prod
        webhook_url: "https://argocd.ue2.dev.plat.acme.org/api/webhook"
        ssm_github_webhook: "/argocd/github/webhook"
    webhook/org2/argocd:
      metadata:
        component: github-webhook
      vars:
        github_organization: org2
        github_repository: argocd-deploy-non-prod
        webhook_url: "https://argocd.ue2.dev.plat.acme.org/api/webhook"
        ssm_github_webhook: "/argocd/github/webhook"
```
### Slack Notifications
ArgoCD supports Slack notifications on application deployments.
1. In order to enable Slack notifications, first create a Slack Application following the
[ArgoCD documentation](https://argocd-notifications.readthedocs.io/en/stable/services/slack/).
1. Create an OAuth token for the new Slack App
1. Save the OAuth token to AWS SSM Parameter Store in the same account and region as the GitHub tokens. For example,
`core-use2-auto`
1. Add the app to the chosen Slack channel. _If not added, notifications will not work_
1. For this component, enable Slack integrations for each Application with `var.slack_notifications_enabled` and
`var.slack_notifications`:
```yaml
slack_notifications_enabled: true
slack_notifications:
  channel: argocd-updates
```
6. In the `argocd-repo` component, set `var.slack_notifications_channel` to the name of the Slack notification channel
to add the relevant ApplicationSet annotations
### Troubleshooting
#### Login to ArgoCD admin UI
For ArgoCD v1.9 and later, the initial admin password is available from a Kubernetes secret named
`argocd-initial-admin-secret`. To get the initial password, execute the following command:
```shell
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 --decode
```
Then open the ArgoCD admin UI and use the username `admin` and the password obtained in the previous step to log in to
the ArgoCD admin.
#### Error "server.secretkey is missing"
If you provision a new version of the `eks/argocd` component, and some Helm Chart values get updated, you might
encounter the error "server.secretkey is missing" in the ArgoCD admin UI. To fix the error, execute the following
commands:
```shell
# Download `kubeconfig` and set EKS cluster
set-eks-cluster cluster-name
# Restart the `argocd-server` Pods
kubectl rollout restart deploy/argocd-server -n argocd
# Get the new admin password from the Kubernetes secret
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 --decode
```
Reference: https://stackoverflow.com/questions/75046330/argo-cd-error-server-secretkey-is-missing
## Variables
### Required Variables
`github_organization` (`string`) required
GitHub Organization
`region` (`string`) required
AWS Region.
`ssm_store_account` (`string`) required
Account storing SSM parameters
`ssm_store_account_region` (`string`) required
AWS region storing SSM parameters
### Optional Variables
`admin_enabled` (`bool`) optional
Toggles Admin user creation in the deployed chart
**Default value:** `false`
`alb_group_name` (`string`) optional
A name used in annotations to reuse an ALB (e.g. `argocd`) or to generate a new one
**Default value:** `null`
`alb_logs_bucket` (`string`) optional
The name of the bucket for ALB access logs. The bucket must have a policy allowing the ELB logging principal
**Default value:** `""`
The name of the ALB (e.g. `argocd`) provisioned by `alb-controller`. Works together with `var.alb_group_name`
**Default value:** `null`
`anonymous_enabled` (`bool`) optional
Toggles anonymous user access using default RBAC setting (Defaults to read-only)
**Default value:** `false`
`argocd_apps_chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"argocd-apps"`
Set release description attribute (visible in the history).
**Default value:** `"A Helm chart for managing additional Argo CD Applications and Projects"`
Default ArgoCD RBAC role.
See https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/#basic-built-in-roles for more information.
**Default value:** `"role:readonly"`
`argocd_rbac_groups` optional
List of ArgoCD Group Role Assignment strings to be added to the argocd-rbac configmap policy.csv item.
e.g. `[{ group: idp-group-name, role: argocd-role-name }]`
becomes: `g, idp-group-name, role:argocd-role-name`
See https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/ for more information.
**Type:**
```hcl
list(object({
group = string,
role = string
}))
```
**Default value:** `[ ]`
`argocd_rbac_policies` (`list(string)`) optional
List of ArgoCD RBAC Permission strings to be added to the argocd-rbac configmap policy.csv item.
See https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/ for more information.
**Default value:** `[ ]`
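For illustration, the two RBAC inputs might be combined in a stack configuration like this (the group and role names are hypothetical):

```yaml
vars:
  argocd_rbac_policies:
    - "p, role:org-admin, applications, *, */*, allow"
  argocd_rbac_groups:
    - group: idp-group-name
      role: org-admin
```

This renders a `p` entry and the line `g, idp-group-name, role:org-admin` into the `policy.csv` item of the argocd-rbac configmap.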
`argocd_repositories` optional
Map of objects defining an `argocd_repo` to configure. The key is the name of the ArgoCD repository.
**Type:**
```hcl
map(object({
environment = string # The environment where the `argocd_repo` component is deployed.
stage = string # The stage where the `argocd_repo` component is deployed.
tenant = string # The tenant where the `argocd_repo` component is deployed.
}))
```
**Default value:** `{ }`
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"argo-cd"`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://argoproj.github.io/argo-helm"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `"5.55.0"`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_github_webhook` (`bool`) optional
Enable GitHub webhook creation
Use this to create the GitHub Webhook for the given ArgoCD repo using the value created when `var.github_webhook_enabled` is `true`.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `false`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`forecastle_enabled` (`bool`) optional
Toggles Forecastle integration in the deployed chart
**Default value:** `false`
`github_app_enabled` (`bool`) optional
Whether to use GitHub App authentication instead of PAT
**Default value:** `false`
`github_app_id` (`string`) optional
The ID of the GitHub App to use for authentication
**Default value:** `null`
`github_app_installation_id` (`string`) optional
The Installation ID of the GitHub App to use for authentication
**Default value:** `null`
`github_base_url` (`string`) optional
This is the target GitHub base API endpoint. Providing a value is a requirement when working with GitHub Enterprise. It is optional to provide this value and it can also be sourced from the `GITHUB_BASE_URL` environment variable. The value must end with a slash, for example: `https://terraformtesting-ghe.westus.cloudapp.azure.com/`
**Default value:** `null`
Enable default GitHub commit statuses notifications (required for CD sync mode)
**Default value:** `true`
`github_deploy_keys_enabled` (`bool`) optional
Enable GitHub deploy keys for the repository. These are used for Argo CD application syncing.
Alternatively, you can use a GitHub App to access this desired state repository configured with `var.github_app_enabled`, `var.github_app_id`, and `var.github_app_installation_id`.
**Default value:** `true`
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`host` (`string`) optional
Host name to use for ingress and ALB
**Default value:** `""`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
The namespace to install the release into.
**Default value:** `"argocd"`
Notification Triggers to configure.
See: https://argocd-notifications.readthedocs.io/en/stable/triggers/
See: [Example value in argocd-notifications Helm Chart](https://github.com/argoproj/argo-helm/blob/a0a74fb43d147073e41aadc3d88660b312d6d638/charts/argocd-notifications/values.yaml#L352)
**Type:**
```hcl
map(list(
object({
oncePer = optional(string)
send = list(string)
when = string
})
))
```
**Default value:** `{ }`
`oidc_enabled` (`bool`) optional
Toggles OIDC integration in the deployed chart
**Default value:** `false`
`oidc_issuer` (`string`) optional
OIDC issuer URL
**Default value:** `""`
`oidc_name` (`string`) optional
Name of the OIDC resource
**Default value:** `""`
`oidc_rbac_scopes` (`string`) optional
OIDC RBAC scopes to request
**Default value:** `"[argocd_realm_access]"`
`oidc_requested_scopes` (`string`) optional
Set of OIDC scopes to request
**Default value:** `"[\"openid\", \"profile\", \"email\", \"groups\"]"`
`rbac_enabled` (`bool`) optional
Enable Service Account for pods.
**Default value:** `true`
`resources` optional
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
**Default value:** `null`
`saml_enabled` (`bool`) optional
Toggles SAML integration in the deployed chart
**Default value:** `false`
`saml_rbac_scopes` (`string`) optional
SAML RBAC scopes to request
**Default value:** `"[email,groups]"`
Service type for exposing the ArgoCD service. The available type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort).
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
**Default value:** `"NodePort"`
`slack_notifications` optional
ArgoCD Slack notification configuration. Requires Slack Bot created with token stored at the given SSM Parameter path.
See: https://argocd-notifications.readthedocs.io/en/stable/services/slack/
**Type:**
```hcl
object({
token_ssm_path = optional(string, "/argocd/notifications/notifiers/slack/token")
api_url = optional(string, null)
username = optional(string, "ArgoCD")
icon = optional(string, null)
})
```
**Default value:** `{ }`
`slack_notifications_enabled` (`bool`) optional
Whether or not to enable Slack notifications. See `var.slack_notifications`.
**Default value:** `false`
`ssm_github_api_key` (`string`) optional
SSM path to the GitHub API key
**Default value:** `"/argocd/github/api_key"`
`ssm_github_app_private_key` (`string`) optional
SSM path to the GitHub App private key
**Default value:** `"/argocd/github/app_private_key"`
SSM path to the GitHub App private key for notifications
**Default value:** `"/argocd/github_notifications/app_private_key"`
`ssm_oidc_client_id` (`string`) optional
The SSM Parameter Store path for the ID of the IdP client
**Default value:** `"/argocd/oidc/client_id"`
`ssm_oidc_client_secret` (`string`) optional
The SSM Parameter Store path for the secret of the IdP client
**Default value:** `"/argocd/oidc/client_secret"`
`ssm_store_account_tenant` (`string`) optional
Tenant of the account storing SSM parameters.
If the tenant label is not used, leave this as null.
**Default value:** `null`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `300`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
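As a sketch, a descriptor that formats three labels into a slug might be configured like this (the descriptor name `stack` is arbitrary):

```hcl
descriptor_formats = {
  stack = {
    format = "%s-%s-%s"
    labels = ["tenant", "environment", "stage"]
  }
}
```

With `tenant = "core"`, `environment = "ue2"`, and `stage = "auto"`, the `descriptors` output would then contain `stack = "core-ue2-auto"`.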
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`github_webhook_value`
The value of the GitHub webhook secret used for ArgoCD
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `github`, version: `>= 4.0`
- `helm`, version: `>= 2.6.0, < 3.0.0`
- `kubernetes`, version: `>= 2.9.0, != 2.21.0`
- `random`, version: `>= 3.5`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `github`, version: `>= 4.0`
- `kubernetes`, version: `>= 2.9.0, != 2.21.0`
- `random`, version: `>= 3.5`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`argocd` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`argocd_apps` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`argocd_repo` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`iam_roles_config_secrets` | latest | `../../account-map/modules/iam-roles` | n/a
`notifications_notifiers` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`notifications_templates` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`saml_sso_providers` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`github_repository_webhook.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_webhook) (resource)
- [`random_password.webhook`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_ssm_parameter.github_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.github_app_private_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.github_deploy_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.github_notifications_app_private_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.oidc_client_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.oidc_client_secret`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.slack_notifications`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameters_by_path.argocd_notifications`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameters_by_path) (data source)
- [`kubernetes_resources.crd`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/resources) (data source)
---
## cert-manager
This component creates a Helm release for [cert-manager](https://github.com/jetstack/cert-manager) on a Kubernetes
cluster. [cert-manager](https://github.com/jetstack/cert-manager) is a Kubernetes addon that provisions X.509
certificates.
## Usage
**Stack Level**: Regional
Once the catalog file is created, the file can be imported as follows.
```yaml
import:
- catalog/eks/cert-manager
...
```
The default catalog values (e.g. `stacks/catalog/eks/cert-manager.yaml`):
```yaml
enabled: true
name: cert-manager
kubernetes_namespace: cert-manager
# `helm_manifest_experiment_enabled` does not work with cert-manager or any Helm chart that uses CRDs
helm_manifest_experiment_enabled: false
# Use the cert-manager as a private CA (Certificate Authority)
# to issue certificates for use within the Kubernetes cluster.
# Something like this is required for the ALB Ingress Controller.
cert_manager_issuer_selfsigned_enabled: true
# Use Let's Encrypt to issue certificates for use outside the Kubernetes cluster,
# ones that will be trusted by browsers.
# These do not (yet) work with the ALB Ingress Controller,
# which requires ACM certificates, so we have no use for them.
letsencrypt_enabled: true
# cert_manager_issuer_support_email_template is only used if letsencrypt_enabled is true.
# If it were true, we would want to set it at the organization level.
cert_manager_issuer_support_email_template: "aws+%s@acme.com"
cert_manager_repository: https://charts.jetstack.io
cert_manager_chart: cert-manager
cert_manager_chart_version: v1.5.4
# use a local chart to provision Certificate Issuers
cert_manager_issuer_chart: ./cert-manager-issuer/
cert_manager_resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
```
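Once the component is deployed, you can sanity-check the release with `kubectl` (the exact issuer names depend on your configuration):

```shell
# cert-manager pods should be Running in the release namespace
kubectl get pods -n cert-manager
# List the ClusterIssuers provisioned by the issuer chart
kubectl get clusterissuers
```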
## Variables
### Required Variables
If `true`, if any part of the installation process fails, all parts are treated as failed. Highly recommended to prevent cert-manager from getting into a wedged state. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`cart_manager_rbac_enabled` (`bool`) optional
Service Account for pods.
**Default value:** `true`
`cert_manager_chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"cert-manager"`
`cert_manager_chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cert_manager_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`cert_manager_issuer_chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"./cert-manager-issuer/"`
Whether or not to use selfsigned issuer.
**Default value:** `true`
`cert_manager_issuer_values` (`any`) optional
Additional values to yamlencode as `helm_release` values for cert-manager-issuer.
**Default value:** `{ }`
`cert_manager_metrics_enabled` (`bool`) optional
Whether or not to enable metrics for cert-manager.
**Default value:** `false`
`cert_manager_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://charts.jetstack.io"`
`cert_manager_resources` optional
The cpu and memory of the cert manager's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
**Default value:**
```hcl
{
"limits": {
"cpu": "200m",
"memory": "256Mi"
},
"requests": {
"cpu": "100m",
"memory": "128Mi"
}
}
```
`cert_manager_values` (`any`) optional
Additional values to yamlencode as `helm_release` values for cert-manager.
**Default value:** `{ }`
`cleanup_on_fail` (`bool`) optional
If `true`, resources created in this deploy will be deleted when the deploy fails. Highly recommended to prevent cert-manager from getting into a wedged state.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `true`.
**Default value:** `true`
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
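For example, if your kubeconfig contexts are named after the cluster with a fixed prefix, the format string might look like the following sketch (the file path and `eks_` prefix are illustrative only):

```yaml
components:
  terraform:
    eks/cert-manager: # component name is an assumption
      vars:
        kubeconfig_file_enabled: true
        kubeconfig_file: "/dev/shm/kubeconfig" # illustrative path
        # the single `%s` is replaced with the cluster name
        kubeconfig_context_format: "eks_%s"
```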
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
The namespace to install the release into.
**Default value:** `"cert-manager"`
`letsencrypt_enabled` (`bool`) optional
Whether or not to use letsencrypt issuer and manager. If this is enabled, it will also provision an IAM role.
**Default value:** `false`
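To opt in to the Let's Encrypt issuer, flip this flag in your stack configuration. A minimal sketch, assuming a component named `eks/cert-manager`; per the description above, enabling this also provisions an IAM role (verify the role's exact purpose against the component source):

```yaml
components:
  terraform:
    eks/cert-manager: # component name is an assumption
      vars:
        # also provisions an IAM role for the issuer, as noted above
        letsencrypt_enabled: true
```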
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Set `true` to wait until all resources are in a ready state before marking the release as successful. Ignored if provisioning Issuers. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
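As an illustration of the map shape described above, a hypothetical descriptor that renders the tenant, environment, and stage labels (the descriptor name `stack` is arbitrary and becomes a key in the `descriptors` output):

```yaml
components:
  terraform:
    eks/cert-manager: # any component using the null-label pattern
      vars:
        descriptor_formats:
          stack: # arbitrary descriptor name
            format: "%v-%v-%v"
            labels: ["tenant", "environment", "stage"]
```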
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cert_manager_issuer_metadata`
Block status of the deployed release
`cert_manager_metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cert_manager` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`cert_manager_issuer` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## cloudwatch
This component installs the CloudWatch Observability chart for EKS. You may want to use this chart rather than the addon if you need to install a priority class with the CloudWatch Observability chart. The addon at this time does not support priority classes with configuration (see References for details).
## Usage
**Stack Level**: Regional
For example, to install the CloudWatch Observability chart for EKS:
```yaml
components:
terraform:
eks/cloudwatch:
vars:
name: eks-cloudwatch
# We need to create a priority class for the CloudWatch agent to use
# to ensure the cloudwatch-agent and fluent-bit pods are scheduled on all nodes
priority_class_enabled: true
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used
**Default value:** `true`
`chart` (`string`) optional
The Helm chart to install
**Default value:** `"amazon-cloudwatch-observability"`
`chart_description` (`string`) optional
Set release description attribute (visible in the history)
**Default value:** `"Amazon CloudWatch Observability for EKS"`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://aws-observability.github.io/helm-charts"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values
**Default value:** `{ }`
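These values are merged into the `helm_release` on top of the chart defaults, so any key from the chart's `values.yaml` can be set here. A hedged sketch (the `tolerations` key is illustrative only; consult the chart's `values.yaml` for the actual schema):

```yaml
components:
  terraform:
    eks/cloudwatch:
      vars:
        chart_values:
          # keys are chart-specific; `tolerations` here is illustrative
          tolerations:
            - operator: Exists
```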
`chart_version` (`string`) optional
The version of the Helm chart to install
**Default value:** `"v3.0.0"`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails
**Default value:** `true`
`eks_component_name` (`string`) optional
The name of the EKS component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
Name of the Kubernetes Namespace this pod is deployed in to
**Default value:** `"amazon-cloudwatch"`
`priority_class_enabled` (`bool`) optional
Whether to enable the priority class for the EKS addon
**Default value:** `false`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks)
**Default value:** `900`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cloudwatch_log_group_names`
List of CloudWatch log group names created by the agent
`metadata`
Block status of the deployed release
`priority_class_name`
Name of the priority class
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cloudwatch` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`local` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_role_policy_attachment.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`kubernetes_priority_class.this`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/priority_class) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## cluster
This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups and Fargate
profiles.
:::note
#### Windows not supported
This component has not been tested with Windows worker nodes of any launch type. Although upstream modules support
Windows nodes, there are likely issues around incorrect or insufficient IAM permissions or other configuration that
would need to be resolved for this component to properly configure the upstream modules for Windows nodes. If you need
Windows nodes, please experiment and be on the lookout for issues, and then report any issues to Cloud Posse.
:::
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
This example expects the [Cloud Posse Reference Architecture](https://docs.cloudposse.com/) Identity and Network designs
deployed for mapping users to EKS service roles and granting access in a private network. In addition, this example has
the GitHub OIDC integration added and makes use of Karpenter to dynamically scale cluster nodes.
For more on these requirements, see [Identity Reference Architecture](https://docs.cloudposse.com/layers/identity/),
[Network Reference Architecture](https://docs.cloudposse.com/layers/network/), the
[GitHub OIDC component](https://docs.cloudposse.com/components/library/aws/github-oidc-provider/), and the
[Karpenter component](https://docs.cloudposse.com/components/library/aws/eks/karpenter/).
### Mixin pattern for Kubernetes version
We recommend separating out the Kubernetes and related addons versions into a separate mixin (one per Kubernetes minor
version), to make it easier to run different versions in different environments, for example while testing a new
version.
We also recommend leaving "resolve conflicts" settings unset and therefore using the default "OVERWRITE" setting because
any custom configuration that you would want to preserve should be managed by Terraform configuring the add-ons
directly.
For example, create `catalog/eks/cluster/mixins/k8s-1-29.yaml` with the following content:
```yaml
components:
terraform:
eks/cluster:
vars:
cluster_kubernetes_version: "1.29"
# You can set all the add-on versions to `null` to use the latest version,
# but that introduces drift as new versions are released. As usual, we recommend
# pinning the versions to a specific version and upgrading when convenient.
# Determine the latest version of the EKS add-ons for the specified Kubernetes version
# EKS_K8S_VERSION=1.29 # replace with your cluster version
# ADD_ON=vpc-cni # replace with the add-on name
# echo "${ADD_ON}:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name $ADD_ON \
# --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
# To see versions for all the add-ons, wrap the above command in a for loop:
# for ADD_ON in vpc-cni kube-proxy coredns aws-ebs-csi-driver aws-efs-csi-driver; do
# echo "${ADD_ON}:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name $ADD_ON \
# --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
# done
# To see the custom configuration schema for an add-on, run the following command:
# aws eks describe-addon-configuration --addon-name aws-ebs-csi-driver \
# --addon-version v1.20.0-eksbuild.1 | jq '.configurationSchema | fromjson'
# See the `coredns` configuration below for an example of how to set a custom configuration.
# https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
# https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on
addons:
# https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
# https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
# https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role
# https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/#deploy-vpc-cni-managed-add-on
vpc-cni:
addon_version: "v1.16.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
# https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
kube-proxy:
addon_version: "v1.29.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
# https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
coredns:
addon_version: "v1.11.1-eksbuild.4" # set `addon_version` to `null` to use the latest version
## override default replica count of 2. In very large clusters, you may want to increase this.
configuration_values: '{"replicaCount": 3}'
# https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
# https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons
# https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html#csi-iam-role
# https://github.com/kubernetes-sigs/aws-ebs-csi-driver
aws-ebs-csi-driver:
addon_version: "v1.27.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
# If you are not using [volume snapshots](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/#how-to-use-volume-snapshots)
# (and you probably are not), disable the EBS Snapshotter
# See https://github.com/aws/containers-roadmap/issues/1919
configuration_values: '{"sidecars":{"snapshotter":{"forceEnable":false}}}'
aws-efs-csi-driver:
addon_version: "v1.7.7-eksbuild.1" # set `addon_version` to `null` to use the latest version
# Set a short timeout in case of conflict with an existing efs-controller deployment
create_timeout: "7m"
```
### Common settings for all Kubernetes versions
In your main stack configuration, you can then set the Kubernetes version by importing the appropriate mixin:
```yaml
#
import:
- catalog/eks/cluster/mixins/k8s-1-29
components:
terraform:
eks/cluster:
vars:
enabled: true
name: eks
vpc_component_name: "vpc"
eks_component_name: "eks/cluster"
# Your choice of availability zones or availability zone ids
# availability_zones: ["us-east-1a", "us-east-1b", "us-east-1c"]
aws_ssm_agent_enabled: true
allow_ingress_from_vpc_accounts:
- tenant: core
stage: auto
- tenant: core
stage: corp
- tenant: core
stage: network
public_access_cidrs: []
allowed_cidr_blocks: []
allowed_security_groups: []
enabled_cluster_log_types:
# Caution: enabling `api` log events may lead to a substantial increase in Cloudwatch Logs expenses.
- api
- audit
- authenticator
- controllerManager
- scheduler
oidc_provider_enabled: true
# Allows GitHub OIDC role
github_actions_iam_role_enabled: true
github_actions_iam_role_attributes: ["eks"]
github_actions_allowed_repos:
- acme/infra
# We recommend, at a minimum, deploying 1 managed node group,
# with the same number of instances as availability zones (typically 3).
managed_node_groups_enabled: true
node_groups: # for most attributes, setting null here means use setting from node_group_defaults
main:
# availability_zones = null will create one autoscaling group
# in every private subnet in the VPC
availability_zones: null
# Tune the desired and minimum group size according to your baseload requirements.
# We recommend no autoscaling for the main node group, so it will
# stay at the specified desired group size, with additional
# capacity provided by Karpenter. Nevertheless, we recommend
# deploying enough capacity in the node group to handle your
# baseload requirements, and in production, we recommend you
# have a large enough node group to handle 3/2 (1.5) times your
# baseload requirements, to handle the loss of a single AZ.
desired_group_size: 3 # number of instances to start with, should be >= number of AZs
min_group_size: 3 # must be >= number of AZs
max_group_size: 3
# Can only set one of ami_release_version or kubernetes_version
# Leave both null to use latest AMI for Cluster Kubernetes version
kubernetes_version: null # use cluster Kubernetes version
ami_release_version: null # use latest AMI for Kubernetes version
attributes: []
create_before_destroy: true
cluster_autoscaler_enabled: true
instance_types:
# Tune the instance type according to your baseload requirements.
- c7a.medium
ami_type: AL2_x86_64 # use "AL2_x86_64" for standard instances, "AL2_x86_64_GPU" for GPU instances
node_userdata:
# WARNING: node_userdata is alpha status and will likely change in the future.
# Also, it is only supported for AL2 and some Windows AMIs, not BottleRocket or AL2023.
# Kubernetes docs: https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
kubelet_extra_args: >-
--kube-reserved cpu=100m,memory=0.6Gi,ephemeral-storage=1Gi --system-reserved
cpu=100m,memory=0.2Gi,ephemeral-storage=1Gi --eviction-hard
memory.available<200Mi,nodefs.available<10%,imagefs.available<15%
block_device_map:
# EBS volume for local ephemeral storage
# IGNORED if legacy `disk_encryption_enabled` or `disk_size` are set!
# Use "/dev/xvda" for most of the instances (without local NVMe)
# using most of the Linuxes, "/dev/xvdb" for BottleRocket
"/dev/xvda":
ebs:
volume_size: 100 # number of GB
volume_type: gp3
kubernetes_labels: {}
kubernetes_taints: {}
resources_to_tag:
- instance
- volume
tags: null
# The abbreviation method used for Availability Zones in your project.
# Used for naming resources in managed node groups.
# Either "short" or "fixed".
availability_zone_abbreviation_type: fixed
cluster_private_subnets_only: true
cluster_encryption_config_enabled: true
cluster_endpoint_private_access: true
cluster_endpoint_public_access: false
cluster_log_retention_period: 90
# List of `aws-team-roles` (in the account where the EKS cluster is deployed) to map to Kubernetes RBAC groups
# You cannot set `system:*` groups here, except for `system:masters`.
# The `idp:*` roles referenced here are created by the `eks/idp-roles` component.
# While set here, the `idp:*` roles will have no effect until after
# the `eks/idp-roles` component is applied, which must be after the
# `eks/cluster` component is deployed.
aws_team_roles_rbac:
- aws_team_role: admin
groups:
- system:masters
- aws_team_role: poweruser
groups:
- idp:poweruser
- aws_team_role: observer
groups:
- idp:observer
- aws_team_role: planner
groups:
- idp:observer
- aws_team_role: terraform
groups:
- system:masters
# Permission sets from AWS SSO allowing cluster access
# See `aws-sso` component.
aws_sso_permission_sets_rbac:
- aws_sso_permission_set: PowerUserAccess
groups:
- idp:poweruser
# Set to false if you are not using Karpenter
karpenter_iam_role_enabled: true
# All Fargate Profiles will use the same IAM Role when `legacy_fargate_1_role_per_profile_enabled` is set to false.
# Recommended for all new clusters, but will damage existing clusters provisioned with the legacy component.
legacy_fargate_1_role_per_profile_enabled: false
# While it is possible to deploy add-ons to Fargate Profiles, it is not recommended. Use a managed node group instead.
deploy_addons_to_fargate: false
```
### Amazon EKS End-of-Life Dates
When picking a Kubernetes version, be sure to review the
[end-of-life dates for Amazon EKS](https://endoflife.date/amazon-eks). Refer to the chart below:
| cycle | release | latest | latest release | eol | extended support |
| :---- | :--------: | :---------- | :------------: | :--------: | :--------------: |
| 1.29 | 2024-01-23 | 1.29-eks-6 | 2024-04-18 | 2025-03-23 | 2026-03-23 |
| 1.28 | 2023-09-26 | 1.28-eks-12 | 2024-04-18 | 2024-11-26 | 2025-11-26 |
| 1.27 | 2023-05-24 | 1.27-eks-16 | 2024-04-18 | 2024-07-24 | 2025-07-24 |
| 1.26 | 2023-04-11 | 1.26-eks-17 | 2024-04-18 | 2024-06-11 | 2025-06-11 |
| 1.25 | 2023-02-21 | 1.25-eks-18 | 2024-04-18 | 2024-05-01 | 2025-05-01 |
| 1.24 | 2022-11-15 | 1.24-eks-21 | 2024-04-18 | 2024-01-31 | 2025-01-31 |
| 1.23 | 2022-08-11 | 1.23-eks-23 | 2024-04-18 | 2023-10-11 | 2024-10-11 |
| 1.22 | 2022-04-04 | 1.22-eks-14 | 2023-06-30 | 2023-06-04 | 2024-09-01 |
| 1.21 | 2021-07-19 | 1.21-eks-18 | 2023-06-09 | 2023-02-16 | 2024-07-15 |
| 1.20 | 2021-05-18 | 1.20-eks-14 | 2023-05-05 | 2022-11-01 | False |
| 1.19 | 2021-02-16 | 1.19-eks-11 | 2022-08-15 | 2022-08-01 | False |
| 1.18 | 2020-10-13 | 1.18-eks-13 | 2022-08-15 | 2022-08-15 | False |
\* This chart was generated on 2024-05-12 with [the `eol` tool](https://github.com/hugovk/norwegianblue). Install it with
`python3 -m pip install --upgrade norwegianblue` and create a new table by running `eol --md amazon-eks` locally, or
view the information by visiting [the endoflife website](https://endoflife.date/amazon-eks).
You can also view the release and support timeline for
[the Kubernetes project itself](https://endoflife.date/kubernetes).
### Using Addons
EKS clusters support “Addons” that can be automatically installed on a cluster. Install these addons with the
[`var.addons` input](https://docs.cloudposse.com/components/library/aws/eks/cluster/#input_addons).
:::tip
Run the following command to see all available addons, their type, and their publisher. You can also see the URL for
addons that are available through the AWS Marketplace. Replace 1.29 with the version of your cluster. See
[Creating an addon](https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on) for
more details.
:::
```shell
EKS_K8S_VERSION=1.29 # replace with your cluster version
aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION \
  --query 'addons[].{MarketplaceProductUrl: marketplaceInformation.productUrl, Name: addonName, Owner: owner, Publisher: publisher, Type: type}' --output table
```
:::tip
You can see which versions are available for each addon by executing the following commands. Replace 1.29 with the
version of your cluster.
:::
```shell
EKS_K8S_VERSION=1.29 # replace with your cluster version
echo "vpc-cni:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name vpc-cni \
--query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "kube-proxy:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name kube-proxy \
--query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "coredns:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name coredns \
--query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "aws-ebs-csi-driver:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name aws-ebs-csi-driver \
--query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
echo "aws-efs-csi-driver:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name aws-efs-csi-driver \
--query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
```
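The five commands above differ only in the addon name. Assuming the AWS CLI is installed and configured with credentials, they can be condensed into a single loop:

```shell
EKS_K8S_VERSION=1.29 # replace with your cluster version
for addon in vpc-cni kube-proxy coredns aws-ebs-csi-driver aws-efs-csi-driver; do
  echo "${addon}:"
  aws eks describe-addon-versions --kubernetes-version "$EKS_K8S_VERSION" --addon-name "$addon" \
    --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' \
    --output table
done
```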
Some add-ons accept additional configuration. For example, the `vpc-cni` addon accepts a `disableNetworking` parameter.
View the available configuration options (as JSON Schema) via the `aws eks describe-addon-configuration` command. For
example:
```shell
aws eks describe-addon-configuration \
--addon-name aws-ebs-csi-driver \
--addon-version v1.20.0-eksbuild.1 | jq '.configurationSchema | fromjson'
```
You can then configure the add-on via the `configuration_values` input. For example:
```yaml
aws-ebs-csi-driver:
configuration_values: '{"node": {"loggingFormat": "json"}}'
```
Configure the addons like the following example:
```yaml
# https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
# https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on
# https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/
addons:
# https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
# https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
# https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role
# https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/#deploy-vpc-cni-managed-add-on
vpc-cni:
addon_version: "v1.12.2-eksbuild.1" # set `addon_version` to `null` to use the latest version
# https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
kube-proxy:
addon_version: "v1.25.6-eksbuild.1" # set `addon_version` to `null` to use the latest version
# https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
coredns:
addon_version: "v1.9.3-eksbuild.2" # set `addon_version` to `null` to use the latest version
# Override default replica count of 2, to have one in each AZ
configuration_values: '{"replicaCount": 3}'
# https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
# https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons
# https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html#csi-iam-role
# https://github.com/kubernetes-sigs/aws-ebs-csi-driver
aws-ebs-csi-driver:
addon_version: "v1.19.0-eksbuild.2" # set `addon_version` to `null` to use the latest version
# If you are not using [volume snapshots](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/#how-to-use-volume-snapshots)
# (and you probably are not), disable the EBS Snapshotter with:
configuration_values: '{"sidecars":{"snapshotter":{"forceEnable":false}}}'
```
Some addons, such as CoreDNS, require at least one node to be fully provisioned first. See
[issue #170](https://github.com/cloudposse/terraform-aws-eks-cluster/issues/170) for more details. Set
`var.addons_depends_on` to `true` to require the Node Groups to be provisioned before addons.
```yaml
addons_depends_on: true
addons:
coredns:
addon_version: "v1.8.7-eksbuild.1"
```
:::warning
Addons may not be suitable for all use-cases! For example, if you deploy Karpenter to Fargate and rely on Karpenter to provision all nodes, no nodes will be available until after the cluster component is deployed, so a node-dependent addon such as CoreDNS cannot become healthy during cluster creation.
This is one of the reasons we recommend deploying a managed node group: to ensure that the addons will become fully
functional during deployment of the cluster.
:::
For more information on upgrading EKS Addons, see
["How to Upgrade EKS Cluster Addons"](https://docs.cloudposse.com/learn/maintenance/upgrades/how-to-upgrade-eks-cluster-addons/)
### Adding and Configuring a new EKS Addon
The component already supports all the EKS addons shown in the configurations above. To add a new EKS addon that is not yet supported by the component, add it to the `addons` map (`addons` variable):
```yaml
addons:
my-addon:
addon_version: "..."
```
If the new addon requires an EKS IAM Role for Kubernetes Service Account, perform the following steps:
- Add a file `addons-custom.tf` to the `eks/cluster` folder if not already present
- In the file, add an IAM policy document with the permissions required for the addon, and use the `eks-iam-role` module
to provision an IAM Role for Kubernetes Service Account for the addon:
```hcl
data "aws_iam_policy_document" "my_addon" {
statement {
sid = "..."
effect = "Allow"
resources = ["..."]
actions = [
"...",
"..."
]
}
}
module "my_addon_eks_iam_role" {
source = "cloudposse/eks-iam-role/aws"
version = "2.1.0"
eks_cluster_oidc_issuer_url = local.eks_cluster_oidc_issuer_url
service_account_name = "..."
service_account_namespace = "..."
aws_iam_policy_document = [one(data.aws_iam_policy_document.my_addon[*].json)]
context = module.this.context
}
```
For examples of how to configure the IAM role and IAM permissions for EKS addons, see [addons.tf](https://github.com/cloudposse-terraform-components/aws-eks-cluster/tree/main/cluster/addons.tf).
- Add a file `additional-addon-support_override.tf` to the `eks/cluster` folder if not already present
- In the file, add the IAM Role for Kubernetes Service Account for the addon to the
`overridable_additional_addon_service_account_role_arn_map` map:
```hcl
locals {
overridable_additional_addon_service_account_role_arn_map = {
my-addon = module.my_addon_eks_iam_role.service_account_role_arn
}
}
```
- This map will override the default map in the [additional-addon-support.tf](https://github.com/cloudposse-terraform-components/aws-eks-cluster/tree/main/cluster/additional-addon-support.tf) file, and
  will be merged into the final map together with the default EKS addons `vpc-cni` and `aws-ebs-csi-driver` (for which this
  component creates and configures IAM Roles for Kubernetes Service Accounts)
- Follow the instructions in the [additional-addon-support.tf](https://github.com/cloudposse-terraform-components/aws-eks-cluster/tree/main/cluster/additional-addon-support.tf) file if the addon may need
to be deployed to Fargate, or has dependencies that Terraform cannot detect automatically.
## Variables
### Optional Variables
`addons` optional
Manages [EKS addons](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon) resources
**Type:**
```hcl
map(object({
enabled = optional(bool, true)
addon_version = optional(string, null)
# configuration_values is a JSON string, such as '{"computeType": "Fargate"}'.
configuration_values = optional(string, null)
# Set default resolve_conflicts to OVERWRITE because it is required on initial installation of
# add-ons that have self-managed versions installed by default (e.g. vpc-cni, coredns), and
# because any custom configuration that you would want to preserve should be managed by Terraform.
resolve_conflicts_on_create = optional(string, "OVERWRITE")
resolve_conflicts_on_update = optional(string, "OVERWRITE")
service_account_role_arn = optional(string, null)
create_timeout = optional(string, null)
update_timeout = optional(string, null)
delete_timeout = optional(string, null)
}))
```
**Default value:** `{ }`
`addons_depends_on` (`bool`) optional
If set to `true` (recommended), all addons will depend on the managed node groups provisioned by this component, and therefore will not be installed until nodes are provisioned.
See [issue #170](https://github.com/cloudposse/terraform-aws-eks-cluster/issues/170) for more details.
**Default value:** `true`
`availability_zone_abbreviation_type` (`string`) optional
Type of Availability Zone abbreviation (either `fixed` or `short`) to use in names. See https://github.com/cloudposse/terraform-aws-utils for details.
**Default value:** `"fixed"`
`availability_zone_ids` (`list(string)`) optional
List of Availability Zones IDs where subnets will be created. Overrides `availability_zones`.
Can be the full name, e.g. `use1-az1`, or just the part after the AZ ID region code, e.g. `-az1`,
to allow reusable values across regions. Consider contention for resources and spot pricing in each AZ when selecting.
Useful in some regions when using only some AZs and you want to use the same ones across multiple accounts.
**Default value:** `[ ]`
`availability_zones` (`list(string)`) optional
AWS Availability Zones in which to deploy multi-AZ resources.
Ignored if `availability_zone_ids` is set.
Can be the full name, e.g. `us-east-1a`, or just the part after the region, e.g. `a` to allow reusable values across regions.
If not provided, resources will be provisioned in every zone with a private subnet in the VPC.
**Default value:** `[ ]`
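For example (illustrative values), either input works, and short suffixes keep the value reusable across regions:

```yaml
# Illustrative: restrict the cluster to two zones using short suffixes
availability_zones: ["a", "b"]
# Or pin by AZ ID suffix (overrides availability_zones), stable across accounts:
# availability_zone_ids: ["-az1", "-az2"]
```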
`aws_ssm_agent_enabled` (`bool`) optional
Set true to attach the required IAM policy for AWS SSM agent to each EC2 instance's IAM Role
**Default value:** `false`
`aws_sso_permission_sets_rbac` optional
(Not Recommended): AWS SSO (IAM Identity Center) permission sets in the EKS deployment account to add to `aws-auth` ConfigMap.
Unfortunately, `aws-auth` ConfigMap does not support SSO permission sets, so we map the generated
IAM Role ARN corresponding to the permission set at the time Terraform runs. This is subject to change
when any changes are made to the AWS SSO configuration, invalidating the mapping, and requiring a
`terraform apply` in this project to update the `aws-auth` ConfigMap and restore access.
**Type:**
```hcl
list(object({
aws_sso_permission_set = string
groups = list(string)
}))
```
**Default value:** `[ ]`
`aws_team_roles_rbac` optional
List of `aws-team-roles` (in the target AWS account) to map to Kubernetes RBAC groups.
**Type:**
```hcl
list(object({
aws_team_role = string
groups = list(string)
}))
```
**Default value:** `[ ]`
`cluster_endpoint_private_access` (`bool`) optional
Indicates whether or not the Amazon EKS private API server endpoint is enabled. Defaults to `false`, matching the AWS EKS resource default.
**Default value:** `false`
`cluster_log_retention_period` (`number`) optional
Number of days to retain cluster logs. Requires `enabled_cluster_log_types` to be set. See https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html.
**Default value:** `0`
`cluster_private_subnets_only` (`bool`) optional
Whether to use only private subnets, or both public and private subnets
**Default value:** `false`
`color` (`string`) optional
The cluster stage represented by a color; e.g. blue, green
**Default value:** `""`
`deploy_addons_to_fargate` (`bool`) optional
Set to `true` (not recommended) to deploy addons to Fargate instead of initial node pool
**Default value:** `false`
`enabled_cluster_log_types` (`list(string)`) optional
A list of the desired control plane logging to enable. For more information, see https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. Possible values: [`api`, `audit`, `authenticator`, `controllerManager`, `scheduler`]
**Default value:** `[ ]`
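For example, to enable control plane API and audit logs and retain them for 90 days (illustrative values):

```yaml
enabled_cluster_log_types: ["api", "audit"]
cluster_log_retention_period: 90
```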
Flag to enable/disable ECR Public read-only access for the Karpenter node role. When enabled, attaches a policy granting ecr-public and sts:GetServiceBearerToken permissions to allow authenticated pulls from public.ecr.aws
**Default value:** `false`
List of ECR Public resource ARNs to scope the read-only policy. Use `["*"]` for all repositories or specify ARNs to restrict access.
**Default value:**
```hcl
[
"*"
]
```
`karpenter_iam_role_enabled` (`bool`) optional
Flag to enable/disable creation of IAM role for EC2 Instance Profile that is attached to the nodes launched by Karpenter
**Default value:** `false`
**Obsolete:** The issues this was meant to mitigate were fixed in AWS Terraform Provider v5.43.0
and Karpenter v0.33.0. This variable will be removed in a future release.
Remove this input from your configuration and leave it at default.
**Old description:** When `true` (the default), suppresses creation of the IAM Instance Profile
for nodes launched by Karpenter, to preserve the legacy behavior of
the `eks/karpenter` component creating it.
Set to `false` to enable creation of the IAM Instance Profile, which
ensures that both the role and the instance profile have the same lifecycle,
and avoids AWS Provider issue [#32671](https://github.com/hashicorp/terraform-provider-aws/issues/32671).
Use in conjunction with `eks/karpenter` component `legacy_create_karpenter_instance_profile`.
**Default value:** `true`
`legacy_fargate_1_role_per_profile_enabled` (`bool`) optional
Set to `false` for new clusters to create a single Fargate Pod Execution role for the cluster.
Set to `true` for existing clusters to preserve the old behavior of creating
a Fargate Pod Execution role for each Fargate Profile.
**Default value:** `true`
`managed_node_groups_enabled` (`bool`) optional
Set false to prevent the creation of EKS managed node groups.
**Default value:** `true`
`map_additional_iam_roles` optional
Additional IAM roles to grant access to the cluster.
*WARNING*: Full Role ARN, including path, is required for `rolearn`.
In earlier versions (with `aws-auth` ConfigMap), only the path
had to be removed from the Role ARN. The path is now required.
`username` is now ignored. This input is planned to be replaced
in a future release with a more flexible input structure that consolidates
`map_additional_iam_roles` and `map_additional_iam_users`.
**Type:**
```hcl
list(object({
rolearn = string
username = optional(string)
groups = list(string)
}))
```
**Default value:** `[ ]`
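A minimal sketch (the account ID and role name are hypothetical; note that the full role ARN, including path, is required):

```yaml
map_additional_iam_roles:
  - rolearn: "arn:aws:iam::111111111111:role/custom/eks-ci" # hypothetical role
    groups:
      - idp:observer
```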
`map_additional_iam_users` optional
Additional IAM users to grant access to the cluster.
`username` is now ignored. This input is planned to be replaced
in a future release with a more flexible input structure that consolidates
`map_additional_iam_roles` and `map_additional_iam_users`.
**Type:**
```hcl
list(object({
userarn = string
username = optional(string)
groups = list(string)
}))
```
**Default value:** `[ ]`
(Deprecated) AWS IAM Role ARNs of unmanaged Linux worker nodes to grant access to the EKS cluster.
In earlier versions, this could be used to grant access to worker nodes of any type
that were not managed by the EKS cluster. Now EKS requires that unmanaged worker nodes
be classified as Linux or Windows servers, so this input is temporarily retained
with the assumption that all worker nodes are Linux servers. (It is likely that
earlier versions did not work properly with Windows worker nodes anyway.)
This input is deprecated and will be removed in a future release.
In the future, this component will either have a way to separate Linux and Windows worker nodes,
or drop support for unmanaged worker nodes entirely.
**Default value:** `[ ]`
`node_group_defaults` optional
Defaults for node groups in the cluster
**Type:**
```hcl
object({
ami_release_version = optional(string, null)
ami_type = optional(string, null)
attributes = optional(list(string), null)
availability_zones = optional(list(string)) # set to null to use var.availability_zones
cluster_autoscaler_enabled = optional(bool, null)
create_before_destroy = optional(bool, null)
desired_group_size = optional(number, null)
instance_types = optional(list(string), null)
kubernetes_labels = optional(map(string), {})
kubernetes_taints = optional(list(object({
key = string
value = string
effect = string
})), [])
node_userdata = optional(object({
before_cluster_joining_userdata = optional(string)
bootstrap_extra_args = optional(string)
kubelet_extra_args = optional(string)
after_cluster_joining_userdata = optional(string)
}), {})
kubernetes_version = optional(string, null) # set to null to use cluster_kubernetes_version
max_group_size = optional(number, null)
min_group_size = optional(number, null)
resources_to_tag = optional(list(string), null)
tags = optional(map(string), null)
# block_device_map copied from cloudposse/terraform-aws-eks-node-group
# Keep in sync via copy and paste, but make optional
# Most of the time you want "/dev/xvda". For BottleRocket, use "/dev/xvdb".
block_device_map = optional(map(object({
no_device = optional(bool, null)
virtual_name = optional(string, null)
ebs = optional(object({
delete_on_termination = optional(bool, true)
encrypted = optional(bool, true)
iops = optional(number, null)
kms_key_id = optional(string, null)
snapshot_id = optional(string, null)
throughput = optional(number, null) # for gp3, MiB/s, up to 1000
volume_size = optional(number, 50) # disk size in GB
volume_type = optional(string, "gp3")
# Catch common camel case typos. These have no effect, they just generate better errors.
# It would be nice to actually use these, but volumeSize in particular is a number here
# and in most places it is a string with a unit suffix (e.g. 20Gi)
# Without these defined, they would be silently ignored and the default values would be used instead,
# which is difficult to debug.
deleteOnTermination = optional(any, null)
kmsKeyId = optional(any, null)
snapshotId = optional(any, null)
volumeSize = optional(any, null)
volumeType = optional(any, null)
}))
})), null)
# DEPRECATED: disk_encryption_enabled is DEPRECATED, use `block_device_map` instead.
disk_encryption_enabled = optional(bool, null)
# DEPRECATED: disk_size is DEPRECATED, use `block_device_map` instead.
disk_size = optional(number, null)
})
```
**Default value:**
```hcl
{
"block_device_map": {
"/dev/xvda": {
"ebs": {
"encrypted": true,
"volume_size": 20,
"volume_type": "gp2"
}
}
},
"desired_group_size": 1,
"instance_types": [
"t3.medium"
],
"kubernetes_version": null,
"max_group_size": 100
}
```
`node_groups` optional
List of objects defining a node group for the cluster
**Type:**
```hcl
map(object({
# EKS AMI version to use, e.g. "1.16.13-20200821" (no "v").
ami_release_version = optional(string, null)
# Type of Amazon Machine Image (AMI) associated with the EKS Node Group
ami_type = optional(string, null)
# Additional attributes (e.g. `1`) for the node group
attributes = optional(list(string), null)
# will create 1 auto scaling group in each specified availability zone
# or all AZs with subnets if none are specified anywhere
availability_zones = optional(list(string), null)
# Whether to enable Node Group to scale its AutoScaling Group
cluster_autoscaler_enabled = optional(bool, null)
# True to create new node_groups before deleting old ones, avoiding a temporary outage
create_before_destroy = optional(bool, null)
# Desired number of worker nodes when initially provisioned
desired_group_size = optional(number, null)
# Set of instance types associated with the EKS Node Group. Terraform will only perform drift detection if a configuration value is provided.
instance_types = optional(list(string), null)
# Key-value mapping of Kubernetes labels. Only labels that are applied with the EKS API are managed by this argument. Other Kubernetes labels applied to the EKS Node Group will not be managed
kubernetes_labels = optional(map(string), null)
# List of objects describing Kubernetes taints.
kubernetes_taints = optional(list(object({
key = string
value = string
effect = string
})), null)
node_userdata = optional(object({
before_cluster_joining_userdata = optional(string)
bootstrap_extra_args = optional(string)
kubelet_extra_args = optional(string)
after_cluster_joining_userdata = optional(string)
}), {})
# Desired Kubernetes master version. If you do not specify a value, the latest available version is used
kubernetes_version = optional(string, null)
# The maximum size of the AutoScaling Group
max_group_size = optional(number, null)
# The minimum size of the AutoScaling Group
min_group_size = optional(number, null)
# List of auto-launched resource types to tag
resources_to_tag = optional(list(string), null)
tags = optional(map(string), null)
# block_device_map copied from cloudposse/terraform-aws-eks-node-group
# Keep in sync via copy and paste, but make optional.
# Most of the time you want "/dev/xvda". For BottleRocket, use "/dev/xvdb".
block_device_map = optional(map(object({
no_device = optional(bool, null)
virtual_name = optional(string, null)
ebs = optional(object({
delete_on_termination = optional(bool, true)
encrypted = optional(bool, true)
iops = optional(number, null)
kms_key_id = optional(string, null)
snapshot_id = optional(string, null)
throughput = optional(number, null) # for gp3, MiB/s, up to 1000
volume_size = optional(number, 20) # Disk size in GB
volume_type = optional(string, "gp3")
# Catch common camel case typos. These have no effect, they just generate better errors.
# It would be nice to actually use these, but volumeSize in particular is a number here
# and in most places it is a string with a unit suffix (e.g. 20Gi)
# Without these defined, they would be silently ignored and the default values would be used instead,
# which is difficult to debug.
deleteOnTermination = optional(any, null)
kmsKeyId = optional(any, null)
snapshotId = optional(any, null)
volumeSize = optional(any, null)
volumeType = optional(any, null)
}))
})), null)
# DEPRECATED:
# Enable disk encryption for the created launch template (if we aren't provided with an existing launch template)
# DEPRECATED: disk_encryption_enabled is DEPRECATED, use `block_device_map` instead.
disk_encryption_enabled = optional(bool, null)
# Disk size in GiB for worker nodes. Terraform will only perform drift detection if a configuration value is provided.
# DEPRECATED: disk_size is DEPRECATED, use `block_device_map` instead.
disk_size = optional(number, null)
}))
```
**Default value:** `{ }`
`oidc_provider_enabled` (`bool`) optional
Create an IAM OIDC identity provider for the cluster, so that you can create IAM roles to associate with service accounts in the cluster, instead of using kiam or kube2iam. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
**Default value:** `true`
`public_access_cidrs` (`list(string)`) optional
Indicates which CIDR blocks can access the Amazon EKS public API server endpoint when enabled. EKS defaults this to a list with 0.0.0.0/0.
**Default value:**
```hcl
[
"0.0.0.0/0"
]
```
`subnet_type_tag_key` (`string`) optional
The tag used to find the private subnets by availability zone. If `null`, the value will be looked up in the `vpc` component outputs.
**Default value:** `null`
`vpc_component_name` (`string`) optional
The name of the vpc component
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
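As a sketch, a descriptor that joins tenant, environment, and stage (the descriptor name `stack` and the label names are illustrative and assume they match your label order):

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}
```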
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`availability_zones`
Availability Zones in which the cluster is provisioned
`eks_addons_versions`
Map of enabled EKS Addons names and versions
`eks_auth_worker_roles`
List of worker IAM roles that were included in the `auth-map` ConfigMap.
`eks_cluster_arn`
The Amazon Resource Name (ARN) of the cluster
`eks_cluster_certificate_authority_data`
The Kubernetes cluster certificate authority data
`eks_cluster_endpoint`
The endpoint for the Kubernetes API server
`eks_cluster_id`
The name of the cluster
`eks_cluster_identity_oidc_issuer`
The OIDC Identity issuer for the cluster
`eks_cluster_managed_security_group_id`
Security Group ID that was created by EKS for the cluster. EKS creates a Security Group and applies it to ENI that is attached to EKS Control Plane master nodes and to any managed workloads
`eks_cluster_version`
The Kubernetes server version of the cluster
`eks_managed_node_workers_role_arns`
List of ARNs for workers in managed node groups
`eks_node_group_arns`
List of all the node group ARNs in the cluster
`eks_node_group_count`
Count of the worker nodes
`eks_node_group_ids`
EKS Cluster name and EKS Node Group name separated by a colon
`eks_node_group_role_names`
List of worker nodes IAM role names
`eks_node_group_statuses`
Status of the EKS Node Group
`fargate_profile_role_arns`
Fargate Profile Role ARNs
`fargate_profile_role_names`
Fargate Profile Role names
`fargate_profiles`
Fargate Profiles
`karpenter_iam_role_arn`
Karpenter IAM Role ARN
`karpenter_iam_role_name`
Karpenter IAM Role name
`vpc_cidr`
The CIDR of the VPC where this cluster is deployed.
---
## node_group_by_az
# EKS Node Group by AZ
## Variables
### Required Variables
The desired, minimum, and maximum size of the node group in this availability zone.
**Type:**
```hcl
object({
desired_size = number
min_size = number
max_size = number
})
```
### Optional Variables
`node_repair_enabled` (`bool`) optional
If set to `true`, enables node auto-repair for the node group. Defaults to `false`
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
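As an illustration of the shape this variable takes (the `stack` descriptor name and its format string are hypothetical, not defaults), a stack configuration might set:

```yaml
vars:
  descriptor_formats:
    # "stack" is an illustrative descriptor name; the output appears
    # in the `descriptors` output map under this key.
    stack:
      format: "%v-%v-%v"
      labels: ["namespace", "environment", "stage"]
```

With labels `namespace = "eg"`, `environment = "uw2"`, and `stage = "prod"`, this descriptor would render as `eg-uw2-prod`.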
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`node_group`
The EKS node group
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 5.8, < 6.0.0`
- `null`, version: `>= 3.0`
- `random`, version: `>= 2.0`
### Providers
- `aws`, version: `>= 5.8, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`az_abbreviation` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
`eks_node_group` | 3.4.0 | [`cloudposse/eks-node-group/aws`](https://registry.terraform.io/modules/cloudposse/eks-node-group/aws/3.4.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module: None
## Data Sources
The following data sources are used by this module:
- [`aws_subnets.private`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnets) (data source)
---
## node_group_by_region
# EKS Node Group by Region
## Variables
### Required Variables
The desired, minimum, and maximum number of nodes in the cluster.
**Type:**
```hcl
object({
desired_size = number
min_size = number
max_size = number
})
```
### Optional Variables
`availability_zones` (`list(string)`) optional
List of availability zones to deploy the cluster in
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`region_node_groups`
A map of availability zones to EKS node groups
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 5.8, < 6.0.0`
- `null`, version: `>= 3.0`
- `random`, version: `>= 2.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`node_group` | latest | `../node_group_by_az` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## datadog-agent
This component installs the `datadog-agent` for EKS clusters.
## Sponsorship
This project is supported by the [Datadog Open Source Program](https://www.datadoghq.com/partner/open-source/).
As part of this collaboration, Datadog provides a dedicated sandbox account that we use for automated integration and acceptance testing. This contribution allows us to continuously validate changes against a real Datadog environment, improving reliability and reducing the risk of regressions.
We are grateful to Datadog for supporting our open source ecosystem and helping ensure that infrastructure code for Terraform remains stable and well-tested.
---
## Usage
**Stack Level**: Regional
Use this in your catalog as the default values.
```yaml
components:
terraform:
datadog-agent:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
eks_component_name: eks/cluster
name: "datadog"
description: "Datadog Kubernetes Agent"
kubernetes_namespace: "monitoring"
create_namespace: true
repository: "https://helm.datadoghq.com"
chart: "datadog"
chart_version: "3.29.2"
timeout: 1200
wait: true
atomic: true
cleanup_on_fail: true
cluster_checks_enabled: false
helm_manifest_experiment_enabled: false
secrets_store_type: SSM
tags:
team: sre
service: datadog-agent
app: monitoring
# datadog-agent shouldn't be deployed to the Fargate nodes
values:
agents:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: eks.amazonaws.com/compute-type
operator: NotIn
values:
- fargate
datadog:
env:
- name: DD_EC2_PREFER_IMDSV2 # this merges ec2 instances and the node in the hostmap section
value: "true"
```
Then deploy this to a particular environment, such as dev or prod.
This adds cluster checks specific to that environment.
```yaml
components:
terraform:
datadog-agent:
vars:
# Order affects merge order. Later takes priority. We append lists though.
datadog_cluster_check_config_paths:
- catalog/cluster-checks/defaults/*.yaml
- catalog/cluster-checks/dev/*.yaml
datadog_cluster_check_config_parameters: {}
# add additional tags to all data coming in from this agent.
datadog_tags:
- "env:dev"
- "region:us-west-2"
- "stage:dev"
```
## Cluster Checks
Cluster Checks are configurations that allow us to setup external URLs to be monitored. They can be configured through
the datadog agent or annotations on kubernetes services.
Cluster Checks are similar to synthetics checks, they are not as indepth, but significantly cheaper. Use Cluster Checks
when you need a simple health check beyond the kubernetes pod health check.
Public addresses that test endpoints must use the agent configuration, whereas service addresses internal to the cluster
can be tested by annotations.
### Adding Cluster Checks
Cluster Checks can be enabled or disabled via the `cluster_checks_enabled` variable. We recommend setting this to `true`.
New Cluster Checks can be added to the defaults to be applied in every account. Alternatively, they can be placed in an
individual stage folder, to be applied only to that stage. This is controlled by the
`datadog_cluster_check_config_paths` variable, which determines the paths of the YAML files to search for cluster checks
per stage.
Once they are added and properly configured, the new checks show up in network monitor creation under `ssl` and
`http`.
**Please note:** the YAML file name itself doesn't matter, but the root key inside the file, which takes the form `something.yaml`, does matter. This
follows
[Datadog's docs](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/?tab=helm#configuration-from-static-configuration-files)
for `.yaml`.
#### Sample Yaml
:::warning
The root key inside the file must end in `.yaml`, per the Datadog docs. See
[Datadog Cluster Checks](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/?tab=helm#configuration-from-static-configuration-files)
:::
Cluster Checks defined in the agent configuration **can** test external URLs (load balancer endpoints), whereas annotations **must** be used
for Kubernetes services.
```yaml
http_check.yaml:
cluster_check: true
init_config:
instances:
- name: "[${stage}] Echo Server"
url: "https://echo.${stage}.uw2.acme.com"
- name: "[${stage}] Portal"
url: "https://portal.${stage}.uw2.acme.com"
- name: "[${stage}] ArgoCD"
url: "https://argocd.${stage}.uw2.acme.com"
```
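For services internal to the cluster, the equivalent check can be declared with Datadog Autodiscovery annotations on the Service object instead of the agent configuration. A minimal sketch (the service name, port, and selector are illustrative, not part of this component):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    # Autodiscovery annotations for a cluster check on this service;
    # %%host%% and %%port%% are Datadog template variables resolved at runtime.
    ad.datadoghq.com/service.check_names: '["http_check"]'
    ad.datadoghq.com/service.init_configs: '[{}]'
    ad.datadoghq.com/service.instances: '[{"name": "echo-server", "url": "http://%%host%%:%%port%%"}]'
spec:
  selector:
    app: echo-server
  ports:
    - port: 80
```

This requires `cluster_checks_enabled: true` so the cluster agent dispatches the check to a node-based agent.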
### Monitoring Cluster Checks
Cluster Checks can be monitored using Cloud Posse's `datadog-monitor` component. The following YAML snippet monitors all HTTP Cluster Checks;
it can be added to each stage (usually via a defaults folder).
```yaml
https-checks:
name: "(Network Check) ${stage} - HTTPS Check"
type: service check
query: |
"http.can_connect".over("stage:${stage}").by("instance").last(2).count_by_status()
message: |
HTTPS Check failed on {{instance.name}}
in Stage: {{stage.name}}
escalation_message: ""
tags:
managed-by: Terraform
notify_no_data: false
notify_audit: false
require_full_window: true
enable_logs_sample: false
force_delete: true
include_tags: true
locked: false
renotify_interval: 0
timeout_h: 0
evaluation_delay: 0
new_host_delay: 0
new_group_delay: 0
no_data_timeframe: 2
threshold_windows: {}
thresholds:
critical: 1
warning: 1
ok: 1
```
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended
`kubernetes_namespace` (`string`) required
Kubernetes namespace to install the release into
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used
**Default value:** `true`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails
**Default value:** `true`
`cluster_checks_enabled` (`bool`) optional
Enable Cluster Checks for the Datadog Agent
**Default value:** `false`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`repository` (`string`) optional
Repository URL where to locate the requested chart
**Default value:** `null`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cluster_checks`
Cluster Checks for the cluster
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
- `utils`, version: `>= 1.10.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog_agent` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`datadog_cluster_check_yaml_config` | 1.0.2 | [`cloudposse/config/yaml`](https://registry.terraform.io/modules/cloudposse/config/yaml/1.0.2) | n/a
`datadog_configuration` | v1.535.13 | [`github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys`](https://github.com/cloudposse-terraform-components/aws-datadog-credentials) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`values_merge` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## echo-server
This is copied from
[cloudposse/terraform-aws-components](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/echo-server).
This component installs the [Ealenn/Echo-Server](https://github.com/Ealenn/Echo-Server) on EKS clusters. The echo server
sends back to the client a JSON representation of all the data the server received, which is a combination of
information sent by the client and information added by the web server infrastructure. For further details, please
consult the [Echo-Server documentation](https://ealenn.github.io/Echo-Server/).
## Prerequisites
Echo server is intended to provide end-to-end testing of everything needed to deploy an application or service with a
public HTTPS endpoint. It uses defaults where possible, such as using the default IngressClass, in order to verify that
the defaults are sufficient for a typical application.
In order to minimize the impact of the echo server on the rest of the cluster, it does not set any configuration that
would affect other ingresses, such as WAF rules, logging, or redirecting HTTP to HTTPS. Those settings should be
configured in the IngressClass where possible.
Therefore, it requires several other components. At the moment, it supports 2 configurations:
1. ALB with ACM Certificate
- AWS Load Balancer Controller (ALB) version 2.2.0 or later, with ACM certificate auto-discovery enabled
- A default IngressClass, which can be provisioned by the `alb-controller` component as part of deploying the
controller, or can be provisioned separately, for example by the `alb-controller-ingress-class` component.
- Pre-provisioned ACM TLS certificate covering the provisioned host name (typically a wildcard certificate covering all
hosts in the domain)
2. Nginx with Cert Manager Certificate
- Nginx (via `kubernetes/ingress-nginx` controller). We recommend `ingress-nginx` v1.1.0 or later, but `echo-server`
should work with any version that supports Ingress API version `networking.k8s.io/v1`.
- `jetstack/cert-manager` configured to automatically (via Ingress Shim, installed by default) generate TLS certificates
via a Cluster Issuer (by default, named `letsEncrypt-prod`).
In both configurations, it has these common requirements:
- EKS component deployed, with component name specified in `eks_component_name` (defaults to "eks/cluster")
- Kubernetes version 1.19 or later
- Ingress API version `networking.k8s.io/v1`
- [kubernetes-sigs/external-dns](https://github.com/kubernetes-sigs/external-dns)
- A default IngressClass, either explicitly provisioned or supported without provisioning by the Ingress controller.
## Warnings
A Terraform plan may fail to apply, giving a Kubernetes authentication failure. This is due to a known issue with
Terraform and the Kubernetes provider. During the "plan" phase Terraform gets a short-lived Kubernetes authentication
token and caches it, and then tries to use it during "apply". If the token has expired by the time you try to run
"apply", the "apply" will fail. The workaround is to run `terraform apply -auto-approve` without a "plan" file.
## Usage
**Stack Level**: Regional
Use this component in the catalog, or use these variables to override the catalog values.
Set `ingress_type` to "alb" if using `alb-controller` or "nginx" if using `ingress-nginx`.
Normally, you should not set the IngressClass or IngressGroup, as this component is intended to test the defaults.
However, if you need to, set them in `chart_values`:
```yaml
chart_values:
ingress:
class: "other-ingress-class"
alb:
# IngressGroup is specific to alb-controller
group_name: "other-ingress-group"
```
Note that if you follow the recommendations and do not set the ingress class name, the deployed Ingress will have the
`ingressClassName` setting injected by the Ingress controller, set to the then-current default. This means that if you
later change the default IngressClass, the Ingress will NOT be updated to use the new default. Furthermore, because of
limitations in the Helm provider, this will not be detected as drift. You will need to destroy and re-deploy the echo
server to update the Ingress to the new default.
```yaml
components:
terraform:
echo-server:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: "echo-server"
kubernetes_namespace: "echo"
description: "Echo server, for testing purposes"
create_namespace: true
timeout: 180
wait: true
atomic: true
cleanup_on_fail: true
ingress_type: "alb" # or "nginx"
# %[1]v is the tenant name, %[2]v is the stage name, %[3]v is the region name
hostname_template: "echo.%[3]v.%[2]v.%[1]v.sample-domain.net"
```
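For illustration, with hypothetical values `tenant = "acme"`, `stage = "prod"`, and `environment = "use1"`, the `hostname_template` above expands as follows (Terraform's `format()` uses `%[n]v` for explicit argument indexing):

```hcl
# format(var.hostname_template, var.tenant, var.stage, var.environment)
# %[1]v = tenant, %[2]v = stage, %[3]v = environment
format("echo.%[3]v.%[2]v.%[1]v.sample-domain.net", "acme", "prod", "use1")
# => "echo.use1.prod.acme.sample-domain.net"
```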
In rare cases where some ingress controllers do not support the `ingressClassName` field, you can restore the old
`kubernetes.io/ingress.class` annotation by setting `ingress.use_ingress_class_annotation: true` in `chart_values`.
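As a sketch, that override would look like this in the stack configuration:

```yaml
chart_values:
  ingress:
    # Restore the legacy annotation for controllers that
    # do not support the ingressClassName field
    use_ingress_class_annotation: true
```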
## Variables
### Required Variables
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_values` (`any`) optional
Additional map values to `yamlencode` as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`hostname` (`string`) optional
Hostname override. When set, this takes precedence over hostname_template.
**Default value:** `""`
`hostname_template` (`string`) optional
The `format()` string used to generate the hostname via `format(var.hostname_template, var.tenant, var.stage, var.environment)`.
Typically something like `"echo.%[3]v.%[2]v.example.com"`.
**Default value:** `""`
`ingress_type` (`string`) optional
Set to 'nginx' to create an ingress resource relying on an NGINX backend for the echo-server service. Set to 'alb' to create an ingress resource relying on an AWS ALB backend for the echo-server service. Leave blank to not create any ingress for the echo-server service.
**Default value:** `null`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `null`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`hostname`
Hostname of the deployed echo server
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`echo_server` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## external-dns
This component creates a Helm deployment for [external-dns](https://github.com/kubernetes-sigs/external-dns) on a
Kubernetes cluster. [external-dns](https://github.com/kubernetes-sigs/external-dns) is a Kubernetes addon that
configures public DNS servers with information about exposed Kubernetes services to make them discoverable.
## Usage
**Stack Level**: Regional
Once the catalog is created, the file can be imported as follows.
```yaml
import:
- catalog/eks/external-dns
...
```
The default catalog values, e.g. `stacks/catalog/eks/external-dns.yaml`:
```yaml
components:
terraform:
external-dns:
vars:
enabled: true
name: external-dns
chart: external-dns
chart_repository: https://kubernetes-sigs.github.io/external-dns/
chart_version: "1.18.0"
create_namespace: true
kubernetes_namespace: external-dns
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
# Set this to a unique value to avoid conflicts with other external-dns instances managing the same zones.
# For example, when using blue-green deployment pattern to update EKS cluster.
txt_prefix: ""
# You can use `chart_values` to set any other chart options. Treat `chart_values` as the root of the doc.
# See documentation for latest chart version and list of chart_values: https://artifacthub.io/packages/helm/external-dns/external-dns
#
# # For example
# ---
# chart_values:
# provider:
# name: aws
# extraArgs:
# - --aws-batch-change-size=1000
chart_values: {}
# Extra hosted zones to lookup and support by component name
dns_components:
- component: dns-primary
- component: dns-delegated
- component: dns-delegated/abc
- component: dns-delegated/123
environment: "gbl" # Optional (default "gbl")
```
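As noted in the `txt_prefix` comment above, when two `external-dns` instances (e.g. blue/green clusters during an EKS upgrade) manage the same hosted zones, each needs a distinct prefix so their ownership TXT records do not conflict. A hypothetical sketch (the `blue-` prefix is illustrative):

```yaml
# Stack for the "blue" cluster
components:
  terraform:
    external-dns:
      vars:
        txt_prefix: "blue-"
# The "green" cluster's stack would set txt_prefix: "green-"
```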
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
`chart_repository` (`string`) required
Repository URL where to locate the requested chart.
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region.
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_values` (`any`) optional
Additional map values to `yamlencode` as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`crd_enabled` (`bool`) optional
Install and use the integrated DNSEndpoint CRD.
**Default value:** `false`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `null`
`dns_components` optional
A list of additional DNS components to search for ZoneIDs
**Type:**
```hcl
list(object({
component = string,
environment = optional(string)
}))
```
**Default value:** `[ ]`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`istio_enabled` (`bool`) optional
Add istio gateways to monitored sources.
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`metrics_enabled` (`bool`) optional
Whether or not to enable metrics in the helm chart.
**Default value:** `false`
`policy` (`string`) optional
Modify how DNS records are synchronized between sources and providers (options: sync, upsert-only)
**Default value:** `"sync"`
`rbac_enabled` (`bool`) optional
Whether to create RBAC resources, including the Service Account used by the pods.
**Default value:** `true`
`resources` optional
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
**Default value:**
```hcl
{
"limits": {
"cpu": "200m",
"memory": "256Mi"
},
"requests": {
"cpu": "100m",
"memory": "128Mi"
}
}
```
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`txt_prefix` (`string`) optional
Prefix used when creating TXT records; the record name follows the pattern `<prefix>.<CNAME record>`.
**Default value:** `"external-dns"`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`additional_dns_components` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`dns_gbl_primary` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`external_dns` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## external-secrets-operator
This component (ESO) is used to create an external `SecretStore` configured to synchronize secrets from AWS SSM
Parameter store as Kubernetes Secrets within the cluster. Per the operator pattern, the `external-secret-operator` pods
will watch for any `ExternalSecret` resources which reference the `SecretStore` to pull secrets from.
In practice, this means apps will define an `ExternalSecret` that pulls all of their environment variables into a single secret as part of a Helm chart, e.g.:
```yaml
# Part of the charts in `/releases`
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 30s
  secretStoreRef:
    name: "secret-store-parameter-store" # Must match the name of the ClusterSecretStore created by this component
    kind: ClusterSecretStore
  target:
    creationPolicy: Owner
    name: app-secrets
  dataFrom:
    - find:
        name:
          regexp: "^/app/" # Match the path prefix of your service
      rewrite:
        - regexp:
            source: "/app/(.*)" # Remove the path prefix of your service from the name before creating the env vars
            target: "$1"
```
This component assumes secrets are prefixed by "service" in Parameter Store (e.g. `/app/my_secret`), and the `SecretStore` is configured to pull secrets from a `path` prefix (defaulting to `"app"`). This works nicely alongside `chamber`, which uses the same path (called a "service" in Chamber). For example, developers should store keys like so:
```bash
assume-role acme-platform-gbl-sandbox-admin
chamber write app MY_KEY my-value
```
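A key written this way can also be referenced individually by its exact Parameter Store name, rather than with `find`/`rewrite` (a sketch using the `data`/`remoteRef` fields of the `ExternalSecret` API; the resource and secret names here are illustrative):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-single-secret # illustrative name
spec:
  refreshInterval: 30s
  secretStoreRef:
    name: "secret-store-parameter-store"
    kind: ClusterSecretStore
  target:
    creationPolicy: Owner
    name: app-single-secret
  data:
    - secretKey: MY_KEY # key in the resulting Kubernetes Secret
      remoteRef:
        key: /app/MY_KEY # the path chamber wrote to
```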
See `docs/recipes.md` for more information on managing secrets.
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to overwrite the catalog values.
```yaml
components:
  terraform:
    eks/external-secrets-operator:
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        name: "external-secrets-operator"
        helm_manifest_experiment_enabled: false
        chart: "external-secrets"
        chart_repository: "https://charts.external-secrets.io"
        chart_version: "0.8.3"
        kubernetes_namespace: "secrets"
        create_namespace: true
        timeout: 90
        wait: true
        atomic: true
        cleanup_on_fail: true
        tags:
          Team: sre
          Service: external-secrets-operator
        resources:
          limits:
            cpu: "100m"
            memory: "300Mi"
          requests:
            cpu: "20m"
            memory: "60Mi"
        parameter_store_paths:
          - app
          - rds
        # You can use `chart_values` to set any other chart options. Treat `chart_values` as the root of the doc.
        #
        # # For example
        # ---
        # chart_values:
        #   installCRDs: true
        chart_values: {}
        kms_aliases_allow_decrypt: []
        # - "alias/foo/bar"
```
## Variables
### Required Variables
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region
`resources` required
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"external-secrets"`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `"External Secrets Operator is a Kubernetes operator that integrates external secret management systems including AWS SSM, Parameter Store, HashiCorp Vault, 1Password Secrets Automation, etc. It reads values from external vaults and injects values as a Kubernetes Secret"`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://charts.external-secrets.io"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `"0.6.0-rc1"`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`kms_aliases_allow_decrypt` (`list(string)`) optional
A list of KMS aliases that the SecretStore is allowed to decrypt.
**Default value:** `[ ]`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`parameter_store_paths` (`set(string)`) optional
A list of path prefixes that the SecretStore is allowed to access via IAM. This should match the convention 'service' that Chamber uploads keys under.
**Default value:**
```hcl
[
"app"
]
```
`rbac_enabled` (`bool`) optional
Service Account for pods.
**Default value:** `true`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
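For example, a descriptor that reproduces the conventional `namespace-environment-stage` prefix might look like this (a sketch; the descriptor name `stack` is arbitrary):

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"
    labels = ["namespace", "environment", "stage"]
  }
}
```

With `namespace = "eg"`, `environment = "ue2"`, and `stage = "prod"`, the `descriptors` output would contain `stack = "eg-ue2-prod"`.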
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
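How `label_order`, `delimiter`, and `label_value_case` combine can be sketched outside Terraform (an illustrative approximation of how null-label joins ID elements, not the module's actual code):

```python
def build_id(labels, delimiter="-", value_case=str.lower):
    """Join non-empty ID elements after normalizing their case."""
    return delimiter.join(value_case(label) for label in labels if label)

# Default behavior: lowercase elements joined with hyphens
print(build_id(["eg", "ue2", "prod", "app"]))  # -> eg-ue2-prod-app

# label_value_case = "title" with delimiter = "" yields Pascal Case IDs
print(build_id(["eg", "ue2", "prod", "app"], delimiter="", value_case=str.title))  # -> EgUe2ProdApp
```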
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`external_secrets_operator` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | CRDs are automatically installed by "cloudposse/helm-release/aws" https://external-secrets.io/v0.5.9/guides-getting-started/
`external_ssm_secrets` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_namespace.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_kms_alias.kms_aliases`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/kms_alias) (data source)
- [`kubernetes_resources.crd`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/resources) (data source)
---
## github-actions-runner
This component deploys self-hosted GitHub Actions Runners and a
[Controller](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller#introduction)
on an EKS cluster, using
"[runner scale sets](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#runner-scale-set)".
This solution is supported by GitHub and supersedes the
[actions-runner-controller](https://github.com/actions/actions-runner-controller/blob/master/docs/about-arc.md)
developed by Summerwind and deployed by Cloud Posse's
[actions-runner-controller](https://docs.cloudposse.com/components/library/aws/eks/actions-runner-controller/)
component.
### Current limitations
The runner image used by Runner Sets contains
[no more packages than are necessary](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#about-the-runner-container-image)
to run the runner. This is in contrast to the Summerwind implementation, which contains some commonly needed packages
like `build-essential`, `curl`, `wget`, `git`, and `jq`, and the GitHub hosted images which contain a robust set of
tools. (This is a limitation of the official Runner Sets implementation, not this component per se.) You will need to
install any tools you need in your workflows, either as part of your workflow (recommended), by maintaining a
[custom runner image](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#creating-your-own-runner-image),
or by running such steps in a
[separate container](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) that has the tools
pre-installed. Many tools have publicly available actions to install them, such as `actions/setup-node` to install
NodeJS or `dcarbone/install-jq-action` to install `jq`. You can also install packages using
`awalsh128/cache-apt-pkgs-action`, which has the advantage of being able to skip the installation if the package is
already installed, so you can more efficiently run the same workflow on GitHub hosted as well as self-hosted runners.
:::info
There are (as of this writing) open feature requests to add some commonly needed packages to the official Runner Sets
runner image. You can upvote these requests
[here](https://github.com/actions/actions-runner-controller/discussions/3168) and
[here](https://github.com/orgs/community/discussions/80868) to help get them implemented.
:::
In the current version of this component, only "dind" (Docker in Docker) mode has been tested. Support for "kubernetes"
mode is provided, but has not been validated.
Many elements in the Controller chart are not directly configurable by named inputs. To configure them, you can use the
`controller.chart_values` input or create a `resources/values-controller.yaml` file in the component to supply values.
Almost all the features of the Runner Scale Set chart are configurable by named inputs. The exceptions are:
- There is no specific input for specifying an outbound HTTP proxy.
- There is no specific input for supplying a
[custom certificate authority (CA) certificate](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#custom-tls-certificates)
to use when connecting to GitHub Enterprise Server.
You can specify these values by creating a `resources/values-runner.yaml` file in the component and setting values as
shown by the default Helm
[values.yaml](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml),
and they will be applied to all runners.
Currently, this component has some additional limitations. In particular:
- The controller and all runners and listeners share the Image Pull Secrets. You cannot use different ones for different
runners.
- All the runners use the same GitHub secret (app or PAT). Using a GitHub app is preferred anyway, and the single GitHub
app serves the entire organization.
- Only one controller is supported per cluster, though it can have multiple replicas.
These limitations could be addressed if there is demand. Contact
[Cloud Posse Professional Services](https://cloudposse.com/professional-services/) if you would be interested in
sponsoring the development of any of these features.
### Ephemeral work storage
The runners are configured to use ephemeral storage for workspaces, but the details and defaults can be a bit confusing.
When running in "dind" ("Docker in Docker") mode, the default is to use `emptyDir`, which means space on the `kubelet`
base directory, which is usually the root disk. You can manage the amount of storage allowed to be used with
`ephemeral_storage` requests and limits, or you can just let it use whatever free space there is on the root disk.
When running in `kubernetes` mode, the only supported local disk storage is an ephemeral `PersistentVolumeClaim`, which
causes a separate disk to be allocated for the runner pod. This disk is ephemeral, and will be deleted when the runner
pod is deleted. When combined with the recommended ephemeral runner configuration, this means that a new disk will be
created for each job, and deleted when the job is complete. That is a lot of overhead and will slow things down
somewhat.
The size of the attached PersistentVolume is controlled by `ephemeral_pvc_storage` (a Kubernetes size string like "1G")
and the kind of storage is controlled by `ephemeral_pvc_storage_class` (which can be omitted to use the cluster default
storage class).
This mode is also optionally available when using `dind`. To enable it, set `ephemeral_pvc_storage` to the desired size.
Leave `ephemeral_pvc_storage` at the default value of `null` to use `emptyDir` storage (recommended).
Beware that using a PVC may significantly increase the startup time of the runner. If you are using a PVC, you may want to keep idle runners available so that jobs can be started without waiting for a new runner to start.
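As a sketch, enabling PVC-backed storage for a single runner set might look like this (the placement under the `runners` map mirrors the catalog example below; the size and storage class values are illustrative):

```yaml
runners:
  self-hosted-default:
    # Allocate a dedicated ephemeral PersistentVolumeClaim per runner pod
    # instead of the default emptyDir (isolated disk, but slower startup).
    ephemeral_pvc_storage: "10G"
    ephemeral_pvc_storage_class: "gp3" # omit to use the cluster default storage class
```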
## Usage
**Stack Level**: Regional
Once the catalog file is created, the file can be imported as follows.
```yaml
import:
- catalog/eks/github-actions-runner
...
```
The default catalog values, e.g. `stacks/catalog/eks/github-actions-runner.yaml`:
```yaml
components:
  terraform:
    eks/github-actions-runner:
      vars:
        enabled: true
        ssm_region: "us-east-2"
        name: "gha-runner-controller"
        charts:
          controller:
            chart_version: "0.7.0"
          runner_sets:
            chart_version: "0.7.0"
        controller:
          kubernetes_namespace: "gha-runner-controller"
          create_namespace: true
          create_github_kubernetes_secret: true
          ssm_github_secret_path: "/github-action-runners/github-auth-secret"
          github_app_id: "123456"
          github_app_installation_id: "12345678"
        runners:
          config-default: &runner-default
            enabled: false
            github_url: https://github.com/cloudposse
            # group: "default"
            # kubernetes_namespace: "gha-runner-private"
            create_namespace: true
            # If min_replicas > 0 and you also have do-not-evict: "true" set,
            # then the idle/waiting runner will keep Karpenter from deprovisioning the node
            # until a job runs and the runner is deleted.
            # Override by setting `pod_annotations: {}`.
            pod_annotations:
              karpenter.sh/do-not-evict: "true"
            min_replicas: 0
            max_replicas: 8
            resources:
              limits:
                cpu: 1100m
                memory: 1024Mi
                ephemeral-storage: 5Gi
              requests:
                cpu: 500m
                memory: 256Mi
                ephemeral-storage: 1Gi
          self-hosted-default:
            <<: *runner-default
            enabled: true
            kubernetes_namespace: "gha-runner-private"
            # If min_replicas > 0 and you also have do-not-evict: "true" set,
            # then the idle/waiting runner will keep Karpenter from deprovisioning the node
            # until a job runs and the runner is deleted. So we override the default.
            pod_annotations: {}
            min_replicas: 1
            max_replicas: 12
            resources:
              limits:
                cpu: 1100m
                memory: 1024Mi
                ephemeral-storage: 5Gi
              requests:
                cpu: 500m
                memory: 256Mi
                ephemeral-storage: 1Gi
          self-hosted-large:
            <<: *runner-default
            enabled: true
            resources:
              limits:
                cpu: 6000m
                memory: 7680Mi
                ephemeral-storage: 90G
              requests:
                cpu: 4000m
                memory: 7680Mi
                ephemeral-storage: 40G
```
### Authentication and Secrets
The GitHub Action Runners need to authenticate to GitHub in order to do such things as register runners and pick up jobs.
You can authenticate using either a
[GitHub App](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api#authenticating-arc-with-a-github-app)
or a
[Personal Access Token (classic)](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api#authenticating-arc-with-a-personal-access-token-classic).
The preferred way to authenticate is by _creating_ and _installing_ a GitHub App. This is the recommended approach as it
allows for much more restricted access than using a Personal Access Token (classic), and the Action Runners do not
currently support using a fine-grained Personal Access Token.
#### Side note about SSM and Regions
This component supports using AWS SSM to store and retrieve secrets. SSM parameters are regional, so if you want to
deploy to multiple regions you have 2 choices:
1. Create the secrets in each region. This is the most robust approach, but requires you to create the secrets in each
region and keep them in sync.
2. Create the secrets in one region and use the `ssm_region` input to specify the region where they are stored. This is
the easiest approach, but does add some obstacles to managing deployments during a region outage. If the region where
the secrets are stored goes down, there will be no impact on runners in other regions, but you will not be able to
deploy new runners or modify existing runners until the SSM region is restored or until you set up SSM parameters in
a new region.
Alternatively, you can create Kubernetes secrets outside of this component (perhaps using
[SOPS](https://github.com/getsops/sops)) and reference them by name. We describe here how to save the secrets to SSM,
but you can save the secrets wherever and however you want to, as long as you deploy them as a Kubernetes secret the
runners can reference. If you store them in SSM, this component will take care of the rest, but the standard Terraform
caveat applies: any secrets referenced by Terraform will be stored unencrypted in the Terraform state file.
#### Creating and Using a GitHub App
Follow the instructions
[here](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api#authenticating-arc-with-a-github-app)
to create and install a GitHub App for the runners to use for authentication.
At the App creation stage, you will be asked to generate a private key. This is the private key that will be used to
authenticate the Action Runner. Download the file and store the contents in SSM using the following command, adjusting
the profile, region, and file name. The profile should be the `terraform` role in the account to which you are deploying
the runner controller. The region should be the region where you are deploying the primary runner controller. If you are
deploying runners to multiple regions, they can all reference the same SSM parameter by using the `ssm_region` input to
specify the region where they are stored. The file name (argument to `cat`) should be the name of the private key file
you downloaded.
```bash
# Adjust profile name and region to suit your environment, use file name you chose for key
AWS_PROFILE=acme-core-gbl-auto-terraform AWS_REGION=us-west-2 chamber write github-action-runners github-auth-secret -- "$(cat APP_NAME.DATE.private-key.pem)"
```
You can verify the file was correctly written to SSM by matching the private key fingerprint reported by GitHub with:
```bash
AWS_PROFILE=acme-core-gbl-auto-terraform AWS_REGION=us-west-2 chamber read -q github-action-runners github-auth-secret | openssl rsa -in - -pubout -outform DER | openssl sha256 -binary | openssl base64
```
At this stage, record the Application ID and the private key fingerprint in your secrets manager (e.g. 1Password). You
may want to record the private key as well, or you may consider it sufficient to have it in SSM. You will need the
Application ID to configure the runner controller, and want the fingerprint to verify the private key. (You can see the
fingerprint in the GitHub App settings, under "Private keys".)
Proceed to install the GitHub App in the organization or repository you want to use the runner controller for, and
record the Installation ID (the final numeric part of the URL, as explained in the instructions linked above) in your
secrets manager. You will need the Installation ID to configure the runner controller.
In your stack configuration, set the following variables, making sure to quote the values so they are treated as
strings, not numbers.
```yaml
github_app_id: "12345"
github_app_installation_id: "12345"
```
#### OR (obsolete): Creating and Using a Personal Access Token (classic)
Though not recommended, you can use a Personal Access Token (classic) to authenticate the runners. To do so, create a
PAT (classic) as described in the
[GitHub Documentation](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api#authenticating-arc-with-a-personal-access-token-classic).
Save this to the value specified by `ssm_github_token_path` using the following command, adjusting the AWS profile and
region as explained above:
```bash
AWS_PROFILE=acme-core-gbl-auto-terraform AWS_REGION=us-west-2 chamber write github-action-runners github-auth-secret -- "<PAT>"
```
### Using Runner Groups
GitHub supports grouping runners into distinct
[Runner Groups](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups),
which allow you to have different access controls for different runners. Read the linked documentation about creating
and configuring Runner Groups, which you must do through the GitHub Web UI. If you choose to create Runner Groups, you
can assign one or more Runner Sets (from the `runners` map) to groups (only one group per runner set, but multiple sets
can be in the same group) by including `group: <group name>` in the runner configuration. We recommend including
it immediately after `github_url`.
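For example, a sketch of a runner-set entry assigned to a group (the runner-set name, group name, and `github_url` here are illustrative placeholders):

```yaml
runners:
  org-runner:
    github_url: "https://github.com/myorg"
    group: "core-automation" # must match a Runner Group created in the GitHub Web UI
    min_replicas: 1
    max_replicas: 5
```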
### Interaction with Karpenter or other EKS autoscaling solutions
Kubernetes cluster autoscaling solutions generally expect that a Pod runs a service that can be terminated on one Node
and restarted on another with only a short duration needed to finish processing any in-flight requests. When the cluster
is resized, the cluster autoscaler will do just that. However, GitHub Action Runner Jobs do not fit this model. If a Pod
is terminated in the middle of a job, the job is lost. The likelihood of this happening is increased by the fact that
the Action Runner Controller Autoscaler is expanding and contracting the size of the Runner Pool on a regular basis,
causing the cluster autoscaler to more frequently want to scale up or scale down the EKS cluster, and, consequently, to
move Pods around.
To handle these kinds of situations, Karpenter respects an annotation on the Pod:
```yaml
spec:
  template:
    metadata:
      annotations:
        karpenter.sh/do-not-evict: "true"
```
When you set this annotation on the Pod, Karpenter will not voluntarily evict it. This means that the Pod will stay on
the Node it is on, and the Node it is on will not be considered for deprovisioning (scale down). This is good because it
means that the Pod will not be terminated in the middle of a job. However, it also means that the Node the Pod is on
will remain running until the Pod is terminated, even if the node is underutilized and Karpenter would like to get rid
of it.
Since the Runner Pods terminate at the end of the job, this is not a problem for the Pods actually running jobs.
However, if you have set `minReplicas > 0`, then you have some Pods that are just idling, waiting for jobs to be
assigned to them. These Pods are exactly the kind of Pods you want terminated and moved when the cluster is
underutilized. Therefore, when you set `minReplicas > 0`, you should **NOT** set `karpenter.sh/do-not-evict: "true"` on
the Pod.
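To put this into practice, the annotation can be passed through the `pod_annotations` input of the `runners` map. A sketch, assuming a runner set that keeps `min_replicas` at zero (the runner-set name and `github_url` are illustrative):

```yaml
runners:
  org-runner:
    github_url: "https://github.com/myorg"
    min_replicas: 0 # no idle runners, so protecting Pods from eviction is safe
    max_replicas: 5
    pod_annotations:
      karpenter.sh/do-not-evict: "true" # Karpenter will not voluntarily evict these Pods
```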
### Updating CRDs
When updating the chart or application version of `gha-runner-scale-set-controller`, it is possible you will need to
install new CRDs. Such a requirement should be indicated in the `gha-runner-scale-set-controller` release notes and may
require some adjustment to this component.
This component uses `helm` to manage the deployment, and `helm` will not auto-update CRDs. If new CRDs are needed,
follow the instructions in the release notes for the Helm chart or `gha-runner-scale-set-controller` itself.
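One possible approach, sketched below under the assumption that the release notes do not require anything more involved: render the CRDs bundled with the target chart version and apply them out-of-band, since `helm upgrade` leaves existing CRDs untouched. This requires Helm >= 3.8 for OCI registry support, and `NEW_CHART_VERSION` is a placeholder you must set.

```shell
# Sketch: apply CRDs from the new chart version directly with kubectl.
# NEW_CHART_VERSION is a placeholder; check the release notes before doing this.
helm show crds oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
  --version "${NEW_CHART_VERSION}" | kubectl apply --server-side -f -
```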
### Useful Reference
- Runner Scale Set Controller's Helm chart
[values.yaml](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set-controller/values.yaml)
- Runner Scale Set's Helm chart
[values.yaml](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml)
- Runner Scale Set's
[Docker image](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#about-the-runner-container-image)
and
[how to create your own](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#creating-your-own-runner-image)
When reviewing documentation, code, issues, etc. for self-hosted GitHub action runners or the Actions Runner Controller
(ARC), keep in mind that there are two implementations going by that name. The original implementation, which is now
deprecated, uses the `actions.summerwind.dev` API group, and is at times called the Summerwind or Legacy implementation.
It is primarily described by documentation in the
[actions/actions-runner-controller](https://github.com/actions/actions-runner-controller) GitHub repository itself.
The new implementation, which is the one this component uses, uses the `actions.github.com` API group, and is at times
called the GitHub implementation or "Runner Scale Sets" implementation. The new implementation is described in the
official
[GitHub documentation](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller).
Feature requests about the new implementation are officially directed to the
[Actions category of GitHub community discussion](https://github.com/orgs/community/discussions/categories/actions).
However, Q&A and community support is directed to the `actions/actions-runner-controller` repo's
[Discussion section](https://github.com/actions/actions-runner-controller/discussions), though beware that discussions
about the old implementation are mixed in with discussions about the new implementation.
Bug reports for the new implementation are still filed under the `actions/actions-runner-controller` repo's
[Issues](https://github.com/actions/actions-runner-controller/issues) tab, though again, these are mixed in with bug
reports for the old implementation. Look for the `gha-runner-scale-set` label to find issues specific to the new
implementation.
## Variables
### Required Variables
`charts` required
Map of Helm charts to install. Keys are "controller" and "runner_sets".
**Type:**
```hcl
map(object({
  chart_version     = string
  chart             = optional(string, null) # defaults according to the key to "gha-runner-scale-set-controller" or "gha-runner-scale-set"
  chart_description = optional(string, null) # visible in Helm history
  chart_repository  = optional(string, "oci://ghcr.io/actions/actions-runner-controller-charts")
  wait              = optional(bool, true)
  atomic            = optional(bool, true)
  cleanup_on_fail   = optional(bool, true)
  timeout           = optional(number, null)
}))
```
### Optional Variables
`create_github_kubernetes_secret` (`bool`) optional
If `true`, this component will create the Kubernetes Secret that will be used to get
the GitHub App private key or GitHub PAT token, based on the value retrieved
from SSM at the `var.ssm_github_secret_path`. WARNING: This will cause
the secret to be stored in plaintext in the Terraform state.
If `false`, this component will not create a secret and you must create it
(with the name given by `var.github_kubernetes_secret_name`) in every
namespace where you are deploying runners (the controller does not need it).
**Default value:** `true`
`create_image_pull_kubernetes_secret` (`bool`) optional
If `true` and `image_pull_secret_enabled` is `true`, this component will create the Kubernetes image pull secret resource,
using the value in SSM at the path specified by `ssm_image_pull_secret_path`.
WARNING: This will cause the secret to be stored in plaintext in the Terraform state.
If `false`, this component will not create a secret and you must create it
(with the name given by `var.image_pull_kubernetes_secret_name`) in every
namespace where you are deploying controllers or runners.
**Default value:** `true`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`github_app_id` (`string`) optional
The ID of the GitHub App to use for the runner controller. Leave empty if using a GitHub PAT.
**Default value:** `null`
`github_app_installation_id` (`string`) optional
The "Installation ID" of the GitHub App to use for the runner controller. Leave empty if using a GitHub PAT.
**Default value:** `null`
`image_pull_kubernetes_secret_name` (`string`) optional
Name of the Kubernetes Secret that will be used as the imagePullSecret.
**Default value:** `"gha-image-pull-secret"`
`image_pull_secret_enabled` (`bool`) optional
Whether to configure the controller and runners with an image pull secret.
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
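For illustration, a hypothetical format string (the cluster name shown is invented):

```hcl
kubeconfig_context_format = "eks-%s" # a cluster named "acme-ue2-auto" yields the context "eks-acme-ue2-auto"
```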
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`runners` optional
Map of Runner Scale Set configurations, with the key being the name of the runner set.
Please note that the name must be in kebab-case (no underscores).
For example:
```hcl
organization-runner = {
  # Specify the scope (organization or repository) and the target
  # of the runner via the `github_url` input.
  # ex: https://github.com/myorg/myrepo or https://github.com/myorg
  github_url   = "https://github.com/myorg"
  group        = "core-automation" # Optional. Assigns the runners to a runner group, for access control.
  min_replicas = 1
  max_replicas = 5
}
```
**Type:**
```hcl
map(object({
  # We allow a runner to be disabled because Atmos cannot delete an inherited map object
  enabled              = optional(bool, true)
  github_url           = string
  group                = optional(string, null)
  kubernetes_namespace = optional(string, null) # defaults to the controller's namespace
  create_namespace     = optional(bool, true)
  image                = optional(string, "ghcr.io/actions/actions-runner:latest") # repo and tag
  mode                 = optional(string, "dind") # Optional. Can be "dind" or "kubernetes".
  pod_labels           = optional(map(string), {})
  pod_annotations      = optional(map(string), {})
  affinity             = optional(map(string), {})
  node_selector        = optional(map(string), {})
  tolerations = optional(list(object({
    key      = string
    operator = string
    value    = optional(string, null)
    effect   = string
    # tolerationSeconds is not supported, because Terraform requires all objects in a list to have the same keys,
    # but tolerationSeconds must be omitted to get the default behavior of "tolerate forever".
    # If really needed, could use a default value of 1,000,000,000 (one billion seconds = about 32 years).
  })), [])
  min_replicas = number
  max_replicas = number
  # ephemeral_pvc_storage and _class are ignored for "dind" mode but required for "kubernetes" mode
  ephemeral_pvc_storage       = optional(string, null) # ex: 10Gi
  ephemeral_pvc_storage_class = optional(string, null)
  kubernetes_mode_service_account_annotations = optional(map(string), {})
  resources = optional(object({
    limits = optional(object({
      cpu               = optional(string, null)
      memory            = optional(string, null)
      ephemeral-storage = optional(string, null)
    }), null)
    requests = optional(object({
      cpu               = optional(string, null)
      memory            = optional(string, null)
      ephemeral-storage = optional(string, null)
    }), null)
  }), null)
}))
```
**Default value:** `{ }`
`ssm_github_secret_path` (`string`) optional
The path in SSM to the GitHub app private key file contents or GitHub PAT token.
**Default value:** `"/github-action-runners/github-auth-secret"`
`ssm_image_pull_secret_path` (`string`) optional
SSM path to the base64 encoded `dockercfg` image pull secret.
**Default value:** `"/github-action-runners/image-pull-secrets"`
`ssm_region` (`string`) optional
AWS Region where SSM secrets are stored. Defaults to `var.region`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
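As an illustrative (hypothetical) example, a descriptor that joins the `tenant` and `stage` labels could be declared as:

```hcl
descriptor_formats = {
  # produces a descriptor like "acme-prod" in the `descriptors` output
  account_name = {
    format = "%v-%v"
    labels = ["tenant", "stage"]
  }
}
```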
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
`runners`
Human-readable summary of the deployed runners
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`gha_runner_controller` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`gha_runners` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_namespace.controller`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
- [`kubernetes_namespace.runner`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
- [`kubernetes_secret_v1.controller_image_pull_secret`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret_v1) (resource)
- [`kubernetes_secret_v1.controller_ns_github_secret`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret_v1) (resource)
- [`kubernetes_secret_v1.github_secret`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret_v1) (resource)
- [`kubernetes_secret_v1.image_pull_secret`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret_v1) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_ssm_parameter.github_token`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.image_pull_secret`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## idp-roles
This component installs the `idp-roles` for EKS clusters. These identity provider roles specify several pre-determined
permission levels for cluster users and come with bindings that make them easy to assign to Users and Groups.
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to overwrite the catalog values.
```yaml
components:
terraform:
eks/idp-roles:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: "idp-roles"
kubeconfig_exec_auth_api_version: "client.authentication.k8s.io/v1beta1"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `"Identity provider roles and role bindings"`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `null`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
Kubernetes namespace to install the release into
**Default value:** `"kube-system"`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `300`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`idp_roles` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## karpenter-controller
This component provisions [Karpenter](https://karpenter.sh) on an EKS cluster.
It requires at least version 0.32.0 of Karpenter, though using the latest
version is recommended.
## Usage
**Stack Level**: Regional
These instructions assume you are provisioning 2 EKS clusters in the same account and region, named "blue" and "green",
and alternating between them. If you are only using a single cluster, you can ignore the "blue" and "green" references
and remove the `metadata` block from the `karpenter` module.
```yaml
components:
terraform:
# Base component of all `karpenter` components
eks/karpenter:
metadata:
type: abstract
vars:
enabled: true
eks_component_name: "eks/cluster"
name: "karpenter"
# https://github.com/aws/karpenter/tree/main/charts/karpenter
chart_repository: "oci://public.ecr.aws/karpenter"
chart: "karpenter"
chart_version: "1.6.0"
# Enable Karpenter to get advance notice of spot instances being terminated
# See https://karpenter.sh/docs/concepts/#interruption
interruption_handler_enabled: true
resources:
limits:
cpu: "300m"
memory: "1Gi"
requests:
cpu: "100m"
memory: "512Mi"
cleanup_on_fail: true
atomic: true
wait: true
# "karpenter-crd" can be installed as an independent helm chart to manage the lifecycle of Karpenter CRDs
crd_chart_enabled: true
crd_chart: "karpenter-crd"
# replicas set the number of Karpenter controller replicas to run
replicas: 2
# "settings" controls a subset of the settings for the Karpenter controller regarding batch idle and max duration.
# you can read more about these settings here: https://karpenter.sh/docs/reference/settings/
settings:
batch_idle_duration: "1s"
batch_max_duration: "10s"
# (Optional) "settings" which do not have an explicit mapping and may be subject to change between helm chart versions
additional_settings:
featureGates:
nodeRepair: false
reservedCapacity: true
spotToSpotConsolidation: true
# The logging settings for the Karpenter controller
logging:
enabled: true
level:
controller: "info"
global: "info"
webhook: "error"
```
## Provision Karpenter on EKS cluster
Here we describe how to provision Karpenter on an EKS cluster. We will be using the `plat-ue2-dev` stack as an example.
### Provision Service-Linked Roles for EC2 Spot and EC2 Spot Fleet
Note: If you want to use EC2 Spot for the instances launched by Karpenter, you may need to provision the
Service-Linked Role for EC2 Spot. This is only necessary if this is the first time you're using EC2 Spot in the
account. Since this is a one-time operation, we recommend you do this manually via the AWS CLI:
```bash
aws --profile <namespace>-gbl-<stage>-admin iam create-service-linked-role --aws-service-name spot.amazonaws.com
```
Note that if the Service-Linked Role already exists in the AWS account (because you used EC2 Spot or Spot Fleet
before) and you try to provision it again, you will see the following error:
```text
An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation:
Service role name AWSServiceRoleForEC2Spot has been taken in this account, please try a different suffix
```
For more details, see:
- https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html
- https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html
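If you are unsure whether the role already exists, you can check for it first and create it only when missing (a sketch; requires an appropriately privileged AWS profile):

```bash
# Check whether the EC2 Spot service-linked role already exists;
# create it only if the lookup fails. Safe to re-run.
aws iam get-role --role-name AWSServiceRoleForEC2Spot >/dev/null 2>&1 \
  || aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
```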
The process of provisioning Karpenter on an EKS cluster consists of 3 steps.
### 1. Provision EKS IAM Role for Nodes Launched by Karpenter
:::note
#### VPC assumptions being made
We assume you've already created a VPC using our [VPC component](https://github.com/cloudposse-terraform-components/aws-eks-karpenter-controller/tree/main/modules/vpc) and have private subnets already set
up. The Karpenter node pools will be launched in the private subnets.
:::
The EKS IAM Role for Nodes launched by Karpenter is provisioned by the `eks/cluster` component. (EKS can also provision a
Fargate Profile for Karpenter, but deploying Karpenter to Fargate is not recommended.)
```yaml
components:
terraform:
eks/cluster-blue:
metadata:
component: eks/cluster
inherits:
- eks/cluster
vars:
karpenter_iam_role_enabled: true
```
:::note
The AWS Auth API for EKS is used to authorize the Karpenter controller to interact with the EKS cluster.
:::
Karpenter is installed using a Helm chart. The Helm chart installs the Karpenter controller and a webhook pod as a
Deployment that needs to run before the controller can be used for scaling your cluster. We recommend a minimum of one
small node group with at least one worker node.
As an alternative, you can run these pods on EKS Fargate by creating a Fargate profile for the karpenter namespace.
Doing so will cause all pods deployed into this namespace to run on EKS Fargate. Do not run Karpenter on a node that is
managed by Karpenter.
See
[Run Karpenter Controller...](https://aws.github.io/aws-eks-best-practices/karpenter/#run-the-karpenter-controller-on-eks-fargate-or-on-a-worker-node-that-belongs-to-a-node-group)
for more details.
We provision IAM Role for Nodes launched by Karpenter because they must run with an Instance Profile that grants
permissions necessary to run containers and configure networking.
We define the IAM role for the Instance Profile in `components/terraform/eks/cluster/controller-policy.tf`.
Note that we provision the EC2 Instance Profile for the Karpenter IAM role in the `components/terraform/eks/karpenter`
component (see the next step).
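As a rough sketch of what that wiring looks like (the names here are illustrative; the real definitions live in the components referenced above, and the exact managed policy set may vary by version):

```hcl
# Illustrative sketch only: the actual role is defined in eks/cluster and the
# instance profile in eks/karpenter. Resource names are hypothetical.
resource "aws_iam_role" "karpenter_node" {
  name = "example-karpenter-node"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Managed policies typically attached to Karpenter-launched nodes
# (per the Karpenter getting-started documentation)
resource "aws_iam_role_policy_attachment" "node" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
  ])
  role       = aws_iam_role.karpenter_node.name
  policy_arn = each.value
}

# The Instance Profile that Karpenter assigns to the EC2 instances it launches
resource "aws_iam_instance_profile" "karpenter" {
  name = "example-karpenter-node"
  role = aws_iam_role.karpenter_node.name
}
```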
Run the following commands to provision the EKS Instance Profile for Karpenter and the IAM role for instances launched
by Karpenter on the blue EKS cluster and add the role ARNs to the EKS Auth API:
```bash
atmos terraform plan eks/cluster-blue -s plat-ue2-dev
atmos terraform apply eks/cluster-blue -s plat-ue2-dev
```
For more details, refer to:
- [Getting started with Terraform](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/)
- [Getting started with `eksctl`](https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/)
### 2. Provision `karpenter` component
In this step, we provision the `components/terraform/eks/karpenter` component, which deploys the following resources:
- Karpenter CustomResourceDefinitions (CRDs) using the Karpenter CRD Chart and the `helm_release` Terraform resource
- Karpenter Kubernetes controller using the Karpenter Helm Chart and the `helm_release` Terraform resource
- EKS IAM role for Kubernetes Service Account for the Karpenter controller (with all the required permissions)
- An SQS Queue and Event Bridge rules for handling Node Interruption events (i.e. Spot)
Create a stack config for the blue Karpenter component in `stacks/catalog/eks/clusters/blue.yaml`:
```yaml
eks/karpenter-blue:
metadata:
component: eks/karpenter
inherits:
- eks/karpenter
vars:
eks_component_name: eks/cluster-blue
```
Run the following commands to provision the Karpenter component on the blue EKS cluster:
```bash
atmos terraform plan eks/karpenter-blue -s plat-ue2-dev
atmos terraform apply eks/karpenter-blue -s plat-ue2-dev
```
### 3. Provision `karpenter-node-pool` component
In this step, we provision the `components/terraform/eks/karpenter-node-pool` component, which deploys Karpenter
[NodePools](https://karpenter.sh/v0.36/getting-started/getting-started-with-karpenter/#5-create-nodepool) using the
`kubernetes_manifest` resource.
:::tip
#### Why use a separate component for NodePools?
We create the NodePools as a separate component since the CRDs for the NodePools are created by the Karpenter
component. This helps manage dependencies.
:::
First, create an abstract component for the `eks/karpenter-node-pool` component:
```yaml
components:
terraform:
eks/karpenter-node-pool:
metadata:
type: abstract
vars:
enabled: true
# Disabling Manifest Experiment disables stored metadata with Terraform state
# Otherwise, the state will show changes on all plans
helm_manifest_experiment_enabled: false
node_pools:
default:
# Whether to place EC2 instances launched by Karpenter into VPC private subnets. Set it to `false` to use public subnets
private_subnets_enabled: true
# You can use disruption to set the maximum instance lifetime for the EC2 instances launched by Karpenter.
# You can also configure how fast or slow Karpenter should add/remove nodes.
# See more: https://karpenter.sh/v0.36/concepts/disruption/
disruption:
max_instance_lifetime: "336h" # 14 days
# Taints can be used to prevent pods without the right tolerations from running on this node pool.
# See more: https://karpenter.sh/v0.36/concepts/nodepools/#taints
taints: []
total_cpu_limit: "1k"
# Karpenter node pool total memory limit for all pods running on the EC2 instances launched by Karpenter
total_memory_limit: "1200Gi"
# Set acceptable (In) and unacceptable (Out) Kubernetes and Karpenter values for node provisioning based on
# Well-Known Labels and cloud-specific settings. These can include instance types, zones, computer architecture,
# and capacity type (such as AWS spot or on-demand).
# See https://karpenter.sh/v0.36/concepts/nodepools/#spectemplatespecrequirements for more details
requirements:
- key: "karpenter.sh/capacity-type"
operator: "In"
# See https://karpenter.sh/docs/concepts/nodepools/#capacity-type
# Allow fallback to on-demand instances when spot instances are unavailable
# By default, Karpenter uses the "price-capacity-optimized" allocation strategy
# https://aws.amazon.com/blogs/compute/introducing-price-capacity-optimized-allocation-strategy-for-ec2-spot-instances/
# It is currently not configurable, but that may change in the future.
# See https://github.com/aws/karpenter-provider-aws/issues/1240
values:
- "on-demand"
- "spot"
- key: "kubernetes.io/os"
operator: "In"
values:
- "linux"
- key: "kubernetes.io/arch"
operator: "In"
values:
- "amd64"
# The following two requirements pick instances such as c3 or m5
- key: karpenter.k8s.aws/instance-category
operator: In
values: ["c", "m", "r"]
- key: karpenter.k8s.aws/instance-generation
operator: Gt
values: ["2"]
```
Now, create the stack config for the blue Karpenter NodePool component in `stacks/catalog/eks/clusters/blue.yaml`:
```yaml
eks/karpenter-node-pool/blue:
metadata:
component: eks/karpenter-node-pool
inherits:
- eks/karpenter-node-pool
vars:
eks_component_name: eks/cluster-blue
```
Finally, run the following commands to deploy the Karpenter NodePools on the blue EKS cluster:
```bash
atmos terraform plan eks/karpenter-node-pool/blue -s plat-ue2-dev
atmos terraform apply eks/karpenter-node-pool/blue -s plat-ue2-dev
```
## Node Interruption
Karpenter also supports listening for and responding to Node Interruption events. If interruption handling is enabled,
Karpenter will watch for upcoming involuntary interruption events that would cause disruption to your workloads. These
interruption events include:
- Spot Interruption Warnings
- Scheduled Change Health Events (Maintenance Events)
- Instance Terminating Events
- Instance Stopping Events
:::tip
#### Interruption Handler vs. Termination Handler
The Node Interruption Handler is not the same as the Node Termination Handler. The latter is always enabled and
cleanly shuts down the node in 2 minutes in response to a Node Termination event. The former gets advance notice that
a node will soon be terminated, so it can have 5-10 minutes to shut down a node.
:::
For more details, refer to the [Karpenter docs](https://karpenter.sh/v0.32/concepts/disruption/#interruption) and
[FAQ](https://karpenter.sh/v0.32/faq/#interruption-handling).
To enable Node Interruption handling, set `var.interruption_handler_enabled` to `true`. This will create an SQS queue
and a set of Event Bridge rules to deliver interruption events to Karpenter.
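A simplified sketch of that wiring is shown below (resource names are illustrative, and the component creates one rule per interruption event type; only the Spot warning rule is shown):

```hcl
# SQS queue that Karpenter polls for interruption notices
resource "aws_sqs_queue" "karpenter_interruption" {
  name                      = "example-karpenter-interruption"
  message_retention_seconds = 300
}

# EventBridge rule matching EC2 Spot interruption warnings
resource "aws_cloudwatch_event_rule" "spot_interruption" {
  name = "example-karpenter-spot-interruption"
  event_pattern = jsonencode({
    source      = ["aws.ec2"]
    detail-type = ["EC2 Spot Instance Interruption Warning"]
  })
}

# Forward matching events to the SQS queue
resource "aws_cloudwatch_event_target" "spot_interruption" {
  rule = aws_cloudwatch_event_rule.spot_interruption.name
  arn  = aws_sqs_queue.karpenter_interruption.arn
}
```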
## Custom Resource Definition (CRD) Management
Karpenter ships with a few Custom Resource Definitions (CRDs). In earlier versions of this component, installing a
new version of the `karpenter` helm chart did not upgrade the CRDs at the same time, requiring manual steps to upgrade
CRDs after deploying the latest chart. However, Karpenter now supports an additional, independent helm chart for CRD
management. This helm chart, `karpenter-crd`, can be installed alongside the `karpenter` helm chart to automatically
manage the lifecycle of these CRDs.
To deploy the `karpenter-crd` helm chart, set `var.crd_chart_enabled` to `true`. (Installing the `karpenter-crd` chart
is recommended. `var.crd_chart_enabled` defaults to `false` to preserve backward compatibility with older versions of
this component.)
## EKS Cluster Configuration
This component supports two methods for obtaining EKS cluster information, controlled by the
`account_map_enabled` variable:
1. **Direct Variables (Recommended)**: Set `account_map_enabled: false` and provide EKS cluster details via the `eks` object variable
2. **Internal Remote State (Default)**: Set `account_map_enabled: true` (default) to fetch EKS cluster details from Terraform remote state using `eks_component_name`
### Using Atmos State Functions (Recommended)
When using [Atmos](https://atmos.tools), you can use the `!terraform.state` function to read
EKS cluster outputs from another component's Terraform state and pass them as variables.
```yaml
components:
terraform:
eks/karpenter:
vars:
enabled: true
name: "karpenter"
account_map_enabled: false
eks:
eks_cluster_id: !terraform.state eks/cluster eks_cluster_id
eks_cluster_arn: !terraform.state eks/cluster eks_cluster_arn
eks_cluster_endpoint: !terraform.state eks/cluster eks_cluster_endpoint
eks_cluster_certificate_authority_data: !terraform.state eks/cluster eks_cluster_certificate_authority_data
eks_cluster_identity_oidc_issuer: !terraform.state eks/cluster eks_cluster_identity_oidc_issuer
karpenter_iam_role_arn: !terraform.state eks/cluster karpenter_iam_role_arn
chart_repository: "oci://public.ecr.aws/karpenter"
chart: "karpenter"
chart_version: "1.6.0"
```
For more information on `!terraform.state`, see the
[Atmos documentation](https://atmos.tools/core-concepts/stacks/templating/functions/terraform.state/).
### Direct Variables Approach
You can also provide EKS cluster information directly:
```yaml
components:
terraform:
eks/karpenter:
vars:
enabled: true
name: "karpenter"
account_map_enabled: false
eks:
eks_cluster_id: "my-eks-cluster"
eks_cluster_arn: "arn:aws:eks:us-east-1:123456789012:cluster/my-eks-cluster"
eks_cluster_endpoint: "https://ABCDEF1234567890.gr7.us-east-1.eks.amazonaws.com"
eks_cluster_certificate_authority_data: "LS0tLS1CRUdJTi..."
eks_cluster_identity_oidc_issuer: "https://oidc.eks.us-east-1.amazonaws.com/id/ABCDEF1234567890"
karpenter_iam_role_arn: "arn:aws:iam::123456789012:role/my-eks-cluster-karpenter-node"
chart_repository: "oci://public.ecr.aws/karpenter"
chart: "karpenter"
chart_version: "1.6.0"
```
### Internal Remote State Approach (Default)
The default approach uses Cloud Posse's remote state module to fetch EKS cluster information:
```yaml
components:
terraform:
eks/karpenter:
vars:
enabled: true
# account_map_enabled defaults to true
eks_component_name: "eks/cluster-blue"
```
## Troubleshooting
For Karpenter issues, check out the [Karpenter Troubleshooting Guide](https://karpenter.sh/docs/troubleshooting/).
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended
`chart_repository` (`string`) required
Repository URL where to locate the requested chart
`region` (`string`) required
AWS Region
`resources` required
The CPU and memory of the deployment's limits and requests
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
### Optional Variables
`account_map_enabled` (`bool`) optional
Enable the account map component lookup. When disabled, use the `eks` variable to provide static EKS cluster configuration.
**Default value:** `true`
`additional_settings` (`any`) optional
Additional settings to merge into the Karpenter controller settings.
This is useful for setting featureGates or other advanced settings that may
vary by chart version. These settings will be merged with the base settings
and take precedence over any conflicting keys.
Example:
additional_settings = {
  featureGates = {
    nodeRepair = false
    reservedCapacity = true
    spotToSpotConsolidation = false
  }
}
**Default value:** `{ }`
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history)
**Default value:** `null`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails
**Default value:** `true`
`crd_chart` (`string`) optional
The name of the Karpenter CRD chart to be installed, if `var.crd_chart_enabled` is set to `true`.
**Default value:** `"karpenter-crd"`
`crd_chart_enabled` (`bool`) optional
`karpenter-crd` can be installed as an independent helm chart to manage the lifecycle of Karpenter CRDs. Set to `true` to install this CRD helm chart before the primary karpenter chart.
**Default value:** `false`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`interruption_handler_enabled` (`bool`) optional
If `true`, deploy a SQS queue and Event Bridge rules to enable interruption handling by Karpenter.
https://karpenter.sh/docs/concepts/disruption/#interruption
**Default value:** `true`
`interruption_queue_message_retention` (`number`) optional
The message retention in seconds for the interruption handler SQS queue.
**Default value:** `300`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`logging` optional
A subset of the logging settings for the Karpenter controller
**Type:**
```hcl
object({
enabled = optional(bool, true)
level = optional(object({
controller = optional(string, "info")
global = optional(string, "info")
webhook = optional(string, "error")
}), {})
})
```
**Default value:** `{ }`
`metrics_enabled` (`bool`) optional
Whether to expose the Karpenter's Prometheus metric
**Default value:** `true`
`metrics_port` (`number`) optional
Container port to use for metrics
**Default value:** `8080`
`replicas` (`number`) optional
The number of Karpenter controller replicas to run
**Default value:** `2`
`settings` optional
A subset of the settings for the Karpenter controller.
Some settings are implicitly set by this component, such as `clusterName` and
`interruptionQueue`. All settings can be overridden by providing a `settings`
section in the `chart_values` variable. The settings provided here are the ones
most likely to be set to values other than the defaults, and are provided here for convenience.
**Type:**
```hcl
object({
batch_idle_duration = optional(string, "1s")
batch_max_duration = optional(string, "10s")
})
```
**Default value:** `{ }`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`karpenter` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | Deploy Karpenter helm chart
`karpenter_crd` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | Deploy karpenter-crd helm chart "karpenter-crd" can be installed as an independent helm chart to manage the lifecycle of Karpenter CRDs
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudwatch_event_rule.interruption_handler`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_rule) (resource)
- [`aws_cloudwatch_event_target.interruption_handler`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target) (resource)
- [`aws_iam_policy.v1alpha`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role_policy_attachment.v1alpha`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_sqs_queue.interruption_handler`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sqs_queue) (resource)
- [`aws_sqs_queue_policy.interruption_handler`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sqs_queue_policy) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_iam_policy_document.interruption_handler`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## karpenter-node-pool
This component deploys [Karpenter NodePools](https://karpenter.sh/docs/concepts/nodepools/) to an EKS cluster.
Karpenter is still rapidly evolving. At this time, this component only supports a subset of the features
available in Karpenter. Support could be added for additional features as needed.
Not supported:
- Elements of NodePool:
- [`template.spec.kubelet`](https://karpenter.sh/docs/concepts/nodepools/#spectemplatespeckubelet)
- Elements of NodeClass:
- `subnetSelectorTerms`. This component only supports selecting all public or all private subnets of the referenced
EKS cluster.
- `securityGroupSelectorTerms`. This component only supports selecting the security group of the referenced EKS
cluster.
- `amiSelectorTerms`. Such terms override the `amiFamily` setting, which is the only AMI selection supported by this
component.
- `instanceStorePolicy`
- `associatePublicIPAddress`
## Usage
**Stack Level**: Regional
If provisioning more than one NodePool, it is
[best practice](https://aws.github.io/aws-eks-best-practices/karpenter/#creating-nodepools) to create NodePools that are
mutually exclusive or weighted.
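As a minimal sketch of that best practice, two pools can be made mutually exclusive by giving each a disjoint `karpenter.sh/capacity-type` requirement and a weight, so Karpenter prefers one and falls back to the other. The pool names, weights, and values below are illustrative, not defaults of this component:

```yaml
# Hypothetical sketch: two weighted, mutually exclusive NodePools.
# "spot" is preferred (higher weight); "on-demand" is the fallback.
node_pools:
  spot:
    name: spot
    weight: 100
    requirements:
      - key: "karpenter.sh/capacity-type"
        operator: "In"
        values: ["spot"]
  on-demand:
    name: on-demand
    weight: 50
    requirements:
      - key: "karpenter.sh/capacity-type"
        operator: "In"
        values: ["on-demand"]
```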
## Configuration Approaches
This component supports three configuration approaches controlled by the `account_map_enabled` variable.
### Option 1: Direct Input Variables (`account_map_enabled: false`)
Set `account_map_enabled: false` and provide the required values via the `eks` and `vpc` object variables.
This approach is simpler and avoids cross-component dependencies.
Example using direct inputs:
```yaml
components:
  terraform:
    eks/karpenter-node-pool:
      vars:
        enabled: true
        account_map_enabled: false
        name: "karpenter-node-pool"
        eks:
          eks_cluster_id: "my-cluster"
          eks_cluster_endpoint: "https://XXXXXXXX.gr7.us-west-2.eks.amazonaws.com"
          eks_cluster_certificate_authority_data: "LS0tLS1CRUdJTi..."
          karpenter_iam_role_name: "my-cluster-karpenter"
        vpc:
          private_subnet_ids:
            - "subnet-xxxxxxxxx"
            - "subnet-yyyyyyyyy"
        # ... node_pools configuration
```
### Option 2: Using Atmos `!terraform.state` (Recommended)
For Atmos users, the recommended approach is to use `!terraform.state` to dynamically fetch values from
other component outputs and pass them as direct input variables. This keeps dependencies explicit in your
stack configuration without using internal remote-state modules.
Example using Atmos `!terraform.state`:
```yaml
components:
  terraform:
    eks/karpenter-node-pool:
      vars:
        enabled: true
        account_map_enabled: false
        name: "karpenter-node-pool"
        eks:
          eks_cluster_id: !terraform.state eks/cluster eks_cluster_id
          eks_cluster_endpoint: !terraform.state eks/cluster eks_cluster_endpoint
          eks_cluster_certificate_authority_data: !terraform.state eks/cluster eks_cluster_certificate_authority_data
          karpenter_iam_role_name: !terraform.state eks/cluster karpenter_iam_role_name
        vpc:
          private_subnet_ids: !terraform.state vpc private_subnet_ids
        node_pools:
          default:
            name: default
            private_subnets_enabled: true
            # ... rest of node pool configuration
```
This approach:
- Uses native Atmos functionality for cross-component references
- Makes dependencies explicit and visible in stack configuration
- Does not use internal remote-state modules (cleaner component code)
- Supports referencing components in different stacks with extended syntax
To reference a component in a different stack, insert the stack name between the component and the output (replace `<stack>` with the target stack name):

```yaml
eks:
  eks_cluster_id: !terraform.state eks/cluster <stack> eks_cluster_id
```
### Option 3: Internal Remote State Modules (`account_map_enabled: true`, default, deprecated)
> **Warning:** The `account_map_enabled: true` setting and `eks_component_name`/`vpc_component_name` variables
> are deprecated and will be removed in a future version. Please migrate to using `!terraform.state` (Option 2)
> or direct input variables (Option 1).
When `account_map_enabled` is `true` (the default), the component uses internal CloudPosse remote-state modules
to fetch EKS cluster and VPC information. This approach is being phased out in favor of explicit variable passing
via `!terraform.state` which provides better visibility into component dependencies.
Example using CloudPosse remote state:
```yaml
components:
  terraform:
    eks/karpenter-node-pool:
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        account_map_enabled: true # default, can be omitted
        eks_component_name: eks/cluster
        vpc_component_name: vpc
        name: "karpenter-node-pool"
        # https://karpenter.sh/v0.36.0/docs/concepts/nodepools/
        node_pools:
          default:
            name: default
            # Whether to place EC2 instances launched by Karpenter into VPC private subnets. Set it to `false` to use public subnets
            private_subnets_enabled: true
            disruption:
              consolidation_policy: WhenUnderutilized
              consolidate_after: 1h
              max_instance_lifetime: 336h
              budgets:
                # This budget allows 0 disruptions during business hours (from 9am to 5pm) on weekdays
                - schedule: "0 9 * * mon-fri"
                  duration: 8h
                  nodes: "0"
            # The total CPU limit for all pods running on EC2 instances launched by Karpenter. Maps to spec.limits.cpu in the Karpenter NodePool
            total_cpu_limit: "100"
            # The total memory limit for all pods running on EC2 instances launched by Karpenter. Maps to spec.limits.memory in the Karpenter NodePool
            total_memory_limit: "1000Gi"
            # Additional GPU resource limits. Merged into spec.limits in the Karpenter NodePool
            gpu_total_limits:
              "nvidia.com/gpu": "1"
            # The weight of the node pool. See https://karpenter.sh/docs/concepts/scheduling/#weighted-nodepools
            weight: 50
            # Taints to apply to the nodes in the node pool. See https://karpenter.sh/docs/concepts/nodeclasses/#spectaints
            taints:
              - key: "node.kubernetes.io/unreachable"
                effect: "NoExecute"
                value: "true"
            # Taints to apply to the nodes in the node pool at startup. See https://karpenter.sh/docs/concepts/nodeclasses/#specstartuptaints
            startup_taints:
              - key: "node.kubernetes.io/unreachable"
                effect: "NoExecute"
                value: "true"
            # Metadata options for the node pool. See https://karpenter.sh/docs/concepts/nodeclasses/#specmetadataoptions
            metadata_options:
              httpEndpoint: "enabled" # allows the node to call the AWS metadata service
              httpProtocolIPv6: "disabled"
              httpPutResponseHopLimit: 2
              httpTokens: "required"
            # The AMI used by the Karpenter provisioner when provisioning nodes. Based on the value set for amiFamily,
            # Karpenter will automatically query for the appropriate EKS optimized AMI via AWS Systems Manager (SSM).
            # Bottlerocket, AL2, Ubuntu
            # https://karpenter.sh/v0.18.0/aws/provisioning/#amazon-machine-image-ami-family
            ami_family: AL2
            # Karpenter provisioner block device mappings.
            block_device_mappings:
              - deviceName: /dev/xvda
                ebs:
                  volumeSize: 200Gi
                  volumeType: gp3
                  encrypted: true
                  deleteOnTermination: true
            # Set acceptable (In) and unacceptable (Out) Kubernetes and Karpenter values for node provisioning based on
            # Well-Known Labels and cloud-specific settings. These can include instance types, zones, computer architecture,
            # and capacity type (such as AWS spot or on-demand).
            # See https://karpenter.sh/v0.18.0/provisioner/#specrequirements for more details
            requirements:
              - key: "karpenter.sh/capacity-type"
                operator: "In"
                values:
                  - "on-demand"
                  - "spot"
              - key: "node.kubernetes.io/instance-type"
                operator: "In"
                # See https://aws.amazon.com/ec2/instance-explorer/ and https://aws.amazon.com/ec2/instance-types/
                # Values limited by DenyEC2InstancesWithoutEncryptionInTransit service control policy
                # See https://github.com/cloudposse/terraform-aws-service-control-policies/blob/master/catalog/ec2-policies.yaml
                # Karpenter recommends allowing at least 20 instance types to ensure availability.
                values:
                  - "c5n.2xlarge"
                  - "c5n.xlarge"
                  - "c5n.large"
                  - "c6i.2xlarge"
                  - "c6i.xlarge"
                  - "c6i.large"
                  - "m5n.2xlarge"
                  - "m5n.xlarge"
                  - "m5n.large"
                  - "m5zn.2xlarge"
                  - "m5zn.xlarge"
                  - "m5zn.large"
                  - "m6i.2xlarge"
                  - "m6i.xlarge"
                  - "m6i.large"
                  - "r5n.2xlarge"
                  - "r5n.xlarge"
                  - "r5n.large"
                  - "r6i.2xlarge"
                  - "r6i.xlarge"
                  - "r6i.large"
              - key: "kubernetes.io/arch"
                operator: "In"
                values:
                  - "amd64"
```
## Variables
### Required Variables
`node_pools` required
Configuration for node pools. See code for details.
**Type:**
```hcl
map(object({
# The name of the Karpenter provisioner. The map key is used if this is not set.
name = optional(string)
# Whether to place EC2 instances launched by Karpenter into VPC private subnets. Set it to `false` to use public subnets.
private_subnets_enabled = bool
# The Disruption spec controls how Karpenter scales down the node group.
# See the example (sadly not the specific `spec.disruption` documentation) at https://karpenter.sh/docs/concepts/nodepools/ for details
disruption = optional(object({
# Describes which types of Nodes Karpenter should consider for consolidation.
# If using 'WhenUnderutilized', Karpenter will consider all nodes for consolidation and attempt to remove or
# replace Nodes when it discovers that the Node is underutilized and could be changed to reduce cost.
# If using `WhenEmpty`, Karpenter will only consider nodes for consolidation that contain no workload pods.
consolidation_policy = optional(string, "WhenUnderutilized")
# The amount of time Karpenter should wait after discovering a consolidation decision (`go` duration string, s, m, or h).
# This value can currently (v0.36.0) only be set when the consolidationPolicy is 'WhenEmpty'.
# You can choose to disable consolidation entirely by setting the string value 'Never' here.
# Earlier versions of Karpenter called this field `ttl_seconds_after_empty`.
consolidate_after = optional(string)
# The amount of time a Node can live on the cluster before being removed (`go` duration string, s, m, or h).
# You can choose to disable expiration entirely by setting the string value 'Never' here.
# This module sets a default of 336 hours (14 days), while the Karpenter default is 720 hours (30 days).
# Note that Karpenter calls this field "expiresAfter", and earlier versions called it `ttl_seconds_until_expired`,
# but we call it "max_instance_lifetime" to match the corresponding field in EC2 Auto Scaling Groups.
max_instance_lifetime = optional(string, "336h")
    # Budgets control the maximum number of NodeClaims owned by this NodePool that can be terminating at once.
# See https://karpenter.sh/docs/concepts/disruption/#disruption-budgets for details.
# A percentage is the percentage of the total number of active, ready nodes not being deleted, rounded up.
# If there are multiple active budgets, Karpenter uses the most restrictive value.
# If left undefined, this will default to one budget with a value of nodes: 10%.
# Note that budgets do not prevent or limit involuntary terminations.
# Example:
# On Weekdays during business hours, don't do any deprovisioning.
    # budgets = [{
    #   schedule = "0 9 * * mon-fri"
    #   duration = "8h"
    #   nodes    = "0"
    # }]
budgets = optional(list(object({
# The schedule specifies when a budget begins being active, using extended cronjob syntax.
# See https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax for syntax details.
# Timezones are not supported. This field is required if Duration is set.
schedule = optional(string)
# Duration determines how long a Budget is active after each Scheduled start.
# If omitted, the budget is always active. This is required if Schedule is set.
# Must be a whole number of minutes and hours, as cron does not work in seconds,
# but since Go's `duration.String()` always adds a "0s" at the end, that is allowed.
duration = optional(string)
# The percentage or number of nodes that Karpenter can scale down during the budget.
nodes = string
# Reasons can be one of Drifted, Underutilized, or Empty
# If omitted, it’s assumed that the budget applies to all reasons.
# See https://karpenter.sh/v1.1/concepts/disruption/#reasons
reasons = optional(list(string))
})), [])
}), {})
# Karpenter provisioner total CPU limit for all pods running on the EC2 instances launched by Karpenter
total_cpu_limit = string
# Karpenter provisioner total memory limit for all pods running on the EC2 instances launched by Karpenter
total_memory_limit = string
# Additional resource limits (e.g., GPU, custom resources) to merge into spec.limits. Example: {"nvidia.com/gpu" = "1"}
gpu_total_limits = optional(map(string), {})
# Set a weight for this node pool.
# See https://karpenter.sh/docs/concepts/scheduling/#weighted-nodepools
weight = optional(number, 50)
labels = optional(map(string))
annotations = optional(map(string))
# Karpenter provisioner taints configuration. See https://aws.github.io/aws-eks-best-practices/karpenter/#create-provisioners-that-are-mutually-exclusive for more details
taints = optional(list(object({
key = string
effect = string
value = optional(string)
})))
startup_taints = optional(list(object({
key = string
effect = string
value = optional(string)
})))
# Karpenter node metadata options. See https://karpenter.sh/docs/concepts/nodeclasses/#specmetadataoptions for more details
metadata_options = optional(object({
httpEndpoint = optional(string, "enabled")
httpProtocolIPv6 = optional(string, "disabled")
httpPutResponseHopLimit = optional(number, 2)
# httpTokens can be either "required" or "optional"
httpTokens = optional(string, "required")
}), {})
# Enable detailed monitoring for EC2 instances. See https://karpenter.sh/docs/concepts/nodeclasses/#specdetailedmonitoring
detailed_monitoring = optional(bool, false)
# User data script to pass to EC2 instances. See https://karpenter.sh/docs/concepts/nodeclasses/#specuserdata
user_data = optional(string, null)
# ami_family dictates the default bootstrapping logic.
# It is only required if you do not specify amiSelectorTerms.alias
ami_family = optional(string, null)
# Selectors for the AMI used by Karpenter provisioner when provisioning nodes.
# Usually use { alias = "@latest" } but version can be pinned instead of "latest".
# Based on the ami_selector_terms, Karpenter will automatically query for the appropriate EKS optimized AMI via AWS Systems Manager (SSM)
ami_selector_terms = list(any)
# Karpenter nodes block device mappings. Controls the Elastic Block Storage volumes that Karpenter attaches to provisioned nodes.
# Karpenter uses default block device mappings for the AMI Family specified.
# For example, the Bottlerocket AMI Family defaults with two block device mappings,
    # and normally you only want to scale `/dev/xvdb`, where containers and their storage are stored.
# Most other AMIs only have one device mapping at `/dev/xvda`.
# See https://karpenter.sh/docs/concepts/nodeclasses/#specblockdevicemappings for more details
block_device_mappings = list(object({
deviceName = string
ebs = optional(object({
volumeSize = string
volumeType = string
deleteOnTermination = optional(bool, true)
encrypted = optional(bool, true)
iops = optional(number)
kmsKeyID = optional(string, "alias/aws/ebs")
snapshotID = optional(string)
throughput = optional(number)
}))
}))
# Set acceptable (In) and unacceptable (Out) Kubernetes and Karpenter values for node provisioning based on Well-Known Labels and cloud-specific settings. These can include instance types, zones, computer architecture, and capacity type (such as AWS spot or on-demand). See https://karpenter.sh/v0.18.0/provisioner/#specrequirements for more details
requirements = list(object({
key = string
operator = string
# Operators like "Exists" and "DoesNotExist" do not require a value
values = optional(list(string))
}))
# Any values for spec.template.spec.kubelet allowed by Karpenter.
# Not fully specified, because they are subject to change.
# See:
# https://karpenter.sh/docs/concepts/nodepools/#spectemplatespeckubelet
# https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
kubelet = optional(any, {})
}))
```
`region` (`string`) required
AWS Region
### Optional Variables
`account_map_enabled` (`bool`) optional
Enable account map and remote state lookups.
When `true`, fetch EKS cluster and VPC information from Terraform remote state.
When `false`, use the `eks` and `vpc` variables to provide values directly.
**Default value:** `true`
`eks_component_name` (`string`) optional
The name of the EKS component. Used to fetch EKS cluster information from remote state
when `account_map_enabled` is `true`.
DEPRECATED: This variable (along with account_map_enabled=true) is deprecated and
will be removed in a future version. Set `account_map_enabled = false` and use
the direct EKS cluster input variables instead.
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`import_profile_name` (`string`) optional
AWS Profile name to use when importing a resource
**Default value:** `null`
`import_role_arn` (`string`) optional
IAM Role ARN to use when importing a resource
**Default value:** `null`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`vpc` optional
VPC configuration to use when `account_map_enabled` is `false`.
Provides subnet IDs for Karpenter to launch instances in.
**Type:**
```hcl
object({
private_subnet_ids = optional(list(string), [])
public_subnet_ids = optional(list(string), [])
})
```
**Default value:**
```hcl
{
"private_subnet_ids": [],
"public_subnet_ids": []
}
```
`vpc_component_name` (`string`) optional
The name of the VPC component. Used to fetch VPC information from remote state
when `account_map_enabled` is `true`.
DEPRECATED: This variable (along with account_map_enabled=true) is deprecated and
will be removed in a future version. Set `account_map_enabled = false` and use
the direct subnet ID input variables instead.
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ec2_node_classes`
Deployed Karpenter EC2NodeClass
`node_pools`
Deployed Karpenter NodePool
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_manifest.ec2_node_class`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/manifest) (resource)
- [`kubernetes_manifest.node_pool`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/manifest) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## keda
This component is used to install the KEDA operator.
For an overview of how KEDA scales deployments using triggers and a `ScaledObject` (a light wrapper around the Kubernetes HPA), see:
https://keda.sh/docs/2.9/concepts/scaling-deployments/#overview
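As a sketch of what KEDA manages once this component is installed, a `ScaledObject` might look like the following. The deployment name, namespace, queue URL, and thresholds are illustrative and are not created by this component:

```yaml
# Hypothetical ScaledObject: scales the "my-app" Deployment based on SQS queue depth.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
  namespace: my-namespace
spec:
  scaleTargetRef:
    name: my-app            # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-2.amazonaws.com/123456789012/my-queue
        queueLength: "5"    # target messages per replica
        awsRegion: "us-east-2"
```

KEDA reconciles this into an HPA behind the scenes, polling the trigger source and adjusting replicas between the configured bounds.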
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to overwrite the catalog values.
```yaml
components:
  terraform:
    eks/keda:
      vars:
        enabled: true
        name: keda
        create_namespace: true
        kubernetes_namespace: "keda"
        chart_repository: "https://kedacore.github.io/charts"
        chart: "keda"
        chart_version: "2.13.2"
        chart_values: {}
        timeout: 180
```
## Variables
### Required Variables
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"keda"`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `"2.8"`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `"Used for autoscaling from external metrics configured as triggers."`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`rbac_enabled` (`bool`) optional
Service Account for pods.
**Default value:** `true`
`repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://kedacore.github.io/charts"`
`resources` (`any`) optional
A sub-nested map of deployment to resources. e.g. \{ operator = \{ requests = \{ cpu = 100m, memory = 100Mi \}, limits = \{ cpu = 200m, memory = 200Mi \} \} \}
**Default value:** `null`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
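As an illustration of the shape described above, a descriptor that renders `stage-name` could be declared in a stack like this (the `stack` descriptor name and format string are hypothetical examples, not defaults of this module):

```yaml
components:
  terraform:
    eks/keda:
      vars:
        descriptor_formats:
          stack:
            format: "%v-%v"
            labels: ["stage", "name"]
```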
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release.
`service_account_name`
Kubernetes Service Account name
`service_account_namespace`
Kubernetes Service Account namespace
`service_account_policy_arn`
IAM policy ARN
`service_account_policy_id`
IAM policy ID
`service_account_policy_name`
IAM policy name
`service_account_role_arn`
IAM role ARN
`service_account_role_name`
IAM role name
`service_account_role_unique_id`
IAM role unique ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.6.0`
- `kubernetes`, version: `>= 2.9.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local path) | n/a
`keda` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## loki
Grafana Loki is a set of resources that can be combined into a fully featured logging stack. Unlike other logging
systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on a
filesystem.
This component deploys the [grafana/loki](https://github.com/grafana/loki/tree/main/production/helm/loki) helm chart.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
eks/loki:
vars:
enabled: true
name: loki
alb_controller_ingress_group_component_name: eks/alb-controller-ingress-group/internal
```
:::important
We recommend using an internal ALB for logging services. You must connect to the private network to access the Loki
endpoint.
:::
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`additional_schema_config` optional
A list of additional `configs` for the `schemaConfig` for the Loki chart. This list will be merged with the default schemaConfig.config defined by `var.default_schema_config`
**Type:**
```hcl
list(object({
from = string
object_store = string
schema = string
store = string
index = object({
prefix = string
period = string
})
}))
```
**Default value:** `[ ]`
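For instance, to layer a newer schema period on top of the default schema config (the date and component path here are illustrative assumptions):

```yaml
components:
  terraform:
    eks/loki:
      vars:
        additional_schema_config:
          - from: "2025-01-01"
            object_store: s3
            schema: v13
            store: tsdb
            index:
              prefix: index_
              period: 24h
```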
`alb_controller_ingress_group_component_name` (`string`) optional
The name of the eks/alb-controller-ingress-group component. This should be an internal-facing ALB
**Default value:** `"eks/alb-controller-ingress-group"`
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`basic_auth_enabled` (`bool`) optional
If `true`, enable Basic Auth for the Ingress service. A user and password will be created and stored in AWS SSM.
**Default value:** `true`
`chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"loki"`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `"Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus."`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://grafana.github.io/helm-charts"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`default_schema_config` optional
A list of default `configs` for the `schemaConfig` for the Loki chart. For new installations, the default schema config doesn't change. See https://grafana.com/docs/loki/latest/operations/storage/schema/#new-loki-installs
**Type:**
```hcl
list(object({
from = string
object_store = string
schema = string
store = string
index = object({
prefix = string
period = string
})
}))
```
**Default value:**
```hcl
[
{
"from": "2024-04-01",
"index": {
"period": "24h",
"prefix": "index_"
},
"object_store": "s3",
"schema": "v13",
"store": "tsdb"
}
]
```
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
Kubernetes namespace to install the release into
**Default value:** `"monitoring"`
`ssm_path_template` (`string`) optional
A string template to be used to create paths in AWS SSM to store basic auth credentials for this service
**Default value:** `"/%s/basic-auth/%s"`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `300`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`basic_auth_username`
If enabled, the username for basic auth
`id`
The ID of this deployment
`metadata`
Block status of the deployed release
`ssm_path_basic_auth_password`
If enabled, the path in AWS SSM to find the password for basic auth
`url`
The hostname used for this Loki deployment
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
- `random`, version: `>= 2.3`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 2.3`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`alb_controller_ingress_group` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`basic_auth_ssm_parameters` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local path) | n/a
`loki` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`loki_storage` | 4.11.0 | [`cloudposse/s3-bucket/aws`](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/4.11.0) | n/a
`loki_tls_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`random_pet.basic_auth_username`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
- [`random_string.basic_auth_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## metrics-server
This component creates a Helm release for [metrics-server](https://github.com/kubernetes-sigs/metrics-server), a
Kubernetes addon that provides resource usage metrics, used in particular by other addons such as the Horizontal Pod Autoscaler.
## Usage
**Stack Level**: Regional
Once the catalog file is created, it can be imported as follows.
```yaml
import:
- catalog/eks/metrics-server
...
```
The default catalog values (e.g. `stacks/catalog/eks/metrics-server.yaml`):
```yaml
components:
terraform:
metrics-server:
backend:
s3:
workspace_key_prefix: metrics-server
vars:
enabled: true
chart_version: 5.10.4
rbac_enabled: true
# You can use `chart_values` to set any other chart options. Treat `chart_values` as the root of the doc.
#
# # For example
# ---
# chart_values:
# enableShield: false
chart_values: {}
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region.
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"metrics-server"`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://kubernetes-sigs.github.io/metrics-server/"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `"3.11.0"`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `true`.
**Default value:** `true`
`eks_component_name` (`string`) optional
The name of the EKS component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kube_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
The namespace to install the release into.
**Default value:** `"metrics-server"`
`metrics_server_component` (`string`) optional
The name of the Metrics Server component
**Default value:** `"eks-metrics-server"`
`rbac_enabled` (`bool`) optional
If `true`, enable RBAC and create a Service Account for pods.
**Default value:** `true`
`resources` optional
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
**Default value:**
```hcl
{
"limits": {
"cpu": "100m",
"memory": "300Mi"
},
"requests": {
"cpu": "20m",
"memory": "60Mi"
}
}
```
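To override these defaults, the `resources` object can be set in the catalog file following the pattern shown in the Usage section above (the values below are illustrative, not recommendations):

```yaml
components:
  terraform:
    metrics-server:
      vars:
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 200m
            memory: 300Mi
```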
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
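As a sketch of the Pascal Case combination described above, the following stack vars (label values are illustrative) would produce an `id` like `EgApp` from a namespace of `eg` and a name of `app`:

```yaml
# Illustrative only: namespace/name values are placeholders.
vars:
  namespace: eg
  name: app
  label_value_case: title
  delimiter: ""
```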
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.14.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local module) | n/a
`metrics_server` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## node-termination-handler
This component creates a Helm release for
[aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) on a Kubernetes cluster.
[aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) is a Kubernetes addon that (by
default) monitors the EC2 IMDS endpoint for scheduled maintenance events, spot instance termination events, and
rebalance recommendation events, and drains and/or cordons nodes upon such events. This ensures that workloads on
Kubernetes are evicted gracefully when a node needs to be terminated.
## Usage
**Stack Level**: Regional
Once the catalog file is created, it can be imported as follows.
```yaml
import:
- catalog/eks/aws-node-termination-handler
...
```
The default catalog values are as follows:
```yaml
components:
terraform:
aws-node-termination-handler:
backend:
s3:
workspace_key_prefix: aws-node-termination-handler
vars:
enabled: true
chart_version: 0.15.3
rbac_enabled: true
# You can use `chart_values` to set any other chart options. Treat `chart_values` as the root of the doc.
#
# # For example
# ---
# chart_values:
# enableShield: false
chart_values: {}
```
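For example, additional chart options can be passed through `chart_values`. The option names below come from the aws-node-termination-handler chart and should be verified against the chart version you pin:

```yaml
# Hypothetical overrides; confirm option names against the
# aws-node-termination-handler chart version in use.
components:
  terraform:
    aws-node-termination-handler:
      vars:
        chart_values:
          # Drain nodes on spot interruption and rebalance recommendations
          enableSpotInterruptionDraining: true
          enableRebalanceMonitoring: true
          # Leave scheduled maintenance events to be handled manually
          enableScheduledEventDraining: false
```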
## Variables
### Required Variables
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region.
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"aws-node-termination-handler"`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://aws.github.io/eks-charts"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `"0.15.3"`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`rbac_enabled` (`bool`) optional
Enable RBAC and create a Service Account for pods.
**Default value:** `true`
`resources` optional
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
**Default value:**
```hcl
{
"limits": {
"cpu": "100m",
"memory": "128Mi"
},
"requests": {
"cpu": "50m",
"memory": "64Mi"
}
}
```
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
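As a sketch, a `descriptor_formats` entry can be supplied through stack vars; the descriptor name `stack` and its format below are arbitrary choices for illustration:

```yaml
# Illustrative only: descriptor name and format are placeholders.
components:
  terraform:
    aws-node-termination-handler:
      vars:
        descriptor_formats:
          stack:
            format: "%v-%v-%v"
            labels: ["tenant", "environment", "stage"]
```

With this configuration, the module's `descriptors` output map would contain a `stack` key whose value is the three labels joined by hyphens.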
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`aws_node_termination_handler` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local module) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_namespace.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## prometheus-scraper
This component provisions an Amazon Managed collector, or scraper, to connect Amazon Managed Prometheus (AMP) with an
EKS cluster.
A common use case for Amazon Managed Service for Prometheus is to monitor Kubernetes clusters managed by Amazon Elastic
Kubernetes Service (Amazon EKS). Kubernetes clusters, and many applications that run within Amazon EKS, automatically
export their metrics for Prometheus-compatible scrapers to access.
Amazon Managed Service for Prometheus provides a fully managed, agentless scraper, or collector, that automatically
discovers and pulls Prometheus-compatible metrics. You don't have to manage, install, patch, or maintain agents or
scrapers. An Amazon Managed Service for Prometheus collector provides reliable, stable, highly available, automatically
scaled collection of metrics for your Amazon EKS cluster. Amazon Managed Service for Prometheus managed collectors work
with Amazon EKS clusters, including EC2 and Fargate.
An Amazon Managed Service for Prometheus collector creates an Elastic Network Interface (ENI) per subnet specified when
creating the scraper. The collector scrapes the metrics through these ENIs, and uses remote_write to push the data to
your Amazon Managed Service for Prometheus workspace using a VPC endpoint. The scraped data never travels on the public
internet.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
eks/prometheus-scraper:
vars:
enabled: true
name: prometheus-scraper
# This refers to the `managed-prometheus/workspace` Terraform component,
# but the component name can be whatever you choose to name the stack component
prometheus_component_name: prometheus
```
### Authenticating with EKS
In order for this managed collector to authenticate with the EKS cluster, update the cluster's auth map after deploying.
Note the `scraper_role_arn` and `clusterrole_username` outputs of this component and set them as `rolearn` and
`username`, respectively, in the `map_additional_iam_roles` input for `eks/cluster`.
```yaml
components:
terraform:
eks/cluster:
vars:
map_additional_iam_roles:
# this role is used to grant the Prometheus scraper access to this cluster. See eks/prometheus-scraper
- rolearn: "arn:aws:iam::111111111111:role/AWSServiceRoleForAmazonPrometheusScraper_111111111111111"
username: "acme-plat-ue2-sandbox-prometheus-scraper"
groups: []
```
Then reapply the `eks/cluster` component.
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `"AWS Managed Prometheus (AMP) scrapper roles and role bindings"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
Kubernetes namespace to install the release into
**Default value:** `"kube-system"`
`prometheus_component_name` (`string`) optional
The name of the Amazon Managed Prometheus workspace component
**Default value:** `"managed-prometheus/workspace"`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `300`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`vpc_component_name` (`string`) optional
The name of the vpc component
**Default value:** `"vpc"`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`clusterrole_username`
The username of the ClusterRole used to give the scraper in-cluster permissions
`scraper_role_arn`
The Amazon Resource Name (ARN) of the IAM role that provides permissions for the scraper to discover, collect, and produce metrics
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local module) | n/a
`prometheus` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`scraper_access` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_prometheus_scraper.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/prometheus_scraper) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## promtail
Promtail is an agent which ships the contents of local logs to a Loki instance.
This component deploys the [grafana/promtail](https://github.com/grafana/helm-charts/tree/main/charts/promtail) Helm
chart and expects `eks/loki` to be deployed.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
eks/promtail:
vars:
enabled: true
name: promtail
```
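As an illustrative extension, chart options can be passed through `chart_values`. The `config.clients` key is how the grafana/promtail chart configures Loki push endpoints, but the key path and the service URL below are assumptions that should be verified against your chart version and `eks/loki` deployment:

```yaml
# Hypothetical: verify `config.clients` and the Loki push URL
# against the promtail chart version and your eks/loki release.
components:
  terraform:
    eks/promtail:
      vars:
        enabled: true
        name: promtail
        chart_values:
          config:
            clients:
              - url: http://loki-gateway.monitoring.svc.cluster.local/loki/api/v1/push
```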
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`alb_controller_ingress_group_component_name` (`string`) optional
The name of the eks/alb-controller-ingress-group component. This should be an internal facing ALB
**Default value:** `"eks/alb-controller-ingress-group"`
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart` (`string`) optional
Chart name to be installed. The chart name can be local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add` but this is not recommended.
**Default value:** `"promtail"`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `"Promtail is an agent which ships the contents of local logs to a Loki instance"`
`chart_repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `"https://grafana.github.io/helm-charts"`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
Kubernetes namespace to install the release into
**Default value:** `"monitoring"`
`loki_component_name` (`string`) optional
The name of the eks/loki component
**Default value:** `"eks/loki"`
`push_api` optional
Describes and configures Promtail to expose a Loki push API server with an Ingress configuration.
- enabled: Set this to `true` to enable this feature
- scrape_config: Optional. This component includes a basic configuration by default, or override the default configuration here.
**Type:**
```hcl
object({
enabled = optional(bool, false)
scrape_config = optional(string, "")
})
```
**Default value:** `{ }`
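As a sketch (values are illustrative, not a definitive configuration), enabling the Loki push API from stack vars might look like:

```yaml
components:
  terraform:
    eks/promtail:
      vars:
        enabled: true
        name: promtail
        push_api:
          # Expose a Loki push API server with an Ingress configuration
          enabled: true
          # Leave scrape_config empty to use the component's built-in default
          scrape_config: ""
```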
`scrape_configs` (`list(string)`) optional
A list of local paths, starting with this component's base path, for Promtail scrape configs
**Default value:**
```hcl
[
"scrape_config/default_kubernetes_pods.yaml"
]
```
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `300`
`verify` (`bool`) optional
Verify the package before installing it. Helm uses a provenance file to verify the integrity of the chart; this must be hosted alongside the chart
**Default value:** `false`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
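For instance, a hypothetical descriptor that renders the stage and environment could be defined as follows (the descriptor name and format string are assumptions for the example):

```hcl
descriptor_formats = {
  stack = {
    # Produces e.g. "prod-use2" from the normalized label values
    format = "%v-%v"
    labels = ["stage", "environment"]
  }
}
```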
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
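As an illustration, restricting the generated tags to a subset of labels might be configured in stack vars like this (the chosen labels are an assumption for the example):

```yaml
vars:
  labels_as_tags:
    - namespace
    - stage
    - name
```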
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`alb_controller_ingress_group` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`chart_values` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`loki` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`promtail` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_ssm_parameter.basic_auth_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## redis
This component installs `redis` for EKS clusters. This is a Self Hosted Redis Cluster installed on EKS.
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to overwrite the catalog values.
`stacks/catalog/eks/redis/defaults` file (base component for default Redis settings):
```yaml
components:
terraform:
eks/redis/defaults:
metadata:
component: eks/redis
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: redis
tags:
Team: sre
Service: redis
create_namespace: true
kubernetes_namespace: "redis"
# https://github.com/bitnami/charts/tree/master/bitnami/redis
chart_repository: https://charts.bitnami.com/bitnami
chart_version: "17.1.0"
chart: "redis"
timeout: 180
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
# Set a specific version of Redis using the image.tag and image.repository values.
# Documentation: https://docs.bitnami.com/kubernetes/infrastructure/redis/configuration/change-image-version/
# Defaults: https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml#L81-L82
chart_values:
image:
tag: 7.0.4-debian-11-r11
repository: bitnami/redis
# Disabling Manifest Experiment disables stored metadata with Terraform state
# Otherwise, the state will show changes on all plans
helm_manifest_experiment_enabled: false
```
`stacks/catalog/eks/redis/dev` file (derived component for "dev" specific settings):
```yaml
import:
- catalog/eks/redis/defaults
components:
terraform:
eks/redis/dev:
metadata:
component: eks/redis
inherits:
- eks/redis/defaults
vars: {}
```
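Because the derived component inherits everything from the defaults, environment-specific overrides can be deep-merged through `vars`. For example, a hypothetical dev override that trims the replica count (`replica.replicaCount` follows the Bitnami chart's values layout):

```yaml
import:
  - catalog/eks/redis/defaults
components:
  terraform:
    eks/redis/dev:
      metadata:
        component: eks/redis
      inherits:
        - eks/redis/defaults
      vars:
        chart_values:
          # A single replica is usually sufficient for dev workloads
          replica:
            replicaCount: 1
```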
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be a local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system where the repository has been added with `helm repo add`, but this is not recommended.
`chart_repository` (`string`) required
Repository URL where to locate the requested chart.
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region.
`resources` required
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
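For example, when the kubeconfig's context names follow a predictable pattern, a template such as the following could be supplied (file path and format string are illustrative):

```yaml
vars:
  kubeconfig_file_enabled: true
  kubeconfig_file: /dev/shm/kubeconfig
  # "%s" is replaced with the EKS cluster name
  kubeconfig_context_format: "eks_%s"
```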
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`rbac_enabled` (`bool`) optional
Enable RBAC and create a Service Account for pods.
**Default value:** `true`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`redis` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_namespace.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## redis-operator
This component installs `redis-operator` for EKS clusters. Redis Operator creates/configures/manages high availability
redis with sentinel automatic failover atop Kubernetes.
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to overwrite the catalog values.
`stacks/catalog/eks/redis-operator/defaults` file (base component for default redis-operator settings):
```yaml
components:
terraform:
eks/redis-operator/defaults:
metadata:
component: eks/redis-operator
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: redis-operator
tags:
Team: sre
Service: redis-operator
create_namespace: true
kubernetes_namespace: "redis-operator"
# https://github.com/spotahome/redis-operator
chart_repository: https://spotahome.github.io/redis-operator
chart_version: "3.1.4"
chart: "redis-operator"
timeout: 180
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
# Set a specific version of Redis using the image.tag and image.repository values.
# Defaults: https://github.com/spotahome/redis-operator/blob/master/charts/redisoperator/values.yaml#L6
chart_values:
image:
repository: quay.io/spotahome/redis-operator
tag: v1.1.1
```
`stacks/catalog/eks/redis-operator/dev` file (derived component for "dev" specific settings):
```yaml
import:
- catalog/eks/redis-operator/defaults
components:
terraform:
eks/redis-operator/dev:
metadata:
component: eks/redis-operator
inherits:
- eks/redis-operator/defaults
vars: {}
```
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be a local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system where the repository has been added with `helm repo add`, but this is not recommended.
`chart_repository` (`string`) required
Repository URL where to locate the requested chart.
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region.
`resources` required
The cpu and memory of the deployment's limits and requests.
**Type:**
```hcl
object({
limits = object({
cpu = string
memory = string
})
requests = object({
cpu = string
memory = string
})
})
```
### Optional Variables
`atomic` (`bool`) optional
If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values.
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`rbac_enabled` (`bool`) optional
Enable RBAC and create a Service Account for pods.
**Default value:** `true`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
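As an illustration (the descriptor name `stack` and its labels below are hypothetical), each map entry adds a key to the `descriptors` output, rendered by passing the normalized label values to `format()`:

```hcl
descriptor_formats = {
  # "stack" is an arbitrary descriptor name; any key works.
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}
```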
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
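To sketch how the ID elements compose under the default `label_order` and `-` delimiter (the values below are hypothetical), these inputs yield the ID `eg-uw2-prod-app`:

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "eg"   # hypothetical values
  environment = "uw2"
  stage       = "prod"
  name        = "app"
}

# module.label.id == "eg-uw2-prod-app"
```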
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.0, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local path) | n/a
`redis_operator` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_namespace.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## reloader
This component installs the [Stakater Reloader](https://github.com/stakater/Reloader) for EKS clusters. `reloader` can
watch `ConfigMap`s and `Secret`s for changes and use them to trigger rolling upgrades on pods via their associated
`DeploymentConfig`s, `Deployment`s, `DaemonSet`s, `StatefulSet`s, and `Rollout`s.
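As a sketch based on Reloader's upstream documentation (the workload name is hypothetical), annotating a workload opts it in to automatic rolling restarts when any `ConfigMap` or `Secret` it references changes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical workload name
  annotations:
    # Reload whenever any ConfigMap or Secret referenced by this Deployment changes
    reloader.stakater.com/auto: "true"
spec:
  # ...remainder of the Deployment spec unchanged
```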
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to overwrite the catalog values.
```yaml
components:
  terraform:
    eks/reloader:
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        name: reloader
        create_namespace: true
        kubernetes_namespace: "reloader"
        repository: "https://stakater.github.io/stakater-charts"
        chart: "reloader"
        chart_version: "v0.0.124"
        timeout: 180
```
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be a local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add`, but this is not recommended.
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, the installation process purges the chart on failure. The wait flag will be set automatically if atomic is used.
**Default value:** `true`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed.
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails.
**Default value:** `true`
`create_namespace` (`bool`) optional
Create the Kubernetes namespace if it does not yet exist
**Default value:** `true`
`description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
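For example (the kubeconfig path below is hypothetical), with `kubeconfig_file_enabled` set, a format string such as `eks_%s` names the `kubectl` context after the cluster:

```yaml
vars:
  kubeconfig_file_enabled: true
  kubeconfig_file: "/path/to/kubeconfig"  # hypothetical path
  kubeconfig_context_format: "eks_%s"     # "%s" is replaced with the cluster name
```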
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`repository` (`string`) optional
Repository URL where to locate the requested chart.
**Default value:** `null`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`values` (`any`) optional
YAML-valid specification of values to be passed to the helm_release resource
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`. Defaults to `true`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`metadata`
Block status of the deployed release
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.7.1, != 2.21.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local path) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`helm_release.this`](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) (resource)
- [`kubernetes_namespace.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## spacelift-worker-pool
This component provisions the `WorkerPool` part of the
[Kubernetes Operator](https://docs.spacelift.io/concepts/worker-pools/kubernetes-workers#kubernetes-workers) for
[Spacelift Worker Pools](https://docs.spacelift.io/concepts/worker-pools#kubernetes) into an EKS cluster. You can
provision this component multiple times to create multiple worker pools in a single EKS cluster.
## Usage
:::note
Before provisioning the `eks/spacelift-worker-pool` component, the `eks/spacelift-worker-pool-controller` component
must be provisioned first into an EKS cluster to enable the
[Spacelift Worker Pool Kubernetes Controller](https://docs.spacelift.io/concepts/worker-pools#kubernetes). The
`eks/spacelift-worker-pool-controller` component must be provisioned only once per EKS cluster.
:::
The Spacelift worker needs to pull a Docker image from an ECR repository. It will run the Terraform commands inside the
Docker container. In the Cloud Posse reference architecture, this image is the "infra" or "infrastructure" image derived
from [Geodesic](https://github.com/cloudposse/geodesic). The worker service account needs permission to pull the image
from the ECR repository, and the details of where to find the image are configured in the various `ecr_*` variables.
**Stack Level**: Regional
```yaml
# stacks/catalog/eks/spacelift-worker-pool/defaults.yaml
components:
  terraform:
    eks/spacelift-worker-pool:
      vars:
        enabled: true
        name: "spacelift-worker-pool"
        space_name: root
        # aws_config_file is the path in the Docker container to the AWS_CONFIG_FILE.
        # "/etc/aws-config/aws-config-spacelift" is the usual path in the "infrastructure" image.
        aws_config_file: "/etc/aws-config/aws-config-spacelift"
        spacelift_api_endpoint: "https://yourcompany.app.spacelift.io"
        eks_component_name: "eks/cluster"
        worker_pool_size: 40
        kubernetes_namespace: "spacelift-worker-pool"
        kubernetes_service_account_enabled: true
        kubernetes_service_account_name: "spacelift-worker-pool"
        keep_successful_pods: false
        kubernetes_role_api_groups: [""]
        kubernetes_role_resources: ["*"]
        kubernetes_role_resource_names: null
        kubernetes_role_verbs: ["get", "list"]
        ecr_component_name: ecr
        ecr_environment_name: use1
        ecr_stage_name: artifacts
        ecr_tenant_name: core
        ecr_repo_name: infra
```
## Variables
### Required Variables
`aws_config_file` (`string`) required
The AWS_CONFIG_FILE used by the worker. Can be overridden by `/.spacelift/config.yml`.
`ecr_repo_name` (`string`) required
ECR repository name
`kubernetes_namespace` (`string`) required
Name of the Kubernetes Namespace the Spacelift worker pool is deployed in to
`region` (`string`) required
AWS Region
`spacelift_api_endpoint` (`string`) required
The Spacelift API endpoint URL (e.g. https://example.app.spacelift.io)
### Optional Variables
`aws_profile` (`string`) optional
The AWS_PROFILE used by the worker. If not specified, `"${var.namespace}-identity"` will be used.
Can be overridden by `/.spacelift/config.yml`.
**Default value:** `null`
`ecr_component_name` (`string`) optional
ECR component name
**Default value:** `"ecr"`
`ecr_environment_name` (`string`) optional
The name of the environment where `ecr` is provisioned
**Default value:** `""`
`ecr_stage_name` (`string`) optional
The name of the stage where `ecr` is provisioned
**Default value:** `"artifacts"`
`ecr_tenant_name` (`string`) optional
The name of the tenant where `ecr` is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`grpc_server_resources` optional
Resources for the gRPC server part of the worker pool deployment. The default values are usually sufficient.
**Type:**
```hcl
object({
  requests = optional(object({
    memory = optional(string, "50Mi")
    cpu    = optional(string, "50m")
  }), {})
  limits = optional(object({
    memory = optional(string, "500Mi")
    cpu    = optional(string, "500m")
  }), {})
})
```
**Default value:** `{ }`
`iam_override_policy_documents` (`list(string)`) optional
List of IAM policy documents that are merged together into the exported document with higher precedence.
In merging, statements with non-blank SIDs will override statements with the same SID
from earlier documents in the list and from other "source" documents.
**Default value:** `null`
`iam_permissions_boundary` (`string`) optional
ARN of the policy that is used to set the permissions boundary for the IAM Role
**Default value:** `null`
`iam_source_json_url` (`string`) optional
IAM source JSON policy to download
**Default value:** `null`
`iam_source_policy_documents` (`list(string)`) optional
List of IAM policy documents that are merged together into the exported document.
Statements defined in `iam_source_policy_documents` must have unique SIDs.
Statements with the same SID as in statements in documents assigned to the
`iam_override_policy_documents` arguments will be overridden.
**Default value:** `null`
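As a sketch of how the two lists interact (the SID and actions below are hypothetical): a statement in `iam_override_policy_documents` with the same SID as one in `iam_source_policy_documents` replaces it in the merged document:

```yaml
vars:
  # Hypothetical policy documents; both statements share the SID "AllowECRPull",
  # so the override document's statement wins in the merged result.
  iam_source_policy_documents:
    - |
      {
        "Version": "2012-10-17",
        "Statement": [{"Sid": "AllowECRPull", "Effect": "Allow",
                       "Action": "ecr:GetDownloadUrlForLayer", "Resource": "*"}]
      }
  iam_override_policy_documents:
    - |
      {
        "Version": "2012-10-17",
        "Statement": [{"Sid": "AllowECRPull", "Effect": "Allow",
                       "Action": ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"],
                       "Resource": "*"}]
      }
```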
`keep_successful_pods` (`bool`) optional
Indicates whether run Pods should automatically be removed as soon
as they complete successfully, or be kept so that they can be inspected later. By default
run Pods are removed as soon as they complete successfully. Failed Pods are not automatically
removed to allow debugging.
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_role_resources` (`list(string)`) optional
List of resources for the Kubernetes Role created for the Kubernetes Service Account
**Default value:**
```hcl
[
"*"
]
```
`kubernetes_role_verbs` (`list(string)`) optional
List of verbs that apply to ALL the ResourceKinds for the Kubernetes Role created for the Kubernetes Service Account
**Default value:**
```hcl
[
"get",
"list"
]
```
`kubernetes_service_account_name` (`string`) optional
Kubernetes service account name
**Default value:** `null`
`space_name` (`string`) optional
The name of the Spacelift Space to create the worker pool in
**Default value:** `"root"`
`worker_pool_description` (`string`) optional
Spacelift worker pool description. The default dynamically includes EKS cluster ID and Spacelift Space name.
**Default value:** `null`
`worker_pool_size` (`number`) optional
Worker pool size. The number of workers registered with Spacelift.
**Default value:** `1`
`worker_spec` optional
Configuration for the Workers in the worker pool
**Type:**
```hcl
object({
  tmpfs_enabled = optional(bool, false)
  resources = optional(object({
    limits = optional(object({
      cpu               = optional(string, "1")
      memory            = optional(string, "4500Mi")
      ephemeral-storage = optional(string, "2G")
    }), {})
    requests = optional(object({
      cpu               = optional(string, "750m")
      memory            = optional(string, "4Gi")
      ephemeral-storage = optional(string, "1G")
    }), {})
  }), {})
  annotations   = optional(map(string), {})
  node_selector = optional(map(string), {})
  tolerations = optional(list(object({
    key                = optional(string)
    operator           = optional(string)
    value              = optional(string)
    effect             = optional(string)
    toleration_seconds = optional(number)
  })), [])
  # activeDeadlineSeconds defines the length of time in seconds before which the Pod will
  # be marked as failed. This can be used to set a time limit for your runs.
  active_deadline_seconds          = optional(number, 4200) # 4200 seconds = 70 minutes
  termination_grace_period_seconds = optional(number, 50)
})
```
**Default value:** `{ }`
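For example (the node label and taint names below are hypothetical), `worker_spec` can pin workers to a dedicated node group and tighten the run deadline; unset fields keep the defaults from the type above:

```yaml
vars:
  worker_spec:
    node_selector:
      workload: spacelift        # hypothetical node label
    tolerations:
      - key: "workload"          # hypothetical taint
        operator: "Equal"
        value: "spacelift"
        effect: "NoSchedule"
    active_deadline_seconds: 7200  # mark runs as failed after 2 hours
```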
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`service_account_name`
Kubernetes Service Account name
`service_account_namespace`
Kubernetes Service Account namespace
`service_account_policy_arn`
IAM policy ARN
`service_account_policy_id`
IAM policy ID
`service_account_policy_name`
IAM policy name
`service_account_role_arn`
IAM role ARN
`service_account_role_name`
IAM role name
`service_account_role_unique_id`
IAM role unique ID
`spacelift_worker_pool_manifest`
Spacelift worker pool Kubernetes manifest
`worker_pool_id`
Spacelift worker pool ID
`worker_pool_name`
Spacelift worker pool name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.18.1, != 2.21.0`
- `spacelift`, version: `>= 0.1.2`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `kubernetes`, version: `>= 2.18.1, != 2.21.0`
- `spacelift`, version: `>= 0.1.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`ecr` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks_iam_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`eks_iam_role` | 2.2.1 | [`cloudposse/eks-iam-role/aws`](https://registry.terraform.io/modules/cloudposse/eks-iam-role/aws/2.2.1) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local module) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_manifest.spacelift_worker_pool`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/manifest) (resource)
- [`kubernetes_role_binding_v1.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role_binding_v1) (resource)
- [`kubernetes_role_v1.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role_v1) (resource)
- [`kubernetes_secret.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret) (resource)
- [`kubernetes_service_account_v1.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service_account_v1) (resource)
- [`spacelift_worker_pool.default`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/worker_pool) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_ssm_parameter.spacelift_key_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.spacelift_key_secret`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`spacelift_spaces.default`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/data-sources/spaces) (data source)
---
## spacelift-worker-pool-controller
This component provisions the controller part of the
[Kubernetes Operator](https://docs.spacelift.io/concepts/worker-pools/kubernetes-workers#kubernetes-workers) for
[Spacelift Worker Pools](https://docs.spacelift.io/concepts/worker-pools#kubernetes) into an EKS cluster. It must be
installed in the cluster before installing the `eks/spacelift-worker-pool` component.
The `eks/spacelift-worker-pool-controller` component must be provisioned only once per EKS cluster. You can deploy the
`eks/spacelift-worker-pool` component multiple times.
## Usage
**Stack Level**: Regional
```yaml
# stacks/catalog/eks/spacelift-worker-pool-controller/defaults.yaml
components:
terraform:
eks/spacelift-worker-pool-controller:
vars:
enabled: true
name: "spacelift-controller"
eks_component_name: eks/cluster
# https://github.com/spacelift-io/spacelift-helm-charts/tree/main/spacelift-workerpool-controller
# https://docs.spacelift.io/concepts/worker-pools#kubernetes
chart: "spacelift-workerpool-controller"
chart_repository: "https://downloads.spacelift.io/helm"
chart_version: "0.19.0"
chart_description: "Helm chart for deploying Spacelift worker pool controller and WorkerPool CRD"
create_namespace_with_kubernetes: true
kubernetes_namespace: "spacelift-worker-pool"
timeout: 180
cleanup_on_fail: true
atomic: true
wait: true
chart_values: {}
```
## Variables
### Required Variables
`chart` (`string`) required
Chart name to be installed. The chart name can be a local path, a URL to a chart, or the name of the chart if `repository` is specified. It is also possible to use the `<repository>/<chart>` format here if you are running Terraform on a system that the repository has been added to with `helm repo add`, but this is not recommended.
`chart_repository` (`string`) required
Repository URL where the requested chart is located.
`region` (`string`) required
AWS Region
### Optional Variables
`atomic` (`bool`) optional
If set, the installation process purges the chart on failure. The `wait` flag is set automatically when `atomic` is used
**Default value:** `true`
`chart_description` (`string`) optional
Set release description attribute (visible in the history).
**Default value:** `null`
`chart_values` (`any`) optional
Additional values to yamlencode as `helm_release` values
**Default value:** `{ }`
`chart_version` (`string`) optional
Specify the exact chart version to install. If this is not specified, the latest version is installed
**Default value:** `null`
`cleanup_on_fail` (`bool`) optional
Allow deletion of new resources created in this upgrade when upgrade fails
**Default value:** `true`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`kubernetes_namespace` (`string`) optional
Name of the Kubernetes Namespace this pod is deployed into
**Default value:** `null`
`timeout` (`number`) optional
Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to `300` seconds
**Default value:** `null`
`wait` (`bool`) optional
Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as `timeout`
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`spacelift_worker_pool_controller_metadata`
Block status of the deployed Spacelift worker pool Kubernetes controller
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.18.1, != 2.21.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local module) | n/a
`spacelift_worker_pool_controller` | 0.10.1 | [`cloudposse/helm-release/aws`](https://registry.terraform.io/modules/cloudposse/helm-release/aws/0.10.1) | Deploy Spacelift worker pool Kubernetes controller Helm chart https://docs.spacelift.io/concepts/worker-pools#installation https://github.com/spacelift-io/spacelift-helm-charts/tree/main/spacelift-workerpool-controller https://github.com/spacelift-io/spacelift-helm-charts/blob/main/spacelift-workerpool-controller/values.yaml
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## storage-class
This component is responsible for provisioning `StorageClasses` in an EKS cluster. See the list of guides and references
linked at the bottom of this README for more information.
A StorageClass provides part of the configuration for a PersistentVolumeClaim, which copies the configuration when it is
created. Thus, you can delete a StorageClass without affecting existing PersistentVolumeClaims, and changes to a
StorageClass do not propagate to existing PersistentVolumeClaims.
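As an illustrative sketch (the claim and class names are hypothetical), a PersistentVolumeClaim selects a StorageClass by name, and the class's parameters are copied into the provisioned volume at creation time:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3 # a class created by this component; omit to use the cluster default
  resources:
    requests:
      storage: 10Gi
```

Because the configuration is copied, later edits to the `gp3` StorageClass leave this claim's volume unchanged.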
## Usage
**Stack Level**: Regional, per cluster
This component can create storage classes backed by EBS or EFS, and is intended to be used with the corresponding EKS
add-ons `aws-ebs-csi-driver` and `aws-efs-csi-driver` respectively. In the case of EFS, this component also requires
that you have provisioned an EFS filesystem in the same region as your cluster, and expects you have used the `efs`
(previously `eks/efs`) component to do so. The EFS storage classes will get the file system ID from the EFS component's
output.
### Note: Default Storage Class
Exactly one StorageClass can be designated as the default StorageClass for a cluster. This default StorageClass is then
used by PersistentVolumeClaims that do not specify a storage class.
Prior to Kubernetes 1.26, if more than one StorageClass is marked as default, a PersistentVolumeClaim without
`storageClassName` explicitly specified cannot be created. In Kubernetes 1.26 and later, if more than one StorageClass
is marked as default, the last one created will be used, which means you can get by with just ignoring the default "gp2"
StorageClass that EKS creates for you.
EKS always creates a default storage class for the cluster, typically an EBS backed class named `gp2`. Find out what the
default storage class is for your cluster by running this command:
```bash
# You only need to run `set-cluster` when you are changing target clusters
set-cluster admin # replace admin with other role name if desired
kubectl get storageclass
```
This will list the available storage classes, with the default one marked with `(default)` next to its name.
If you want to change the default, you can unset the existing default manually, like this:
```bash
SC_NAME=gp2 # Replace with the name of the storage class you want to unset as default
# You only need to run `set-cluster` when you are changing target clusters
set-cluster admin # replace admin with other role name if desired
kubectl patch storageclass $SC_NAME -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
Or you can import the existing default storage class into Terraform and manage or delete it entirely, like this:
```bash
SC_NAME=gp2 # Replace with the name of the storage class you want to unset as default
atmos terraform import eks/storage-class 'kubernetes_storage_class_v1.ebs["'${SC_NAME}'"]' $SC_NAME -s=core-usw2-dev
```
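Conversely, you can mark a class as the default by setting the same annotation to `"true"` (shown for a hypothetical `gp3` class; when the class is managed by this component, prefer setting `make_default_storage_class: true` instead):

```bash
SC_NAME=gp3 # Replace with the name of the storage class you want to set as default
# You only need to run `set-cluster` when you are changing target clusters
set-cluster admin # replace admin with other role name if desired
kubectl patch storageclass $SC_NAME -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```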
View the parameters of a storage class by running this command:
```bash
SC_NAME=gp2 # Replace with the name of the storage class you want to view
# You only need to run `set-cluster` when you are changing target clusters
set-cluster admin # replace admin with other role name if desired
kubectl get storageclass $SC_NAME -o yaml
```
You can then match that configuration, except that you cannot omit `allow_volume_expansion`.
```yaml
ebs_storage_classes:
gp2:
make_default_storage_class: true
include_tags: false
# Preserve values originally set by eks/cluster.
# Set to "" to omit.
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
encrypted: ""
```
Here's an example snippet for how to use this component.
```yaml
eks/storage-class:
vars:
ebs_storage_classes:
gp2:
make_default_storage_class: false
include_tags: false
# Preserve values originally set by eks/cluster.
# Set to "" to omit.
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
encrypted: ""
gp3:
make_default_storage_class: true
parameters:
type: gp3
efs_storage_classes:
efs-sc:
make_default_storage_class: false
efs_component_name: "efs" # Replace with the name of the EFS component, previously "eks/efs"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region.
### Optional Variables
`ebs_storage_classes` optional
A map of storage class name to EBS parameters to create
**Type:**
```hcl
map(object({
enabled = optional(bool, true)
make_default_storage_class = optional(bool, false)
include_tags = optional(bool, true) # If true, StorageClass will set our tags on created EBS volumes
labels = optional(map(string), null)
reclaim_policy = optional(string, "Delete")
volume_binding_mode = optional(string, "WaitForFirstConsumer")
mount_options = optional(list(string), null)
# Allowed topologies are poorly documented, and poorly implemented.
# According to the API spec https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#storageclass-v1-storage-k8s-io
# it should be a list of objects with a `matchLabelExpressions` key, which is a list of objects with `key` and `values` keys.
# However, the Terraform resource only allows a single object in a matchLabelExpressions block, not a list,
# the EBS driver appears to only allow a single matchLabelExpressions block, and it is entirely unclear
# what should happen if either of the lists has more than one element.
# So we simplify it here to be singletons, not lists, and allow for a future change to the resource to support lists,
# and a future replacement for this flattened object which can maintain backward compatibility.
allowed_topologies_match_label_expressions = optional(object({
key = optional(string, "topology.ebs.csi.aws.com/zone")
values = list(string)
}), null)
allow_volume_expansion = optional(bool, true)
# parameters, see https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md
parameters = object({
fstype = optional(string, "ext4") # "csi.storage.k8s.io/fstype"
type = optional(string, "gp3")
iopsPerGB = optional(string, null)
allowAutoIOPSPerGBIncrease = optional(string, null) # "true" or "false"
iops = optional(string, null)
throughput = optional(string, null)
encrypted = optional(string, "true")
kmsKeyId = optional(string, null) # ARN of the KMS key to use for encryption. If not specified, the default key is used.
blockExpress = optional(string, null) # "true" or "false"
blockSize = optional(string, null)
})
provisioner = optional(string, "ebs.csi.aws.com")
# TODO: support tags
# https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/tagging.md
}))
```
**Default value:** `{ }`
`efs_storage_classes` optional
A map of storage class name to EFS parameters to create
**Type:**
```hcl
map(object({
enabled = optional(bool, true)
make_default_storage_class = optional(bool, false)
labels = optional(map(string), null)
efs_component_name = optional(string, "eks/efs")
reclaim_policy = optional(string, "Delete")
volume_binding_mode = optional(string, "Immediate")
# Mount options are poorly documented.
# TLS is now the default and need not be specified. https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/docs#encryption-in-transit
# Other options include `lookupcache` and `iam`.
mount_options = optional(list(string), null)
parameters = optional(object({
basePath = optional(string, "/efs_controller")
directoryPerms = optional(string, "700")
provisioningMode = optional(string, "efs-ap")
gidRangeStart = optional(string, null)
gidRangeEnd = optional(string, null)
uid = optional(string, null)
gid = optional(string, null)
# Support for cross-account EFS mounts
# See https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/cross_account_mount
# and for gritty details on secrets: https://kubernetes-csi.github.io/docs/secrets-and-credentials-storage-class.html
az = optional(string, null)
provisioner-secret-name = optional(string, null) # "csi.storage.k8s.io/provisioner-secret-name"
provisioner-secret-namespace = optional(string, null) # "csi.storage.k8s.io/provisioner-secret-namespace"
}), {})
provisioner = optional(string, "efs.csi.aws.com")
}))
```
**Default value:** `{ }`
`eks_component_name` (`string`) optional
The name of the EKS component for the cluster in which to create the storage classes
**Default value:** `"eks/cluster"`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for helm_release so the full diff of what is changing can be seen in the plan
**Default value:** `false`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_role_arn_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`
**Default value:** `true`
`kubeconfig_context` (`string`) optional
Context to choose from the Kubernetes config file.
If supplied, `kubeconfig_context_format` will be ignored.
**Default value:** `""`
`kubeconfig_context_format` (`string`) optional
A format string to use for creating the `kubectl` context name when
`kubeconfig_file_enabled` is `true` and `kubeconfig_context` is not supplied.
Must include a single `%s` which will be replaced with the cluster name.
**Default value:** `""`
`kubeconfig_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
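For example, to include only the `namespace` and `name` labels as tags in the `tags` output (a minimal sketch):

```hcl
# Only the "namespace" and "name" labels are emitted as tags
labels_as_tags = ["namespace", "name"]
```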
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`storage_classes`
Storage classes created by this module
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `helm`, version: `>= 2.0.0, < 3.0.0`
- `kubernetes`, version: `>= 2.22.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `kubernetes`, version: `>= 2.22.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`efs` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_storage_class_v1.ebs`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/storage_class_v1) (resource)
- [`kubernetes_storage_class_v1.efs`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/storage_class_v1) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
---
## tailscale
This component provisions the Tailscale Kubernetes operator on an EKS cluster so that devices on your tailnet can reach cluster and VPC resources.
## Usage
**Stack Level**: Regional
Use this in the catalog or use these variables to override the catalog values.
```yaml
components:
terraform:
eks/tailscale:
vars:
enabled: true
name: tailscale
create_namespace: true
kubernetes_namespace: "tailscale"
image_repo: tailscale/k8s-operator
image_tag: unstable
```
## Variables
### Required Variables
`kubernetes_namespace` (`string`) required
The namespace to install the release into.
`region` (`string`) required
AWS Region
### Optional Variables
`chart_values` (`any`) optional
Additional map values to `yamlencode` as `helm_release` values.
**Default value:** `{ }`
`create_namespace` (`bool`) optional
Create the namespace if it does not yet exist. Defaults to `false`.
**Default value:** `false`
`deployment_name` (`string`) optional
Name of the tailscale deployment, defaults to `tailscale` if this is null
**Default value:** `null`
`eks_component_name` (`string`) optional
The name of the eks component
**Default value:** `"eks/cluster"`
`env` (`map(string)`) optional
Map of ENV vars in the format `key=value`. These ENV vars will be set in the `utils` provider before executing the data source
**Default value:** `null`
`helm_manifest_experiment_enabled` (`bool`) optional
Enable storing of the rendered manifest for `helm_release` so the full diff of what is changing can be seen in the plan
**Default value:** `true`
`image_repo` (`string`) optional
Image repository for the deployment
**Default value:** `"ghcr.io/tailscale/tailscale"`
`image_tag` (`string`) optional
Image Tag for the deployment.
**Default value:** `"latest"`
`import_profile_name` (`string`) optional
AWS Profile name to use when importing a resource
**Default value:** `null`
`import_role_arn` (`string`) optional
IAM Role ARN to use when importing a resource
**Default value:** `null`
`kube_data_auth_enabled` (`bool`) optional
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
**Default value:** `false`
`kube_exec_auth_aws_profile` (`string`) optional
The AWS config profile for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_aws_profile_enabled` (`bool`) optional
If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`
**Default value:** `false`
`kube_exec_auth_enabled` (`bool`) optional
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
**Default value:** `true`
`kube_exec_auth_role_arn` (`string`) optional
The role ARN for `aws eks get-token` to use
**Default value:** `""`
`kube_exec_auth_api_version` (`string`) optional
The Kubernetes API version of the credentials returned by the `exec` auth plugin
**Default value:** `"client.authentication.k8s.io/v1beta1"`
`kubeconfig_file` (`string`) optional
The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`
**Default value:** `""`
`kubeconfig_file_enabled` (`bool`) optional
If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster
**Default value:** `false`
`routes` (`list(string)`) optional
List of CIDR Ranges or IPs to allow Tailscale to connect to
**Default value:** `[ ]`
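Combining the authentication and routing options above, a hedged stack sketch might look like the following (the CIDR range is a placeholder, not a value from this document):

```yaml
components:
  terraform:
    eks/tailscale:
      vars:
        enabled: true
        kubernetes_namespace: "tailscale"
        create_namespace: true
        # Use the default `aws eks get-token` exec authentication
        kube_exec_auth_enabled: true
        # Advertise this VPC CIDR range to the tailnet (placeholder value)
        routes:
          - "10.0.0.0/16"
```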
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
```hcl
{
  format = string
  labels = list(string)
}
```
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`deployment`
Tailscale operator deployment Kubernetes resource
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.7.1`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `kubernetes`, version: `>= 2.7.1`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`store_read` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`kubernetes_cluster_role.tailscale_operator`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/cluster_role) (resource)
- [`kubernetes_cluster_role_binding.tailscale_operator`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/cluster_role_binding) (resource)
- [`kubernetes_deployment.operator`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment) (resource)
- [`kubernetes_namespace.default`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) (resource)
- [`kubernetes_role.operator`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role) (resource)
- [`kubernetes_role.proxies`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role) (resource)
- [`kubernetes_role_binding.operator`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role_binding) (resource)
- [`kubernetes_role_binding.proxies`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role_binding) (resource)
- [`kubernetes_secret.operator_oauth`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret) (resource)
- [`kubernetes_service_account.operator`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service_account) (resource)
- [`kubernetes_service_account.proxies`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service_account) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_eks_cluster.kubernetes`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster) (data source)
- [`aws_eks_cluster_auth.eks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) (data source)
- [`aws_subnet.vpc_subnets`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnet) (data source)
---
## vertical-pod-autoscaler
## Usage
**Stack Level**: Regional or Global
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
vertical-pod-autoscaler:
vars:
enabled: true
```
## Variables
### Required Variables
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
```hcl
{
  format = string
  labels = list(string)
}
```
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`mock`
Mock output example for the Cloud Posse Terraform component template
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## elasticache-redis
This component provisions AWS [ElastiCache Redis](https://aws.amazon.com/elasticache/redis/) clusters.
The `engine` can either be `redis` or `valkey`. For more information, see
[Why AWS supports Valkey](https://aws.amazon.com/blogs/opensource/why-aws-supports-valkey/).
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
`stacks/catalog/elasticache/elasticache-redis/defaults.yaml` file (default settings for all Redis clusters):
```yaml
components:
terraform:
elasticache-redis:
vars:
enabled: true
name: "elasticache-redis"
family: redis7.x
egress_cidr_blocks: ["0.0.0.0/0"]
port: 6379
at_rest_encryption_enabled: true
transit_encryption_enabled: false
apply_immediately: false
automatic_failover_enabled: false
cloudwatch_metric_alarms_enabled: false
snapshot_retention_limit: 1
snapshot_window: "06:00-07:00"
maintenance_window: "tue:08:00-tue:09:00"
# Global defaults for all redis_clusters (can be overridden per cluster)
engine: "redis"
engine_version: "7.0"
instance_type: cache.t4g.small
num_replicas: 1
num_shards: 0
replicas_per_shard: 0
create_parameter_group: true
parameters: []
redis_clusters:
redis-main:
instance_type: cache.t4g.small
engine_version: "7.0"
parameters:
- name: notify-keyspace-events
value: "lK"
```
`stacks/org/ou/account/region.yaml` file (imports defaults and overrides per-cluster settings):
```yaml
import:
- catalog/elasticache/elasticache-redis/defaults.yaml
components:
terraform:
elasticache-redis:
vars:
enabled: true
redis_clusters:
redis-main:
instance_type: cache.t4g.small
# Per-cluster overrides of the global defaults
engine_version: "7.1" # override global default of 7.0
num_replicas: 2 # override global default of 1
num_shards: 3 # override global default of 0 (enables cluster mode)
replicas_per_shard: 1 # override global default of 0
parameters:
- name: notify-keyspace-events
value: lK
```
Alternatively, if any per-cluster defaults are not covered by component-level variables,
use [YAML anchors](https://yaml.org/spec/1.2.2/#3222-anchors-and-aliases) to define shared
values once and merge them into each cluster entry:
```yaml
# stacks/catalog/elasticache/elasticache-redis/defaults.yaml
anchors:
default_redis: &default_redis
engine: "redis"
instance_type: cache.t4g.small
num_replicas: 1
num_shards: 0
replicas_per_shard: 0
components:
terraform:
elasticache-redis:
vars:
enabled: true
name: "elasticache-redis"
family: redis7.x
port: 6379
at_rest_encryption_enabled: true
transit_encryption_enabled: false
apply_immediately: false
automatic_failover_enabled: false
cloudwatch_metric_alarms_enabled: false
snapshot_retention_limit: 1
# Global default engine version for all clusters (can be overridden per cluster)
engine_version: "7.0"
redis_clusters:
redis-main:
<<: *default_redis # merge anchor defaults
num_replicas: 2 # override anchor value
redis-valkey:
<<: *default_redis
engine: "valkey" # override engine to valkey
num_shards: 3 # enable cluster mode
replicas_per_shard: 1
redis-cache:
<<: *default_redis # all anchor defaults apply
```
## Variables
### Required Variables
### Optional Variables
`additional_security_group_rules` (`list(any)`) optional
A list of Security Group rule objects to add to the created security group, in addition to the ones this module normally creates.
**Default value:** `[ ]`
`allow_all_egress` (`bool`) optional
If `true`, the created security group will allow egress on all ports and protocols to all IP addresses.
If this is false and no egress rules are otherwise specified, then no egress will be allowed.
**Default value:** `true`
`allow_ingress_from_this_vpc` (`bool`) optional
If set to `true`, allow ingress from the VPC CIDR for this account
**Default value:** `true`
List of stages to pull VPC ingress cidr and add to security group
**Default value:** `[ ]`
`auth_token_enabled` (`bool`) optional
Enable auth token
**Default value:** `true`
`auth_token_update_strategy` (`string`) optional
Strategy to use when updating the auth_token. Valid values are `SET`, `ROTATE`, and `DELETE`. Defaults to `ROTATE`.
**Default value:** `"ROTATE"`
`auto_minor_version_upgrade` (`bool`) optional
Specifies whether minor version engine upgrades will be applied automatically to the underlying Cache Cluster instances during the maintenance window. Only supported if the engine version is 6 or higher.
**Default value:** `false`
`availability_zones` (`list(string)`) optional
Availability zone IDs
**Default value:** `[ ]`
`create_parameter_group` (`bool`) optional
Default setting for whether a new parameter group should be created for all Redis clusters. Set to false to use an existing parameter group. Can be overridden per cluster in redis_clusters.
**Default value:** `true`
`data_tiering_enabled` (`bool`) optional
Enables data tiering. Data tiering is only supported for replication groups using the r6gd node type.
**Default value:** `false`
`description` (`string`) optional
Default description for all Redis replication groups. Can be overridden per cluster in redis_clusters.
**Default value:** `null`
Description for the security group rule allowing egress to the CIDR blocks in `egress_cidr_blocks`. Only used when `allow_all_egress` is `false`.
**Default value:** `"Selectively allow outbound traffic"`
`eks_component_names` (`set(string)`) optional
The names of the eks components
**Default value:** `[ ]`
`eks_security_group_enabled` (`bool`) optional
Use the eks default security group
**Default value:** `false`
`elasticache_subnet_group_name` (`string`) optional
Subnet group name for the ElastiCache instance
**Default value:** `""`
`engine` (`string`) optional
Default cache engine for all Redis clusters. Valid values: `redis` or `valkey`. Can be overridden per cluster in redis_clusters.
**Default value:** `"redis"`
`engine_version` (`string`) optional
Default engine version for all Redis clusters (e.g. `7.0`). Can be overridden per cluster in redis_clusters.
**Default value:** `null`
`final_snapshot_identifier` (`string`) optional
Default name of the final snapshot to create before deleting all Redis clusters. If null, no final snapshot is created. Can be overridden per cluster in redis_clusters.
**Default value:** `null`
`global_replication_group_id` (`string`) optional
The ID of the global replication group to which this replication group should belong. If this parameter is specified, the replication group is added to the specified global replication group as a secondary replication group; otherwise, the replication group is not part of any global replication group. If global_replication_group_id is set, the num_node_groups parameter cannot be set.
**Default value:** `null`
`ingress_cidr_blocks` (`list(string)`) optional
CIDR blocks for permitted ingress
**Default value:** `[ ]`
Description for the security group rule allowing ingress from the CIDR blocks in `ingress_cidr_blocks`.
**Default value:** `"Selectively allow inbound traffic"`
`instance_type` (`string`) optional
Default instance type for all Redis clusters. Can be overridden per cluster in redis_clusters.
**Default value:** `null`
`kms_key_id` (`string`) optional
The ARN of the key that you wish to use if encrypting at rest. If not supplied, uses service managed encryption. `at_rest_encryption_enabled` must be set to `true`
**Default value:** `null`
`log_delivery_configuration` (`list(map(any))`) optional
The `log_delivery_configuration` block allows the streaming of Redis SLOWLOG or Redis Engine Log to CloudWatch Logs or Kinesis Data Firehose. Max of 2 blocks.
**Default value:** `[ ]`
`maintenance_window` (`string`) optional
Maintenance window. Format: ddd:hh:mm-ddd:hh:mm (UTC). Defaults to null (AWS chooses the window).
**Default value:** `null`
`multi_az_enabled` (`bool`) optional
Multi AZ (Automatic Failover must also be enabled. If Cluster Mode is enabled, Multi AZ is on by default, and this setting is ignored)
**Default value:** `false`
`network_type` (`string`) optional
The network type of the cluster. Valid values: ipv4, ipv6, dual_stack.
**Default value:** `"ipv4"`
`notification_topic_arn` (`string`) optional
Notification topic arn
**Default value:** `""`
`num_replicas` (`number`) optional
Default number of replicas in the replica set for all Redis clusters. Can be overridden per cluster in redis_clusters.
**Default value:** `1`
`num_shards` (`number`) optional
Default number of shards (node groups) for Redis clusters. Value > 0 enables cluster mode. Can be overridden per cluster in redis_clusters.
**Default value:** `0`
`ok_actions` (`list(string)`) optional
The list of actions to execute when this alarm transitions into an OK state from any other state. Each action is specified as an Amazon Resource Name (ARN)
**Default value:** `[ ]`
`parameter_group_description` (`string`) optional
Description of the parameter group. If not provided, defaults to "Managed by Terraform"
**Default value:** `null`
`parameter_group_name` (`string`) optional
Default override parameter group name for all Redis clusters. Can be overridden per cluster in redis_clusters.
**Default value:** `null`
`parameters` optional
Default list of Redis parameters to configure for all clusters. Can be overridden per cluster in redis_clusters.
**Type:**
```hcl
list(object({
name = string
value = string
}))
```
**Default value:** `[ ]`
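As an illustrative sketch (the parameter names and values below are examples, not defaults), the list can be set in a stack like so:

```yaml
parameters:
  - name: maxmemory-policy
    value: allkeys-lru
  - name: notify-keyspace-events
    value: lK
```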
`replicas_per_shard` (`number`) optional
Default number of replica nodes per shard for Redis clusters. Valid values are 0 to 5. Can be overridden per cluster in redis_clusters.
**Default value:** `0`
`replication_group_id` (`string`) optional
Default replication group ID for all Redis clusters. Must be 1-20 alphanumeric characters or hyphens, start with a letter, and not end with or contain consecutive hyphens. Can be overridden per cluster in redis_clusters.
**Default value:** `""`
The usage limits for the serverless cache. Expected keys are `data_storage` (with `maximum`, `minimum`, `unit`) and `ecpu_per_second` (with `maximum`, `minimum`).
**Default value:** `{ }`
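The variable name for this input is elided above; assuming the keys described, the expected shape of the map is (the limit values below are placeholders):

```yaml
serverless_cache_usage_limits:
  data_storage:
    minimum: 1
    maximum: 10
    unit: GB
  ecpu_per_second:
    minimum: 1000
    maximum: 5000
```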
`serverless_enabled` (`bool`) optional
Flag to enable/disable creation of a serverless redis cluster
**Default value:** `false`
The list of ARN(s) of the snapshot that the new serverless cache will be created from. Available for Redis only.
**Default value:** `[ ]`
`serverless_snapshot_time` (`string`) optional
The daily time (in UTC, format HH:MM) that snapshots will be created from the serverless cache.
**Default value:** `"06:00"`
`serverless_user_group_id` (`string`) optional
User Group ID to associate with the serverless replication group
**Default value:** `null`
`snapshot_arns` (`list(string)`) optional
Default list of ARNs of Redis RDB snapshot files in S3 to restore into all new clusters. Can be overridden per cluster in redis_clusters.
**Default value:** `[ ]`
`snapshot_name` (`string`) optional
Default name of a snapshot to restore into all new Redis clusters. Changing this forces a new resource. Can be overridden per cluster in redis_clusters.
**Default value:** `null`
`snapshot_retention_limit` (`number`) optional
The number of days for which ElastiCache will retain automatic cache cluster snapshots before deleting them.
**Default value:** `0`
`snapshot_window` (`string`) optional
The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot. Format: hh:mm-hh:mm. Defaults to null (AWS chooses the window). Has no effect when snapshot_retention_limit is 0.
**Default value:** `null`
`transit_encryption_mode` (`string`) optional
Transit encryption mode. Valid values are 'preferred' and 'required'
**Default value:** `null`
`user_group_ids` (`list(string)`) optional
User Group IDs to associate with the replication group
**Default value:** `null`
`vpc_component_name` (`string`) optional
The name of a VPC component
**Default value:** `"vpc"`
`vpc_ingress_component_name` (`string`) optional
The name of an ingress VPC component
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
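These context variables are typically not set individually; a downstream label module receives the whole object instead. A minimal sketch of the usual chaining pattern (the module name and attribute are illustrative):

```hcl
module "redis_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  # Inherit namespace, environment, stage, tags, etc. from the parent context
  context    = module.this.context
  attributes = ["redis"]
}
```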
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
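For example, a hypothetical descriptor that renders `namespace-stage` could be configured as:

```hcl
descriptor_formats = {
  # Emits e.g. "eg-prod" in the `descriptors` output
  short = {
    format = "%v-%v"
    labels = ["namespace", "stage"]
  }
}
```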
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`redis_clusters`
Redis cluster objects
`security_group_id`
The security group ID of the ElastiCache Redis cluster
### Required Variables
Name of DNS subdomain to prepend to Route53 zone DNS name
`instance_type` (`string`) required
Elastic cache instance type
`num_replicas` (`number`) required
Number of replicas in replica set
### Optional Variables
`create_parameter_group` (`bool`) optional
Whether new parameter group should be created. Set to false if you want to use existing parameter group
**Default value:** `true`
`description` (`string`) optional
Description of elasticache replication group
**Default value:** `null`
`engine` (`string`) optional
Name of the cache engine to use: either `redis` or `valkey`
**Default value:** `"redis"`
`engine_version` (`string`) optional
Version of the cache engine to use
**Default value:** `"6.0.5"`
`final_snapshot_identifier` (`string`) optional
The name of your final node group (shard) snapshot. ElastiCache creates the snapshot from the primary node in the cluster. If omitted, no final snapshot will be made.
**Default value:** `null`
`kms_alias_name_ssm` (`string`) optional
KMS alias name for SSM
**Default value:** `"alias/aws/ssm"`
`num_shards` (`number`) optional
Number of node groups (shards) for this Redis cluster. Value > 0 sets cluster mode to true. Changing this number will trigger an online resizing operation before other settings modifications
**Default value:** `0`
`parameter_group_name` (`string`) optional
Override the default parameter group name
**Default value:** `null`
`parameters` optional
Parameters to configure cluster parameter group
**Type:**
```hcl
list(object({
name = string
value = string
}))
```
**Default value:** `[ ]`
`replicas_per_shard` (`number`) optional
Number of replica nodes in each node group. Valid values are 0 to 5. Changing this number will force a new resource
**Default value:** `0`
`replication_group_id` (`string`) optional
Replication group ID with the following constraints:
A name must contain from 1 to 20 alphanumeric characters or hyphens.
The first character must be a letter.
A name cannot end with a hyphen or contain two consecutive hyphens.
**Default value:** `""`
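A sketch (not taken from this component) of how the stated constraints could be enforced with a Terraform validation block; the regex and error message are illustrative:

```hcl
variable "replication_group_id" {
  type    = string
  default = ""

  validation {
    # Empty string is allowed (the component derives an ID); otherwise:
    # 1-20 chars, alphanumeric or hyphen, starting with a letter,
    # with no trailing hyphen and no consecutive hyphens.
    condition = var.replication_group_id == "" || (
      can(regex("^[a-zA-Z][a-zA-Z0-9-]{0,19}$", var.replication_group_id))
      && !can(regex("--", var.replication_group_id))
      && !can(regex("-$", var.replication_group_id))
    )
    error_message = "replication_group_id must be 1-20 alphanumeric characters or hyphens, start with a letter, and must not end with a hyphen or contain consecutive hyphens."
  }
}
```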
`snapshot_arns` (`list(string)`) optional
A single-element string list containing an Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3. Example: arn:aws:s3:::my_bucket/snapshot1.rdb
**Default value:** `[ ]`
`snapshot_name` (`string`) optional
The name of a snapshot from which to restore data into the new node group. Changing the snapshot_name forces a new resource.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cluster_endpoint`
Redis primary endpoint
`cluster_host`
Redis hostname
`cluster_id`
Redis cluster ID
`cluster_port`
Redis port
`cluster_security_group_id`
Cluster Security Group ID
`cluster_ssm_path_auth_token`
SSM path of Redis auth_token
`transit_encryption_mode`
TLS in-transit encryption mode for Redis cluster
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `random`, version: `>= 3.0`
### Providers
- `random`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`parameter_store_write` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`redis` | 2.0.0 | [`cloudposse/elasticache-redis/aws`](https://registry.terraform.io/modules/cloudposse/elasticache-redis/aws/2.0.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`random_password.auth_token`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
## Data Sources
The following data sources are used by this module:
None
---
## elasticsearch
This component is responsible for provisioning an Elasticsearch cluster with built-in integrations with Kibana and Logstash.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
elasticsearch:
vars:
enabled: true
name: foobar
instance_type: "t3.medium.elasticsearch"
elasticsearch_version: "7.9"
encrypt_at_rest_enabled: true
dedicated_master_enabled: false
elasticsearch_subdomain_name: "es"
kibana_subdomain_name: "kibana"
ebs_volume_size: 40
create_iam_service_linked_role: true
kibana_hostname_enabled: true
domain_hostname_enabled: true
```
## Variables
### Required Variables
`create_iam_service_linked_role` (`bool`) required
Whether to create the `AWSServiceRoleForAmazonElasticsearchService` service-linked role.
Set this to `false` if you already have an ElasticSearch cluster created in the AWS account and `AWSServiceRoleForAmazonElasticsearchService` already exists.
See https://github.com/terraform-providers/terraform-provider-aws/issues/5218 for more information.
`dedicated_master_enabled` (`bool`) required
Indicates whether dedicated master nodes are enabled for the cluster
`domain_hostname_enabled` (`bool`) required
Explicit flag to enable creating a DNS hostname for ES. If `true`, then `var.dns_zone_id` is required.
The name of the environment where the `dns-delegated` component is deployed
**Default value:** `"gbl"`
`elasticsearch_domain_name` (`string`) optional
The name of the Elasticsearch domain. Must be at least 3 and no more than 28 characters long. Valid characters are a-z (lowercase letters), 0-9, and - (hyphen).
**Default value:** `""`
List of actions to allow for the IAM roles, _e.g._ `es:ESHttpGet`, `es:ESHttpPut`, `es:ESHttpPost`
**Default value:**
```hcl
[
"es:ESHttpGet",
"es:ESHttpPut",
"es:ESHttpPost",
"es:ESHttpHead",
"es:Describe*",
"es:List*"
]
```
Whether to enable Elasticsearch log cleanup Lambda
**Default value:** `true`
`elasticsearch_password` (`string`) optional
Password for the elasticsearch user
**Default value:** `""`
`elasticsearch_saml_options` optional
Manages SAML authentication options for an AWS OpenSearch Domain
enabled: Whether to enable SAML authentication for the OpenSearch Domain
entity_id: The entity ID of the IdP
metadata_content: The metadata of the IdP
**Type:**
```hcl
object({
enabled = optional(bool, false)
entity_id = optional(string)
metadata_content = optional(string)
})
```
**Default value:** `{ }`
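A hedged example of enabling SAML authentication (the IdP entity ID and metadata below are placeholders):

```yaml
elasticsearch_saml_options:
  enabled: true
  entity_id: "https://idp.example.com/saml/metadata"
  metadata_content: |
    <EntityDescriptor entityID="https://idp.example.com/saml/metadata">
      ...
    </EntityDescriptor>
```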
`kibana_subdomain_name` (`string`) optional
The name of the subdomain for Kibana in the DNS zone (_e.g._ `kibana`, `ui`, `ui-es`, `search-ui`, `kibana.elasticsearch`)
**Default value:** `null`
Whether to enable node-to-node encryption
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`domain_arn`
ARN of the Elasticsearch domain
`domain_endpoint`
Domain-specific endpoint used to submit index, search, and data upload requests
`domain_hostname`
Elasticsearch domain hostname to submit index, search, and data upload requests
`domain_id`
Unique identifier for the Elasticsearch domain
`domain_name`
Name of the Elasticsearch domain
`elasticsearch_user_iam_role_arn`
The ARN of the IAM role to allow access to Elasticsearch cluster
`elasticsearch_user_iam_role_name`
The name of the IAM role to allow access to Elasticsearch cluster
`kibana_endpoint`
Domain-specific endpoint for Kibana without https scheme
`kibana_hostname`
Kibana hostname
`master_password_ssm_key`
SSM key of Elasticsearch master password
`security_group_id`
Security Group ID to control access to the Elasticsearch domain
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `random`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `random`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`elasticsearch` | 2.1.0 | [`cloudposse/elasticsearch/aws`](https://registry.terraform.io/modules/cloudposse/elasticsearch/aws/2.1.0) | n/a
`elasticsearch_log_cleanup` | 0.16.1 | [`cloudposse/lambda-elasticsearch-cleanup/aws`](https://registry.terraform.io/modules/cloudposse/lambda-elasticsearch-cleanup/aws/0.16.1) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_elasticsearch_domain_saml_options.elasticsearch`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticsearch_domain_saml_options) (resource)
- [`aws_opensearch_domain_saml_options.opensearch`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/opensearch_domain_saml_options) (resource)
- [`aws_ssm_parameter.admin_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.elasticsearch_domain_endpoint`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.elasticsearch_kibana_endpoint`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`random_password.elasticsearch_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
## Data Sources
The following data sources are used by this module:
None
---
## eventbridge
The `eventbridge` component is a Terraform module that defines an Amazon EventBridge (CloudWatch Events) rule. By default, the rule sends matched events to CloudWatch Logs.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
eventbridge/ecs-alerts:
metadata:
component: eventbridge
vars:
name: ecs-faults
enabled: true
cloudwatch_event_rule_description: "ECS failures and warnings"
cloudwatch_event_rule_pattern:
source:
- aws.ecs
detail:
$or:
- eventType:
- WARN
- ERROR
- agentConnected:
- false
- containers:
exitCode:
- anything-but:
- 0
```
## Variables
### Required Variables
`cloudwatch_event_rule_description` (`string`) optional
Description of the CloudWatch Event Rule. If empty, defaults to `module.this.id`
**Default value:** `""`
`cloudwatch_event_rule_pattern` (`any`) optional
Pattern of the CloudWatch Event Rule
**Default value:**
```hcl
{
"source": [
"aws.ec2"
]
}
```
`event_log_retention_in_days` (`number`) optional
Number of days to retain the event logs
**Default value:** `3`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cloudwatch_event_rule_arn`
The ARN of the CloudWatch Event Rule
`cloudwatch_event_rule_name`
The name of the CloudWatch Event Rule
`cloudwatch_logs_log_group_arn`
The ARN of the CloudWatch Log Group
`cloudwatch_logs_log_group_name`
The name of the CloudWatch Log Group
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cloudwatch_event` | 0.9.1 | [`cloudposse/cloudwatch-events/aws`](https://registry.terraform.io/modules/cloudposse/cloudwatch-events/aws/0.9.1) | n/a
`cloudwatch_logs` | 0.6.9 | [`cloudposse/cloudwatch-logs/aws`](https://registry.terraform.io/modules/cloudposse/cloudwatch-logs/aws/0.6.9) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudwatch_log_resource_policy.eventbridge_cloudwatch_logs_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_resource_policy) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.eventbridge_cloudwatch_logs_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## github-action-token-rotator
This component provisions the
[Github Action Token Rotator](https://github.com/cloudposse/terraform-aws-github-action-token-rotator).
It creates an AWS Lambda function that rotates GitHub Action tokens stored in AWS Systems Manager Parameter Store.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component. This is generally deployed once and to the automation account's primary region.
`stacks/catalog/github-action-token-rotator.yaml` file:
```yaml
components:
terraform:
github-action-token-rotator:
vars:
enabled: true
github_org_name: my-org
github_app_installation_id: 11111111
github_app_id: 222222
parameter_store_private_key_path: /github/runners/my-org/privateKey
parameter_store_token_path: /github/runners/my-org/registrationToken
```
Follow the manual steps using the
[guide in the upstream module](https://github.com/cloudposse/terraform-aws-github-action-token-rotator#quick-start)
and use `chamber` to add the secrets to the appropriate stage.
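For example, assuming the GitHub App private key has been saved locally as `github-app-private-key.pem` (a hypothetical filename), the secret for the `parameter_store_private_key_path` above could be written with `chamber` like this:

```shell
# Write the private key under the service matching
# /github/runners/my-org/privateKey; `-` reads the value from stdin.
chamber write github/runners/my-org privateKey - < github-app-private-key.pem
```

The registration token at `parameter_store_token_path` is maintained by the rotator Lambda itself and does not need to be seeded by hand.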
## Variables
### Required Variables
`parameter_store_private_key_path` (`string`) required
Path to read the GitHub App private key from parameter store
`parameter_store_token_path` (`string`) required
Path to store the token in parameter store
`region` (`string`) required
AWS Region
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`github_action_token_rotator`
GitHub action token rotator module outputs.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`github_action_token_rotator` | 0.1.0 | [`cloudposse/github-action-token-rotator/aws`](https://registry.terraform.io/modules/cloudposse/github-action-token-rotator/aws/0.1.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## github-oidc-provider
This component authorizes the GitHub OIDC provider as an identity provider for an AWS account. It is intended to be used with
`aws-teams`, `aws-team-roles`, and/or `github-actions-iam-role.mixin.tf`.
## Usage
**Stack Level**: Global
Here's an example snippet for how to use this component.
- This must be installed in the `identity` account in order to use standard SAML roles with role chaining.
- This must be installed in each individual account where you want to provision a service role for a GitHub action that will be assumed directly by the action.
For security, since this component adds an identity provider, only SuperAdmin can install it.
```yaml
components:
terraform:
github-oidc-provider:
vars:
enabled: true
```
## Configuring the GitHub OIDC Provider
This component adds the GitHub OIDC provider so that GitHub Actions can safely assume roles without storing static credentials in the environment. The details of the GitHub OIDC provider are hardcoded in the component; however, at some point the provider's thumbprint may change, at which point you can use
[get_github_oidc_thumbprint.sh](https://github.com/cloudposse/terraform-aws-components/blob/main/modules/github-oidc-provider/scripts/get_github_oidc_thumbprint.sh)
to get the new thumbprint and add it to the list in `var.thumbprint_list`.
This script returns one of two thumbprints: there are two possible intermediate certificates for the Actions TLS certificate, and GitHub's servers may present either one, so both must be trusted. This is expected behavior when intermediate certificates are cross-signed by the CA. Run the script repeatedly until both values have been retrieved, then add both to `var.thumbprint_list`.
For more, see https://github.blog/changelog/2023-06-27-github-actions-update-on-oidc-integration-with-aws/
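As a sketch of what the bundled script does (the script itself is authoritative), a thumbprint can be computed by fetching the certificate chain from the OIDC token endpoint, taking the last certificate presented, and computing its SHA-1 fingerprint:

```shell
HOST=token.actions.githubusercontent.com
# Print the chain, keep the last certificate, and emit its SHA-1
# fingerprint in the lowercase, colon-free form AWS expects.
openssl s_client -servername "$HOST" -showcerts -connect "$HOST:443" </dev/null 2>/dev/null \
  | awk '/-----BEGIN CERTIFICATE-----/{buf=""} {buf=buf $0 ORS} /-----END CERTIFICATE-----/{last=buf} END{printf "%s", last}' \
  | openssl x509 -noout -fingerprint -sha1 \
  | sed -e 's/.*=//' -e 's/://g' \
  | tr '[:upper:]' '[:lower:]'
```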
## FAQ
### I cannot assume the role from GitHub Actions after deploying
The following error is common when the GitHub workflow is missing the required permissions.
```bash
Error: User: arn:aws:sts::***:assumed-role/acme-core-use1-auto-actions-runner@actions-runner-system/token-file-web-identity is not authorized to perform: sts:TagSession on resource: arn:aws:iam::999999999999:role/acme-plat-use1-dev-gha
```
To use a web identity, GitHub Actions workflows must have the following permissions. See
[GitHub Action documentation for more](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services#adding-permissions-settings).
```yaml
permissions:
id-token: write # This is required for requesting the JWT
contents: read # This is required for actions/checkout
```
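Putting it together, a minimal job that assumes the role via OIDC might look like the following sketch. The role ARN and region are illustrative (the ARN is taken from the error example above), and `aws-actions/configure-aws-credentials` exchanges the JWT for temporary credentials:

```yaml
jobs:
  check-identity:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # required for requesting the JWT
      contents: read  # required for actions/checkout
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::999999999999:role/acme-plat-use1-dev-gha
          aws-region: us-east-1
      - run: aws sts get-caller-identity
```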
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`superadmin` (`bool`) optional
Set `true` if running as the SuperAdmin user
**Default value:** `false`
`thumbprint_list` (`list(string)`) optional
List of OIDC provider certificate thumbprints
**Default value:**
```hcl
[
"6938fd4d98bab03faadb97b34396831e3780aea1",
"1c58a3a8518e8759bf075b76b750d4f2df264fcd"
]
```
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`oidc_provider_arn`
GitHub OIDC provider ARN
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_openid_connect_provider.oidc`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_openid_connect_provider) (resource)
## Data Sources
The following data sources are used by this module:
---
## github-oidc-role
This component is responsible for creating IAM roles for GitHub Actions to assume.
## Usage
**Stack Level**: Global
Here's an example snippet for how to use this component.
```yaml
# stacks/catalog/github-oidc-role/defaults.yaml
components:
terraform:
github-oidc-role/defaults:
metadata:
type: abstract
vars:
enabled: true
name: gha-iam
# Note: inherited lists are not merged, they are replaced
github_actions_allowed_repos:
- MyOrg/* ## allow all repos in MyOrg
```
Example using the predefined `gitops` policy:
```yaml
# stacks/catalog/github-oidc-role/gitops.yaml
import:
- catalog/github-oidc-role/defaults
components:
terraform:
github-oidc-role/gitops:
metadata:
component: github-oidc-role
inherits:
- github-oidc-role/defaults
vars:
enabled: true
# Note: inherited lists are not merged, they are replaced
github_actions_allowed_repos:
- "MyOrg/infrastructure"
attributes: ["gitops"]
iam_policies:
- gitops
gitops_policy_configuration:
s3_bucket_component_name: gitops/s3-bucket
dynamodb_component_name: gitops/dynamodb
```
Example using the predefined `lambda-cicd` policy:
```yaml
# stacks/catalog/github-oidc-role/lambda-cicd.yaml
import:
- catalog/github-oidc-role/defaults
components:
terraform:
github-oidc-role/lambda-cicd:
metadata:
component: github-oidc-role
inherits:
- github-oidc-role/defaults
vars:
enabled: true
github_actions_allowed_repos:
- MyOrg/example-app-on-lambda-with-gha
attributes: ["lambda-cicd"]
iam_policies:
- lambda-cicd
lambda_cicd_policy_configuration:
enable_ssm_access: true
enable_s3_access: true
s3_bucket_component_name: s3-bucket/github-action-artifacts
s3_bucket_environment_name: gbl
s3_bucket_stage_name: artifacts
s3_bucket_tenant_name: core
```
Example using an AWS managed policy and a custom inline policy:
```yaml
# stacks/catalog/github-oidc-role/custom.yaml
import:
- catalog/github-oidc-role/defaults
components:
terraform:
github-oidc-role/custom:
metadata:
component: github-oidc-role
inherits:
- github-oidc-role/defaults
vars:
enabled: true
github_actions_allowed_repos:
- MyOrg/example-app-on-lambda-with-gha
attributes: ["custom"]
iam_policies:
- arn:aws:iam::aws:policy/AdministratorAccess
iam_policy:
- version: "2012-10-17"
statements:
- effect: "Allow"
actions:
- "ec2:*"
resources:
- "*"
```
### Adding Custom Policies
There are two methods for adding custom policies to the IAM role.
1. Through the `iam_policy` input which you can use to add inline policies to the IAM role.
2. By defining policies in Terraform and then attaching them to roles by name.
#### Defining Custom Policies in Terraform
1. Give the policy a unique name, e.g. `docker-publish`. We will use `NAME` as a placeholder for the name in the
instructions below.
2. Create a file in the component directory (i.e. `github-oidc-role`) with the name `policy_NAME.tf`.
3. In that file, conditionally (based on need) create a policy document as follows:
```hcl
locals {
NAME_policy_enabled = contains(var.iam_policies, "NAME")
NAME_policy = local.NAME_policy_enabled ? one(data.aws_iam_policy_document.NAME.*.json) : null
}
data "aws_iam_policy_document" "NAME" {
count = local.NAME_policy_enabled ? 1 : 0
# Define the policy here
}
```
Note that you can also add input variables and outputs to this file if desired. Just make sure that all inputs are
optional.
4. Create a file named `additional-policy-map_override.tf` in the component directory (if it does not already exist).
This is a [terraform override file](https://developer.hashicorp.com/terraform/language/files/override), meaning its
contents will be merged with the main terraform file, and any locals defined in it will override locals defined in
other files. Having your code in this separate override file makes it possible for the component to provide a
placeholder local variable so that it works without customization, while allowing you to customize the component and
still update it without losing your customizations.
5. In that file, redefine the local variable `overridable_additional_custom_policy_map` map as follows:
```hcl
locals {
overridable_additional_custom_policy_map = {
"NAME" = local.NAME_policy
}
}
```
If you have multiple custom policies, add each policy document to the map in this same file, in the form
`NAME = local.NAME_policy`.
6. With that done, you can now attach that policy by adding the name to the `iam_policies` list. For example:
```yaml
iam_policies:
- "arn:aws:iam::aws:policy/job-function/ViewOnlyAccess"
- "NAME"
```
## Variables
### Required Variables
### Optional Variables
`github_actions_allowed_repos` (`list(string)`) optional
A list of the GitHub repositories that are allowed to assume this role from GitHub Actions. For example,
["cloudposse/infra-live"]. Can contain "*" as wildcard.
If org part of repo name is omitted, "cloudposse" will be assumed.
**Default value:** `[ ]`
`gitops_policy_configuration` optional
Configuration for the GitOps IAM Policy, valid keys are
- `s3_bucket_component_name` - Component Name of where to store the TF Plans in S3, defaults to `gitops/s3-bucket`
- `dynamodb_component_name` - Component Name of where to store the TF Plans in Dynamodb, defaults to `gitops/dynamodb`
- `s3_bucket_environment_name` - Environment name for the S3 Bucket, defaults to current environment
- `dynamodb_environment_name` - Environment name for the Dynamodb Table, defaults to current environment
**Type:**
```hcl
object({
s3_bucket_component_name = optional(string, "gitops/s3-bucket")
dynamodb_component_name = optional(string, "gitops/dynamodb")
s3_bucket_environment_name = optional(string)
dynamodb_environment_name = optional(string)
})
```
**Default value:** `{ }`
`iam_policies` (`list(string)`) optional
List of policies to attach to the IAM role, should be either an ARN of an AWS Managed Policy or a name of a custom policy e.g. `gitops`
**Default value:** `[ ]`
`iam_policy` optional
IAM policy as a list of Terraform objects, compatible with the Terraform `aws_iam_policy_document` data source,
except that `source_policy_documents` and `override_policy_documents` are not included.
Use the `iam_source_policy_documents` and `iam_override_policy_documents` inputs for that.
**Type:**
```hcl
list(object({
policy_id = optional(string, null)
version = optional(string, null)
statements = list(object({
sid = optional(string, null)
effect = optional(string, null)
actions = optional(list(string), null)
not_actions = optional(list(string), null)
resources = optional(list(string), null)
not_resources = optional(list(string), null)
conditions = optional(list(object({
test = string
variable = string
values = list(string)
})), [])
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
not_principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
}))
}))
```
**Default value:** `[ ]`
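To make the shape concrete, here is a hedged sketch of supplying `iam_policy` in stack YAML; the sid, actions, and resource ARN are placeholders:

```yaml
iam_policy:
  - version: "2012-10-17"
    statements:
      - sid: "AllowArtifactRead"
        effect: "Allow"
        actions:
          - "s3:GetObject"
        resources:
          - "arn:aws:s3:::example-artifacts/*"
```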
`lambda_cicd_policy_configuration` optional
Configuration for the lambda-cicd policy. The following keys are supported:
- `enable_kms_access` - (bool) - Whether to allow access to KMS. Defaults to false.
- `enable_ssm_access` - (bool) - Whether to allow access to SSM. Defaults to false.
- `enable_s3_access` - (bool) - Whether to allow access to S3. Defaults to false.
- `s3_bucket_component_name` - (string) - The name of the component to use for the S3 bucket. Defaults to `s3-bucket/github-action-artifacts`.
- `s3_bucket_environment_name` - (string) - The name of the environment to use for the S3 bucket. Defaults to the environment of the current module.
- `s3_bucket_tenant_name` - (string) - The name of the tenant to use for the S3 bucket. Defaults to the tenant of the current module.
- `s3_bucket_stage_name` - (string) - The name of the stage to use for the S3 bucket. Defaults to the stage of the current module.
- `enable_lambda_update` - (bool) - Whether to allow access to update lambda functions. Defaults to false.
**Type:**
```hcl
object({
enable_kms_access = optional(bool, false)
enable_ssm_access = optional(bool, false)
enable_s3_access = optional(bool, false)
s3_bucket_component_name = optional(string, "s3-bucket/github-action-artifacts")
s3_bucket_environment_name = optional(string)
s3_bucket_tenant_name = optional(string)
s3_bucket_stage_name = optional(string)
enable_lambda_update = optional(bool, false)
})
```
**Default value:** `{ }`
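For example, a minimal sketch enabling S3 access and Lambda updates, assuming the default artifact bucket component:

```yaml
lambda_cicd_policy_configuration:
  enable_s3_access: true
  enable_lambda_update: true
  s3_bucket_component_name: "s3-bucket/github-action-artifacts"
```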
`max_session_duration` (`number`) optional
Maximum session duration (in seconds). This setting can have a value from 1 hour (3600) to 12 hours (43200).
**Default value:** `3600`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`github_actions_iam_role_arn`
ARN of IAM role for GitHub Actions
`github_actions_iam_role_name`
Name of IAM role for GitHub Actions
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dynamodb` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`gha_assume_role` | latest | [`../account-map/modules/team-assume-role-policy`](https://registry.terraform.io/modules/../account-map/modules/team-assume-role-policy/) | n/a
`iam_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`s3_artifacts_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`s3_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_role.github_actions`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.gitops_iam_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.lambda_cicd_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## github-repository
:lock: Managing GitHub repos in a compliant way just got way easier.
One of the most common requirements for companies operating in regulated spaces is having consistent, enforceable repository configurations. Some teams have hacked together homegrown solutions, while others know it's something they need to do but never quite got to it.
Our `aws-github-repository` component changes that by providing comprehensive GitHub repository management entirely through infrastructure as code.
**Manage repos and all settings declaratively:**
- Repository configuration (visibility, topics, features, merge settings)
- Environments with deployment protection and reviewers
- Variables and secrets pulling from AWS Secrets Manager and Parameter Store
- Branch protection rules and rule sets
- Webhooks, deploy keys, labels, and custom properties
- Team and user permissions
**Import existing repos and bring them under management:**
- Import existing repositories and their configurations
- Bring legacy repos under consistent management
- Maintain existing settings while adding compliance controls
**Define repository archetypes once:**
- Create abstract components for common configurations
- Use Atmos inheritance to automatically apply baseline compliance
- Every new repo inherits your security and compliance standards
- DRY up infrastructure with reusable templates
All powered by Atmos + a GitHub App for secure, auditable rollout across your organization.
## Usage
**Stack Level**: Regional
This component provides comprehensive GitHub repository management with support for advanced features like environments, webhooks, deploy keys, custom properties, and more.
## Basic Usage
Here's a simple example for creating a basic repository:
```yaml
components:
terraform:
my-basic-repo:
vars:
enabled: true
owner: "my-organization"
repository:
name: "my-basic-repo"
description: "A basic repository with standard settings"
homepage_url: "https://github.com/my-organization/my-basic-repo"
visibility: "private"
default_branch: "main"
topics:
- terraform
- github
- infrastructure
teams:
devops: admin
developers: push
variables:
ENVIRONMENT: "production"
REGION: "us-east-1"
secrets:
AWS_ACCESS_KEY_ID: "nacl:dGVzdC1hY2Nlc3Mta2V5LWlkCg=="
DATABASE_URL: "ssm:///my-basic-repo/database-url"
```
## Advanced Usage with Atmos Inheritance and Imports
The component supports Atmos inheritance and imports to create DRY (Don't Repeat Yourself) configurations. This allows you to define common settings once and reuse them across multiple repositories.
:::tip
For complete working examples of inheritance and imports, see the [`/examples`](https://github.com/cloudposse-terraform-components/aws-github-repository/tree/main/github-repository/examples) folder in this repository.
:::
### Creating Abstract Components
First, create abstract components that define common configurations:
```yaml
# catalog/github/repo/defaults.yaml
components:
terraform:
github/repo/defaults:
metadata:
component: github-repository
type: abstract
vars:
enabled: true
# Common team permissions
teams:
devops: admin
# Default repository settings
repository:
homepage_url: "https://github.com/{{ .vars.owner }}/{{ .vars.repository.name }}"
topics:
- terraform
- github
default_branch: "main"
visibility: "private"
# Common features
auto_init: true
gitignore_template: "TeX"
license_template: "GPL-3.0"
# Merge settings
allow_merge_commit: true
allow_squash_merge: true
allow_rebase_merge: true
allow_auto_merge: true
# Branch protection
delete_branch_on_merge: true
web_commit_signoff_required: true
```
### Creating a Master Defaults Component
Combine abstract components into a master defaults component:
```yaml
# catalog/github/defaults.yaml
import:
- catalog/github/repo/*
- catalog/github/environment/*
- catalog/github/ruleset/*
components:
terraform:
github/defaults:
metadata:
type: abstract
inherits:
- github/repo/defaults
- github/ruleset/branch-protection
- github/environment/defaults
```
### Using Inheritance in Your Stacks
Now you can create repositories that inherit from these defaults:
```yaml
# orgs/mycompany/myteam.yaml
import:
- ./_defaults
- catalog/github/defaults
components:
terraform:
# Simple repository inheriting all defaults
my-simple-app:
metadata:
component: github-repository
inherits:
- github/defaults
vars:
repository:
name: my-simple-app
description: "My simple application"
topics:
- application
- nodejs
# Repository with custom overrides
my-custom-app:
metadata:
component: github-repository
inherits:
- github/defaults
vars:
repository:
name: my-custom-app
description: "My custom application"
topics:
- application
- python
- api
# Override default team permissions
teams:
devops: admin
backend: push
frontend: push
# Add custom variables and secrets
variables:
LANGUAGE: "python"
FRAMEWORK: "fastapi"
secrets:
PYTHON_API_KEY: "asm://my-custom-app-api-key"
```
## Secrets and Variables Management
The component supports setting repository and environment secrets and variables.
Secrets and variables can be set using the following methods:
- Raw values (unencrypted strings), e.g. `my-secret-value`
- AWS Secrets Manager (ASM), e.g. `asm://secret-name`
- AWS Systems Manager Parameter Store (SSM), e.g. `ssm:///my/secret/path`
In addition, secrets support base64-encoded values [encrypted](https://docs.github.com/en/rest/guides/encrypting-secrets-for-the-rest-api?apiVersion=2022-11-28)
with the [repository public key](https://docs.github.com/en/rest/actions/secrets?apiVersion=2022-11-28#get-a-repository-public-key).
Such values should be prefixed with `nacl:`, e.g. `nacl:dGVzdC12YWx1ZS0yCg==`.
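For illustration, the four methods side by side in a repository `secrets` map; the names and values here are placeholders:

```yaml
secrets:
  # Raw (unencrypted) value
  PLAIN_TOKEN: "my-secret-value"
  # Fetched from AWS Secrets Manager
  API_KEY: "asm://my-repo-api-key"
  # Fetched from SSM Parameter Store
  DATABASE_URL: "ssm:///my-repo/database-url"
  # Pre-encrypted with the repository public key, base64-encoded
  SIGNING_KEY: "nacl:dGVzdC12YWx1ZS0yCg=="
```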
### Environment-Specific Secrets and Variables
```yaml
environments:
production:
variables:
ENVIRONMENT: "production"
CLUSTER_NAME: "prod-cluster"
LOG_LEVEL: "warn"
secrets:
PROD_DATABASE_URL: "ssm:///my-repo/prod/database-url"
PROD_API_KEY: "asm://my-repo-prod-api-key"
PROD_JWT_SECRET: "nacl:cHJvZC1qd3Qtc2VjcmV0Cg=="
staging:
variables:
ENVIRONMENT: "staging"
CLUSTER_NAME: "staging-cluster"
LOG_LEVEL: "info"
secrets:
STAGING_DATABASE_URL: "ssm:///my-repo/staging/database-url"
STAGING_API_KEY: "asm://my-repo-staging-api-key"
```
## Import Mode
The component supports importing an existing repository and its configuration:
- collaborators
- variables
- environments
- environment variables
- labels
- custom properties values
- autolink references
- deploy keys
Import mode is enabled by setting the `import` input variable to `true`.
```yaml
components:
terraform:
my-imported-repo:
vars:
enabled: true
import: true
owner: "my-organization"
repository:
name: "existing-repo-to-import"
```
The following configurations are not supported for import:
- secrets
- environment secrets
- branch protection policies
- rulesets
## Variables
### Required Variables
Secrets for the repository. If a value is prefixed with `nacl:`, it should be a value encrypted with the GitHub public key, in Base64 format. Read more: https://docs.github.com/en/actions/security-for-github-actions/encrypted-secrets
**Default value:** `{ }`
`teams` (`map(string)`) optional
A map of teams and their permissions for the repository
**Default value:** `{ }`
A map of users and their permissions for the repository
**Default value:** `{ }`
`variables` (`map(string)`) optional
Environment variables for the repository
**Default value:** `{ }`
`webhooks` optional
A map of webhooks to configure for the repository
**Type:**
```hcl
map(object({
active = optional(bool, true)
events = list(string)
url = string
content_type = optional(string, "json")
insecure_ssl = optional(bool, false)
secret = optional(string, null)
}))
```
**Default value:** `{ }`
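As a sketch, a `webhooks` map matching the type above might look like this; the webhook name and URL are placeholders:

```yaml
webhooks:
  ci-notifier:
    url: "https://ci.example.com/github-webhook"
    content_type: "json"
    insecure_ssl: false
    events:
      - "push"
      - "pull_request"
```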
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`collaborators_invitation_ids`
Collaborators invitation IDs
`full_name`
Full name of the created repository
`git_clone_url`
Git clone URL of the created repository
`html_url`
HTML URL of the created repository
`http_clone_url`
HTTP clone URL of the created repository
`node_id`
Node ID of the created repository
`primary_language`
Primary language of the created repository
`repo_id`
Repository ID of the created repository
`rulesets_etags`
Rulesets etags
`rulesets_node_ids`
Rulesets node IDs
`rulesets_rules_ids`
Rulesets rules IDs
`ssh_clone_url`
SSH clone URL of the created repository
`svn_url`
SVN URL of the created repository
`webhooks_urls`
Webhooks URLs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.7.0`
- `aws`, version: `>= 5.0.0`
- `github`, version: `>= 6.6.0`
### Providers
- `aws`, version: `>= 5.0.0`
- `github`, version: `>= 6.6.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`repository` | 1.0.0 | [`cloudposse/repository/github`](https://registry.terraform.io/modules/cloudposse/repository/github/1.0.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_secretsmanager_secret.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret) (data source)
- [`aws_secretsmanager_secret_version.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret_version) (data source)
- [`aws_ssm_parameter.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`github_actions_environment_variables.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/actions_environment_variables) (data source)
- [`github_actions_variables.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/actions_variables) (data source)
- [`github_issue_labels.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/issue_labels) (data source)
- [`github_repository.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/repository) (data source)
- [`github_repository_autolink_references.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/repository_autolink_references) (data source)
- [`github_repository_custom_properties.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/repository_custom_properties) (data source)
- [`github_repository_deploy_keys.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/repository_deploy_keys) (data source)
- [`github_repository_environments.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/repository_environments) (data source)
---
## github-runners
This component is responsible for provisioning EC2 instances for GitHub runners.
:::tip
We also have a similar component based on
[actions-runner-controller](https://github.com/actions-runner-controller/actions-runner-controller) for Kubernetes.
:::
## Requirements
## Configuration
### API Token
Prior to deployment, the API token must exist in SSM.
To generate the token, please follow [these instructions](https://cloudposse.atlassian.net/l/c/N4dH05ud). Once
generated, write the API token to the SSM key store at the following location, within the same AWS account and region
where the GitHub Actions runner pool will reside.
```shell
assume-role
chamber write github/runners/ registration-token ghp_secretstring
```
## Background
### Registration
GitHub Actions self-hosted runners can be scoped to the GitHub Organization, a single repository, or a group of
repositories (GitHub Enterprise only). Upon startup, each runner uses a `REGISTRATION_TOKEN` to call the GitHub API and
register itself with the Organization, Repository, or Runner Group (GitHub Enterprise).
### Running Workflows
Once a self-hosted runner is registered, update your workflow with the `runs-on` attribute to specify that it
should run on a self-hosted runner:
```yaml
name: Test Self Hosted Runners
on:
push:
branches: [main]
jobs:
build:
runs-on: [self-hosted]
```
### Workflow GitHub Permissions (GITHUB_TOKEN)
Each run of a GitHub Actions workflow is assigned a `GITHUB_TOKEN`, which allows the workflow to perform actions
against GitHub itself, such as cloning a repo or updating the Checks API status, and which expires at the end of the
workflow run. The `GITHUB_TOKEN` has two permission "modes": `Read and write permissions` ("Permissive" or "Full
Access") and `Read repository contents permission` ("Restricted" or "Read-Only"). By default, the `GITHUB_TOKEN` is
granted Full Access permissions, but you can change this via the Organization or Repository settings. If you opt for
Read-Only permissions, you can grant or revoke access to specific APIs via the workflow `yaml` file; the full list of
accessible APIs can be found in the
[documentation](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token).
The downside to this permissions model is that any user with write access to the repository can escalate permissions
for the workflow by updating the `yaml` file; however, the APIs available via this token are limited. Most notably, the
`GITHUB_TOKEN` does not have access to the `users`, `repos`, `apps`, `billing`, or `collaborators` APIs, so it cannot
modify sensitive settings or add/remove users from the Organization/Repository.
> Example of using escalated permissions for the entire workflow
```yaml
name: Pull request labeler
on: [ pull_request_target ]
permissions:
contents: read
pull-requests: write
jobs:
triage:
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v2
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
```
> Example of using escalated permissions for a job
```yaml
name: Create issue on commit
on: [ push ]
jobs:
create_commit:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- name: Create issue using REST API
run: |
curl --request POST \
--url https://api.github.com/repos/${{ github.repository }}/issues \
--header 'authorization: Bearer ${{ secrets.GITHUB_TOKEN }}' \
--header 'content-type: application/json' \
--data '{
"title": "Automated issue for commit: ${{ github.sha }}",
"body": "This issue was automatically created by the GitHub Action workflow **${{ github.workflow }}**. \n\n The commit hash was: _${{ github.sha }}_."
}' \
--fail
```
### Pre-Requisites for Using This Component
In order to use this component, you must obtain the `REGISTRATION_TOKEN` mentioned above from your GitHub
Organization or Repository and store it in SSM Parameter Store. In addition, it is recommended that you set the
permissions "mode" for self-hosted runners to Read-Only. Instructions for both are below.
#### Workflow Permissions
1. Browse to
[https://github.com/organizations/\{Org\}/settings/actions](https://github.com/organizations/\{Org\}/settings/actions)
(Organization) or
[https://github.com/\{Org\}/\{Repo\}/settings/actions](https://github.com/\{Org\}/\{Repo\}/settings/actions) (Repository)
2. Set the default permissions for the GITHUB_TOKEN to Read Only
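The same restriction can also be declared per workflow rather than org-wide; a minimal fragment (a top-level key in any workflow file):
```yaml
# Restrict the GITHUB_TOKEN for all jobs in this workflow to read-only
permissions:
  contents: read
```
Individual jobs can then escalate specific scopes as shown in the examples above.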
### Creating Registration Token
:::tip
We highly recommend using a GitHub Application with the github-action-token-rotator module to generate the
Registration Token. This will ensure that the token is rotated and that the token is stored in SSM Parameter Store
encrypted with KMS.
:::
#### GitHub Application
Follow the quickstart with the upstream module,
[cloudposse/terraform-aws-github-action-token-rotator](https://github.com/cloudposse/terraform-aws-github-action-token-rotator#quick-start),
or follow the steps below.
1. Create a new GitHub App
1. Add the following permission:
```diff
# Required Permissions for Repository Runners:
## Repository Permissions
+ Actions (read)
+ Administration (read / write)
+ Metadata (read)
# Required Permissions for Organization Runners:
## Repository Permissions
+ Actions (read)
+ Metadata (read)
## Organization Permissions
+ Self-hosted runners (read / write)
```
1. Generate a Private Key
If you are working with Cloud Posse, upload this Private Key, GitHub App ID, and GitHub App Installation ID to 1Password
and skip the rest. Otherwise, complete the private key setup in `core--auto`.
1. Convert the private key to a PEM file using the following command:
`openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in {DOWNLOADED_FILE_NAME}.pem -out private-key-pkcs8.key`
1. Upload the PEM key file to the specified SSM path: `/github/runners/acme/private-key` in `core--auto`
1. Create another sensitive SSM parameter, `/github/runners/acme/registration-token`, in `core--auto` with
any basic value, such as "foo". This will be overwritten by the rotator.
1. Update the GitHub App ID and Installation ID in the `github-action-token-rotator` catalog.
:::tip
If you change the Private Key saved in SSM, redeploy `github-action-token-rotator`
:::
#### (ClickOps) Obtain the Runner Registration Token
1. Browse to
[https://github.com/organizations/\{Org\}/settings/actions/runners](https://github.com/organizations/\{Org\}/settings/actions/runners)
(Organization) or
[https://github.com/\{Org\}/\{Repo\}/settings/actions/runners](https://github.com/\{Org\}/\{Repo\}/settings/actions/runners)
(Repository)
2. Click the **New Runner** button (Organization) or **New Self Hosted Runner** button (Repository)
3. Copy the GitHub Runner token from the next screen. This is the only time you will see this token: if
you exit the `New {Self Hosted} Runner` screen and later return by clicking the `New {Self Hosted} Runner`
button again, the registration token will be invalidated and a new token will be generated.
4. Add the `REGISTRATION_TOKEN` to the `/github/token` SSM parameter in the account where GitHub runners are hosted
(usually `automation`), encrypted with KMS.
```bash
chamber write github token
```
# FAQ
## The GitHub Registration Token is not updated in SSM
The `github-action-token-rotator` runs an AWS Lambda function every 30 minutes. This lambda will attempt to use a
private key in its environment configuration to generate a GitHub Registration Token, and then store that token to AWS
SSM Parameter Store.
If the GitHub Registration Token parameter, `/github/runners/acme/registration-token`, is not updated, read through the
following tips:
1. The private key is stored at the given parameter path:
`parameter_store_private_key_path: /github/runners/acme/private-key`
1. The private key is Base 64 encoded. If you pull the key from SSM and decode it, it should begin with
`-----BEGIN PRIVATE KEY-----`
1. If the private key has changed, you must _redeploy_ `github-action-token-rotator`. Run a plan against the component
to make sure there are no changes required.
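The encoding check in the tips above can be scripted. Here is a sketch (a hypothetical helper, not part of the component) that verifies a Base64-encoded value decodes to a PKCS#8 PEM header. Note that the key downloaded from GitHub is PKCS#1 (`-----BEGIN RSA PRIVATE KEY-----`), which is why the `openssl pkcs8` conversion step above is required:
```python
import base64


def looks_like_pkcs8_pem(b64_value: str) -> bool:
    """Return True if the Base64-decoded value starts with a PKCS#8 PEM header."""
    try:
        decoded = base64.b64decode(b64_value).decode("utf-8", errors="replace")
    except Exception:
        return False
    return decoded.lstrip().startswith("-----BEGIN PRIVATE KEY-----")


# A PKCS#1 key, as downloaded from GitHub, fails the check until converted:
pkcs1 = base64.b64encode(b"-----BEGIN RSA PRIVATE KEY-----\n...").decode()
pkcs8 = base64.b64encode(b"-----BEGIN PRIVATE KEY-----\n...").decode()
print(looks_like_pkcs8_pem(pkcs1), looks_like_pkcs8_pem(pkcs8))  # False True
```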
## The GitHub Registration Token is valid, but the Runners are not registering with GitHub
If you deployed the `github-action-token-rotator` component with an invalid configuration and then
deployed the `github-runners` component, the instance runners will have failed to register with GitHub.
After you correct `github-action-token-rotator` and have a valid GitHub Registration Token in SSM, _destroy and
recreate_ the `github-runners` component.
If you cannot see the runners registered in GitHub, check the system logs on one of the EC2 instances in AWS in
`core--auto`.
## I cannot assume the role from GitHub Actions after deploying
The following error is very common if the GitHub workflow is missing the proper permissions.
```bash
Error: User: arn:aws:sts::***:assumed-role/acme-core-use1-auto-actions-runner@actions-runner-system/token-file-web-identity is not authorized to perform: sts:TagSession on resource: arn:aws:iam::999999999999:role/acme-plat-use1-dev-gha
```
In order to use a web identity, GitHub Action pipelines must have the following permission. See
[GitHub Action documentation for more](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services#adding-permissions-settings).
```yaml
permissions:
id-token: write # This is required for requesting the JWT
contents: read # This is required for actions/checkout
```
## FAQ
### Can we scope it to a GitHub org with both private and public repos?
Yes, but this requires GitHub Enterprise Cloud and the usage of runner groups to scope permissions of runners to specific
repos. If you set the scope to the entire org without runner groups and the org has both public and private repos,
then a misused self-hosted runner becomes a vulnerability within the public repos.
[https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups)
If you do not have GitHub Enterprise Cloud and runner groups cannot be utilized, then it's best to create new GitHub
runners per repo, or use the summerwind `actions-runner-controller` via a GitHub App to set the scope to specific repos.
### How can we see the current spot pricing?
Go to [ec2instances.info](http://ec2instances.info/)
### If we don’t use mixed at all does that mean we can’t do spot?
It’s possible to do spot without using mixed instances but you leave yourself open to zero instance availability with a
single instance type.
For example, if you wanted to use spot and use `t3.xlarge` in `us-east-2` and for some reason, AWS ran out of
`t3.xlarge`, you wouldn't have the option to choose another instance type and so all the GitHub Action runs would stall
until availability returned. If you use on-demand pricing, it’s more expensive, but you’re more likely to get scheduling
priority. For guaranteed availability, reserved instances are required.
### Do the overrides apply to both the on-demand and the spot instances, or only the spot instances?
Since the overrides affect the launch template, I believe they will affect both spot and on-demand instances, since
weighted capacity can be set for either. The override terraform option is on the ASG's `launch_template`:
> List of nested arguments provides the ability to specify multiple instance types. This will override the same
> parameter in the launch template. For on-demand instances, Auto Scaling considers the order of preference of instance
> types to launch based on the order specified in the overrides list. Defined below.

And in the terraform resource for `instances_distribution`:
> `spot_max_price` - (Optional) Maximum price per unit hour that the user is willing to pay for the Spot instances.
> Default: an empty string which means the on-demand price.

For a `mixed_instances_policy`, this will do purely on-demand:
```yaml
mixed_instances_policy:
instances_distribution:
on_demand_allocation_strategy: "prioritized"
on_demand_base_capacity: 1
on_demand_percentage_above_base_capacity: 0
spot_allocation_strategy: "capacity-optimized"
spot_instance_pools: null
spot_max_price: []
```
This will always do spot unless instances are unavailable, then switch to on-demand.
```yaml
mixed_instances_policy:
instances_distribution:
# ...
spot_max_price: 0.05
```
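To reason about how the distribution settings translate into instance counts, here is an illustrative sketch of the split between on-demand and spot capacity. This is not the ASG's actual algorithm, and AWS's exact rounding behavior may differ:
```python
import math


def split_capacity(desired: int, base: int, pct_above_base: int) -> tuple:
    """Illustrative split of an ASG's desired capacity into (on_demand, spot)
    given on_demand_base_capacity and on_demand_percentage_above_base_capacity."""
    above = max(desired - base, 0)
    # the base capacity is always on-demand; the remainder is split by percentage
    on_demand_above = math.ceil(above * pct_above_base / 100)
    on_demand = min(base, desired) + on_demand_above
    return on_demand, desired - on_demand


# base capacity 1, 0% on-demand above base: one on-demand instance, rest spot
print(split_capacity(10, 1, 0))  # -> (1, 9)
```
With `on_demand_base_capacity: 1` and `on_demand_percentage_above_base_capacity: 0`, a desired capacity of 10 yields one on-demand instance and nine spot instances.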
If you want a single instance type, you could still use the mixed instances policy to define that like above, or you can
use these other inputs and comment out the `mixed_instances_policy`:
```yaml
instance_type: "t3.xlarge"
# the below is optional in order to set the spot max price
instance_market_options:
  market_type: "spot"
  spot_options:
    block_duration_minutes: 6000
    instance_interruption_behavior: terminate
    max_price: 0.05
    spot_instance_type: persistent
    valid_until: null
```
The `overrides` will override the `instance_type` above.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
github-runners:
vars:
cpu_utilization_high_threshold_percent: 5
cpu_utilization_low_threshold_percent: 1
default_cooldown: 300
github_scope: company
instance_type: "t3.small"
max_size: 10
min_size: 1
runner_group: default
scale_down_cooldown_seconds: 2700
wait_for_capacity_timeout: 10m
mixed_instances_policy:
instances_distribution:
on_demand_allocation_strategy: "prioritized"
on_demand_base_capacity: 1
on_demand_percentage_above_base_capacity: 0
spot_allocation_strategy: "capacity-optimized"
spot_instance_pools: null
spot_max_price: null
override:
- instance_type: "t4g.large"
weighted_capacity: null
- instance_type: "m5.large"
weighted_capacity: null
- instance_type: "m5a.large"
weighted_capacity: null
- instance_type: "m5n.large"
weighted_capacity: null
- instance_type: "m5zn.large"
weighted_capacity: null
- instance_type: "m4.large"
weighted_capacity: null
- instance_type: "c5.large"
weighted_capacity: null
- instance_type: "c5a.large"
weighted_capacity: null
- instance_type: "c5n.large"
weighted_capacity: null
- instance_type: "c4.large"
weighted_capacity: null
```
## Variables
### Required Variables
`github_scope` (`string`) required
Scope of the runner (e.g. `cloudposse/example` for repo or `cloudposse` for org)
### Optional Variables
`account_map_environment_name` (`string`) optional
The name of the environment where `account_map` is provisioned
**Default value:** `"gbl"`
`account_map_stage_name` (`string`) optional
The name of the stage where `account_map` is provisioned
**Default value:** `"root"`
`account_map_tenant_name` (`string`) optional
The name of the tenant where `account_map` is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
`ami_filter` (`map(list(string))`) optional
Map of lists used to look up the AMI which will be used for the GitHub Actions Runner.
**Default value:**
```hcl
{
"name": [
"amzn2-ami-hvm-2.*-x86_64-ebs"
]
}
```
`ami_owners` (`list(string)`) optional
The list of owners used to select the AMI of action runner instances.
**Default value:**
```hcl
[
"amazon"
]
```
`block_device_mappings` optional
Specify volumes to attach to the instance besides the volumes specified by the AMI
**Type:**
```hcl
list(object({
device_name = string
no_device = bool
virtual_name = string
ebs = object({
delete_on_termination = bool
encrypted = bool
iops = number
kms_key_id = string
snapshot_id = string
volume_size = number
volume_type = string
})
}))
```
**Default value:** `[ ]`
The value against which the specified statistic is compared
**Default value:** `10`
`default_cooldown` (`number`) optional
The amount of time, in seconds, after a scaling activity completes before another scaling activity can start
**Default value:** `300`
`docker_compose_version` (`string`) optional
The version of docker-compose to install
**Default value:** `"1.29.2"`
`instance_type` (`string`) optional
Default instance type for the action runner.
**Default value:** `"m5.large"`
`max_instance_lifetime` (`number`) optional
The maximum amount of time, in seconds, that an instance can be in service, values must be either equal to 0 or between 604800 and 31536000 seconds
**Default value:** `null`
`mixed_instances_policy` optional
Policy to use a mixed group of on-demand/spot of differing types. Launch template is automatically generated. https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#mixed_instances_policy-1
**Type:**
```hcl
object({
instances_distribution = object({
on_demand_allocation_strategy = string
on_demand_base_capacity = number
on_demand_percentage_above_base_capacity = number
spot_allocation_strategy = string
spot_instance_pools = number
spot_max_price = string
})
override = list(object({
instance_type = string
weighted_capacity = number
}))
})
```
**Default value:** `null`
`runner_group` (`string`) optional
GitHub runner group
**Default value:** `"default"`
`runner_labels` (`list(string)`) optional
List of labels to add to the GitHub Runner (e.g. 'Amazon Linux 2').
**Default value:** `[ ]`
`userdata_post_install` (`string`) optional
Shell script to run post installation of github action runner
**Default value:** `""`
`userdata_pre_install` (`string`) optional
Shell script to run before installation of github action runner
**Default value:** `""`
`wait_for_capacity_timeout` (`string`) optional
A maximum duration that Terraform should wait for ASG instances to be healthy before timing out. (See also Waiting for Capacity below.) Setting this to '0' causes Terraform to skip all Capacity Waiting behavior
**Default value:** `"10m"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`autoscaling_group_arn`
The Amazon Resource Name (ARN) of the Auto Scaling Group.
`autoscaling_group_name`
The name of the Auto Scaling Group.
`autoscaling_lifecycle_hook_name`
The name of the Lifecycle Hook for the Auto Scaling Group.
`eventbridge_rule_arn`
The ARN of the Eventbridge rule for the EC2 lifecycle transition.
`eventbridge_target_arn`
The ARN of the Eventbridge target corresponding to the Eventbridge rule for the EC2 lifecycle transition.
`iam_role_arn`
The ARN of the IAM role associated with the Autoscaling Group
`ssm_document_arn`
The ARN of the SSM document.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `cloudinit`, version: `>= 2.2`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `cloudinit`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`autoscale_group` | 0.43.1 | [`cloudposse/ec2-autoscale-group/aws`](https://registry.terraform.io/modules/cloudposse/ec2-autoscale-group/aws/0.43.1) | n/a
`graceful_scale_in` | latest | `./modules/graceful_scale_in` | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`sg` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_instance_profile.github_action_runner`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile) (resource)
- [`aws_iam_policy.github_action_runner`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role.github_action_runner`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ami.runner`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami) (data source)
- [`aws_iam_policy_document.github_action_runner`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.instance_assume_role_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_ssm_parameter.github_token`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`cloudinit_config.config`](https://registry.terraform.io/providers/hashicorp/cloudinit/latest/docs/data-sources/config) (data source)
---
## github-webhook
This component provisions a GitHub webhook for a single GitHub repository.
You may want to use this component if you are provisioning webhooks for multiple ArgoCD deployment repositories across
GitHub organizations.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component. This example pulls the value of the webhook from `remote-state`
```yaml
components:
terraform:
webhook/cloudposse/argocd:
metadata:
component: github-webhook
vars:
github_organization: cloudposse
github_repository: argocd-deploy-non-prod
webhook_url: "https://argocd.ue2.dev.plat.cloudposse.org/api/webhook"
remote_state_github_webhook_enabled: true # default value added for visibility
remote_state_component_name: eks/argocd
```
### SSM Stored Value Example
Here's an example snippet for how to use this component with a value stored in SSM
```yaml
components:
terraform:
webhook/cloudposse/argocd:
metadata:
component: github-webhook
vars:
github_organization: cloudposse
github_repository: argocd-deploy-non-prod
webhook_url: "https://argocd.ue2.dev.plat.cloudposse.org/api/webhook"
remote_state_github_webhook_enabled: false
ssm_github_webhook_enabled: true
ssm_github_webhook: "/argocd/github/webhook"
```
### Input Value Example
Here's an example snippet for how to use this component with a value stored in Terraform variables.
```yaml
components:
terraform:
webhook/cloudposse/argocd:
metadata:
component: github-webhook
vars:
github_organization: cloudposse
github_repository: argocd-deploy-non-prod
webhook_url: "https://argocd.ue2.dev.plat.cloudposse.org/api/webhook"
remote_state_github_webhook_enabled: false
ssm_github_webhook_enabled: false
webhook_github_secret: "abcdefg"
```
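The three examples above imply a simple precedence for sourcing the webhook secret: remote-state first, then SSM, then the literal input variable. A sketch of that selection logic (a hypothetical helper, not the component's actual code):
```python
def resolve_webhook_secret(
    remote_state_enabled: bool,
    ssm_enabled: bool,
    remote_value: str = "",
    ssm_value: str = "",
    input_value: str = "",
) -> str:
    """Pick the webhook secret source: remote-state, then SSM, then the input var."""
    if remote_state_enabled:
        return remote_value  # read from the remote-state component's output
    if ssm_enabled:
        return ssm_value  # read from the SSM parameter at var.ssm_github_webhook
    return input_value  # fall back to var.webhook_github_secret
```
This mirrors why both `remote_state_github_webhook_enabled` and `ssm_github_webhook_enabled` must be `false` before `webhook_github_secret` is used.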
### ArgoCD Webhooks
For usage with the `eks/argocd` component, see
[Creating Webhooks with `github-webhook`](https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/argocd/README.md#creating-webhooks-with-github-webhook)
in that component's README.
## Variables
### Required Variables
`github_organization` (`string`) required
The name of the GitHub Organization where the repository lives
`github_repository` (`string`) required
The name of the GitHub repository where the webhook will be created
`region` (`string`) required
AWS Region.
`webhook_url` (`string`) required
The URL for the webhook
### Optional Variables
`github_base_url` (`string`) optional
This is the target GitHub base API endpoint. Providing a value is a requirement when working with GitHub Enterprise. It is optional to provide this value and it can also be sourced from the `GITHUB_BASE_URL` environment variable. The value must end with a slash, for example: `https://terraformtesting-ghe.westus.cloudapp.azure.com/`
**Default value:** `null`
`github_token_override` (`string`) optional
Use the value of this variable as the GitHub token instead of reading it from SSM
**Default value:** `null`
`remote_state_component_name` (`string`) optional
If fetching the Github Webhook value from remote-state, set this to the source component name. For example, `eks/argocd`.
**Default value:** `""`
`remote_state_github_webhook_enabled` (`bool`) optional
If `true`, pull the GitHub Webhook value from remote-state
**Default value:** `true`
`ssm_github_api_key` (`string`) optional
SSM path to the GitHub API key
**Default value:** `"/argocd/github/api_key"`
`ssm_github_webhook` (`string`) optional
Format string of the SSM parameter path where the webhook will be pulled from. Only used if `var.webhook_github_secret` is not given.
**Default value:** `"/github/webhook"`
`ssm_github_webhook_enabled` (`bool`) optional
If `true`, pull the GitHub Webhook value from AWS SSM Parameter Store using `var.ssm_github_webhook`
**Default value:** `false`
`webhook_github_secret` (`string`) optional
The value to use as the GitHub webhook secret. Set both `var.ssm_github_webhook_enabled` and `var.remote_state_github_webhook_enabled` to `false` in order to use this value
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `github`, version: `>= 4.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `github`, version: `>= 4.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`source` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/1.8.0/submodules/remote-state) | This can be any component that has the required output `github-webhook-value`. This is typically `eks/argocd`.
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`github_repository_webhook.default`](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_webhook) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.github_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.webhook`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## global-accelerator
This component provisions AWS Global Accelerator and its listeners.
## Usage
**Stack Level**: Global
Here are some example snippets for how to use this component:
```yaml
global-accelerator:
vars:
enabled: true
flow_logs_enabled: true
flow_logs_s3_bucket: examplecorp-ue1-devplatform-global-accelerator-flow-logs
flow_logs_s3_prefix: logs/
listeners:
- client_affinity: NONE
protocol: TCP
port_ranges:
- from_port: 80
to_port: 80
- from_port: 443
to_port: 443
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region.
### Optional Variables
`flow_logs_enabled` (`bool`) optional
Enable or disable flow logs for the Global Accelerator.
**Default value:** `false`
`flow_logs_s3_bucket_component` (`string`) optional
The component that deploys the S3 Bucket for the Accelerator Flow Logs. Required if `var.flow_logs_enabled` is set to `true`.
**Default value:** `null`
`flow_logs_s3_bucket_environment` (`string`) optional
The environment where the S3 Bucket for the Accelerator Flow Logs exists. Required if `var.flow_logs_enabled` is set to `true`.
**Default value:** `null`
`flow_logs_s3_bucket_stage` (`string`) optional
The stage where the S3 Bucket for the Accelerator Flow Logs exists. Required if `var.flow_logs_enabled` is set to `true`.
**Default value:** `null`
`flow_logs_s3_bucket_tenant` (`string`) optional
The tenant where the S3 Bucket for the Accelerator Flow Logs exists. Required if `var.flow_logs_enabled` is set to `true`.
**Default value:** `null`
`flow_logs_s3_prefix` (`string`) optional
The Object Prefix within the S3 Bucket for the Accelerator Flow Logs. Required if `var.flow_logs_enabled` is set to `true`.
**Default value:** `null`
`listeners` optional
List of listeners to configure for the Global Accelerator.
For more information, see: [aws_globalaccelerator_listener](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/globalaccelerator_listener).
**Type:**
```hcl
list(object({
client_affinity = string
port_ranges = list(object({
from_port = number
to_port = number
}))
protocol = string
}))
```
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`.
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
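As a concrete illustration, a minimal `null-label` configuration using `descriptor_formats` might look like the following (the input values and the `stack` descriptor name are hypothetical):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"   # hypothetical values
  stage     = "prod"
  name      = "app"

  # Render a "stack" descriptor as "<stage>-<name>"
  descriptor_formats = {
    stack = {
      format = "%v-%v"
      labels = ["stage", "name"]
    }
  }
}

# module.label.descriptors.stack would render as "prod-app"
output "stack_descriptor" {
  value = module.label.descriptors.stack
}
```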
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`dns_name`
DNS name of the Global Accelerator.
`listener_ids`
Global Accelerator Listener IDs.
`name`
Name of the Global Accelerator.
`static_ips`
Global Static IPs owned by the Global Accelerator.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`flow_logs_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/1.8.0/submodules/remote-state) | n/a
`global_accelerator` | 0.6.1 | [`cloudposse/global-accelerator/aws`](https://registry.terraform.io/modules/cloudposse/global-accelerator/aws/0.6.1) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## global-accelerator-endpoint-group
This component is responsible for provisioning a Global Accelerator Endpoint Group.
This component assumes that the `global-accelerator` component has already been deployed to the same account in the
environment specified by `var.global_accelerator_environment_name`.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
```yaml
components:
terraform:
global-accelerator-endpoint-group:
vars:
enabled: true
config:
endpoint_configuration:
- endpoint_lb_name: my-load-balancer
```
## Variables
### Required Variables
`config` (`any`) required
Endpoint Group configuration.
This object needs to be fully compliant with the `aws_globalaccelerator_endpoint_group` resource, except for the following differences:
* `listener_arn`, which is specified separately, is omitted.
* The values for `endpoint_configuration` and `port_override` within each object in `endpoint_groups` should be lists.
* Inside the `endpoint_configuration` block, `endpoint_lb_name` can be supplied in place of `endpoint_id` as long as it is a valid unique name for an existing ALB or NLB.
For more information, see: [aws_globalaccelerator_endpoint_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/globalaccelerator_endpoint_group).
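Putting those differences together, a fuller `config` value might look like the sketch below. The health check settings, `weight`, and port numbers are hypothetical values, `endpoint_configuration` and `port_override` are lists, and `endpoint_lb_name` stands in for `endpoint_id`:

```yaml
components:
  terraform:
    global-accelerator-endpoint-group:
      vars:
        enabled: true
        config:
          # Remaining keys pass through to aws_globalaccelerator_endpoint_group;
          # `listener_arn` is supplied separately and must be omitted here
          health_check_protocol: HTTPS
          health_check_port: 443
          endpoint_configuration:
            - endpoint_lb_name: my-load-balancer  # resolved to the ALB/NLB ARN
              weight: 100
          port_override:
            - listener_port: 443
              endpoint_port: 8443
```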
### Optional Variables
`global_accelerator_environment_name` (`string`) optional
The name of the environment where the global component `global-accelerator` is provisioned
**Default value:** `"gbl"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`.
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
`lakeformation_permissions` (`list(string)`) optional
List of permissions granted to the principal. Refer to https://docs.aws.amazon.com/lake-formation/latest/dg/lf-permissions-reference.html for more details
**Default value:**
```hcl
[
"ALL"
]
```
`lakeformation_permissions_enabled` (`bool`) optional
Whether to enable adding Lake Formation permissions to the IAM role that is used to access the Glue database
**Default value:** `true`
`location_uri` (`string`) optional
Location of the database (for example, an HDFS path)
**Default value:** `null`
`parameters` (`map(string)`) optional
Map of key-value pairs that define parameters and properties of the database
**Default value:** `null`
`target_database` optional
Configuration block for a target database for resource linking
**Type:**
```hcl
object({
# If `target_database` is provided (not `null`), all these fields are required
catalog_id = string
database_name = string
})
```
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`.
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`catalog_database_arn`
Catalog database ARN
`catalog_database_id`
Catalog database ID
`catalog_database_name`
Catalog database name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `utils`, version: `>= 1.15.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`glue_catalog_database` | 0.4.0 | [`cloudposse/glue/aws//modules/glue-catalog-database`](https://registry.terraform.io/modules/cloudposse/glue/aws/0.4.0/submodules/glue-catalog-database) | n/a
`glue_iam_role` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/1.8.0/submodules/remote-state) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_lakeformation_permissions.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lakeformation_permissions) (resource)
---
## catalog-table
This component provisions Glue catalog tables.
## Usage
**Stack Level**: Regional
```yaml
components:
terraform:
glue/catalog-table/example:
metadata:
component: glue/catalog-table
vars:
enabled: true
name: example
catalog_table_description: Glue catalog table example
glue_iam_component_name: glue/iam
glue_catalog_database_component_name: glue/catalog-database/example
lakeformation_permissions_enabled: true
lakeformation_permissions:
- "ALL"
storage_descriptor:
location: "s3://awsglue-datasets/examples/medicare/Medicare_Hospital_Provider.csv"
```
## Variables
### Required Variables
`glue_catalog_database_component_name` (`string`) required
Glue catalog database component name where the table metadata resides. Used to get the Glue catalog database from the remote state
`region` (`string`) required
AWS Region
### Optional Variables
`catalog_id` (`string`) optional
ID of the Glue Catalog and database to create the table in. If omitted, this defaults to the AWS Account ID plus the database name
**Default value:** `null`
`catalog_table_description` (`string`) optional
Description of the table
**Default value:** `null`
`catalog_table_name` (`string`) optional
Name of the table
**Default value:** `null`
`glue_iam_component_name` (`string`) optional
Glue IAM component name. Used to get the Glue IAM role from the remote state
**Default value:** `"glue/iam"`
`lakeformation_permissions` (`list(string)`) optional
List of permissions granted to the principal. Refer to https://docs.aws.amazon.com/lake-formation/latest/dg/lf-permissions-reference.html for more details
**Default value:**
```hcl
[
"ALL"
]
```
`lakeformation_permissions_enabled` (`bool`) optional
Whether to enable adding Lake Formation permissions to the IAM role that is used to access the Glue table
**Default value:** `true`
`owner` (`string`) optional
Owner of the table
**Default value:** `null`
`parameters` (`map(string)`) optional
Properties associated with this table, as a map of key-value pairs
**Default value:** `null`
`partition_index` optional
Configuration block for a maximum of 3 partition indexes
**Type:**
```hcl
object({
index_name = string
keys = list(string)
})
```
**Default value:** `null`
`partition_keys` (`map(string)`) optional
Configuration block of columns by which the table is partitioned. Only primitive types are supported as partition keys
**Default value:** `null`
`retention` (`number`) optional
Retention time for the table
**Default value:** `null`
`storage_descriptor` (`any`) optional
Configuration block for information about the physical storage of this table
**Default value:** `null`
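Because `storage_descriptor` is typed `any`, its shape follows the `storage_descriptor` block of the `aws_glue_catalog_table` resource. A hypothetical CSV-backed table (bucket name and column names are illustrative) might be configured like:

```yaml
storage_descriptor:
  location: "s3://examplecorp-data/medicare/"  # hypothetical bucket
  input_format: "org.apache.hadoop.mapred.TextInputFormat"
  output_format: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
  columns:
    - name: "provider_id"
      type: "string"
    - name: "average_total_payments"
      type: "double"
  ser_de_info:
    serialization_library: "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"
    parameters:
      "field.delim": ","
```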
`table_type` (`string`) optional
Type of this table (`EXTERNAL_TABLE`, `VIRTUAL_VIEW`, etc.). While optional, some Athena DDL queries such as `ALTER TABLE` and `SHOW CREATE TABLE` will fail if this argument is empty
**Default value:** `null`
`target_table` optional
Configuration block of a target table for resource linking
**Type:**
```hcl
object({
catalog_id = string
database_name = string
name = string
})
```
**Default value:** `null`
`view_expanded_text` (`string`) optional
If the table is a view, the expanded text of the view; otherwise null
**Default value:** `null`
`view_original_text` (`string`) optional
If the table is a view, the original text of the view; otherwise null
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
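To make the expected shape concrete, here is a hedged sketch of a `descriptor_formats` value. The descriptor name `account_name` and its format string are illustrative only, not defaults of this module:

```hcl
descriptor_formats = {
  # Hypothetical descriptor: joins three normalized labels with underscores,
  # producing an entry like "eg_uw2_prod" in the `descriptors` output map.
  account_name = {
    format = "%v_%v_%v"
    labels = ["namespace", "environment", "stage"]
  }
}
```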
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`catalog_table_arn`
Catalog table ARN
`catalog_table_id`
Catalog table ID
`catalog_table_name`
Catalog table name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `utils`, version: `>= 1.15.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`glue_catalog_database` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`glue_catalog_table` | 0.4.0 | [`cloudposse/glue/aws//modules/glue-catalog-table`](https://registry.terraform.io/modules/cloudposse/glue/aws/modules/glue-catalog-table/0.4.0) | n/a
`glue_iam_role` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_lakeformation_permissions.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lakeformation_permissions) (resource)
## Data Sources
The following data sources are used by this module:
---
## connection
This component provisions Glue connections.
## Usage
**Stack Level**: Regional
```yaml
components:
terraform:
glue/connection/example/redshift:
metadata:
component: glue/connection
vars:
connection_name: "jdbc-redshift"
connection_description: "Glue Connection for Redshift"
connection_type: "JDBC"
db_type: "redshift"
connection_db_name: "analytics"
ssm_path_username: "/glue/redshift/admin_user"
ssm_path_password: "/glue/redshift/admin_password"
ssm_path_endpoint: "/glue/redshift/endpoint"
physical_connection_enabled: true
vpc_component_name: "vpc"
```
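For orientation, a JDBC Glue connection ultimately resolves to connection properties of roughly this shape. All values below are placeholders; in this component the real username, password, and endpoint are read from the SSM parameter paths configured above:

```hcl
connection_properties = {
  # Standard AWS Glue JDBC connection property keys; values are placeholders.
  JDBC_CONNECTION_URL = "jdbc:redshift://example-endpoint:5439/analytics"
  USERNAME            = "admin_user"
  PASSWORD            = "example-password"
}
```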
## Variables
### Required Variables
`connection_type` (`string`) required
The type of the connection. Supported types: `JDBC`, `MONGODB`, `KAFKA`, and `NETWORK`. Defaults to `JDBC`
`region` (`string`) required
AWS Region
`vpc_component_name` (`string`) required
VPC component name
### Optional Variables
`catalog_id` (`string`) optional
The ID of the Data Catalog in which to create the connection. If none is supplied, the AWS account ID is used by default
**Default value:** `null`
`connection_db_name` (`string`) optional
Database name that the Glue connector will reference
**Default value:** `null`
`connection_description` (`string`) optional
Connection description
**Default value:** `null`
`connection_name` (`string`) optional
Connection name. If not provided, the name will be generated from the context
**Default value:** `null`
`connection_properties` (`map(string)`) optional
A map of key-value pairs used as parameters for this connection
**Default value:** `null`
`db_type` (`string`) optional
Database type for the connection URL: `postgres` or `redshift`
**Default value:** `"redshift"`
`match_criteria` (`list(string)`) optional
A list of criteria that can be used in selecting this connection
**Default value:** `null`
`physical_connection_enabled` (`bool`) optional
Flag to enable/disable physical connection
**Default value:** `false`
A convenience that adds to the rules a rule that allows all egress.
If this is false and no egress rules are specified via `rules` or `rule-matrix`, then no egress will be allowed.
**Default value:** `true`
Set `true` to enable terraform `create_before_destroy` behavior on the created security group.
We only recommend setting this `false` if you are importing an existing security group
that you do not want replaced and therefore need full control over its name.
Note that changing this value will always cause the security group to be replaced.
**Default value:** `true`
Additional Security Group rules that allow Glue to communicate with the target database
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`connection_arn`
Glue connection ARN
`connection_id`
Glue connection ID
`connection_name`
Glue connection name
`security_group_arn`
The ARN of the Security Group associated with the Glue connection
`security_group_id`
The ID of the Security Group associated with the Glue connection
`security_group_name`
The name of the Security Group associated with the Glue connection
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `utils`, version: `>= 1.15.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`glue_connection` | 0.4.0 | [`cloudposse/glue/aws//modules/glue-connection`](https://registry.terraform.io/modules/cloudposse/glue/aws/modules/glue-connection/0.4.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`security_group` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`target_security_group` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | This allows adding the necessary Security Group rules for Glue to communicate with Redshift
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.endpoint`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.user`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_subnet.selected`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnet) (data source)
---
## crawler
This component provisions Glue crawlers.
## Usage
**Stack Level**: Regional
```yaml
components:
terraform:
# The crawler crawls the data in an S3 bucket and puts the results into a table in the Glue Catalog.
# The crawler will read the first 2 MB of data from the file, and recognize the schema.
# After that, the crawler will sync the table.
glue/crawler/example:
metadata:
component: glue/crawler
vars:
enabled: true
name: example
crawler_description: "Glue crawler example"
glue_iam_component_name: "glue/iam"
glue_catalog_database_component_name: "glue/catalog-database/example"
glue_catalog_table_component_name: "glue/catalog-table/example"
schedule: "cron(0 1 * * ? *)"
schema_change_policy:
delete_behavior: LOG
update_behavior: null
```
## Variables
### Required Variables
List of custom classifiers. By default, all AWS classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification
**Default value:** `null`
`configuration` (`string`) optional
JSON string of configuration information
**Default value:** `null`
Glue catalog table component name where metadata resides. Used to get the Glue catalog table from the remote state
**Default value:** `null`
`glue_iam_component_name` (`string`) optional
Glue IAM component name. Used to get the Glue IAM role from the remote state
**Default value:** `"glue/iam"`
`jdbc_target` (`list(any)`) optional
List of nested JDBC target arguments
**Default value:** `null`
`lineage_configuration` optional
Specifies data lineage configuration settings for the crawler
**Type:**
```hcl
object({
crawler_lineage_settings = string
})
```
**Default value:** `null`
`mongodb_target` (`list(any)`) optional
List of nested MongoDB target arguments
**Default value:** `null`
`recrawl_policy` optional
A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run
**Type:**
```hcl
object({
recrawl_behavior = string
})
```
**Default value:** `null`
`s3_target` (`list(any)`) optional
List of nested Amazon S3 target arguments
**Default value:** `null`
`schedule` (`string`) optional
A cron expression for the schedule
**Default value:** `null`
`schema_change_policy` (`map(string)`) optional
Policy for the crawler's update and deletion behavior
**Default value:** `null`
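As a hedged example of a `schema_change_policy` value (the behavior settings shown are standard AWS Glue crawler options, not defaults of this component):

```hcl
schema_change_policy = {
  # Allowed: LOG | DELETE_FROM_DATABASE | DEPRECATE_IN_DATABASE
  delete_behavior = "LOG"
  # Allowed: LOG | UPDATE_IN_DATABASE
  update_behavior = "UPDATE_IN_DATABASE"
}
```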
`security_configuration` (`string`) optional
The name of Security Configuration to be used by the crawler
**Default value:** `null`
`table_prefix` (`string`) optional
The table prefix used for catalog tables that are created
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
Glue IAM policy description
**Default value:** `"Policy for AWS Glue with access to EC2, S3, and Cloudwatch Logs"`
`iam_role_description` (`string`) optional
Glue IAM role description
**Default value:** `"Role for AWS Glue with access to EC2, S3, and Cloudwatch Logs"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
The list of connections used for this job
**Default value:** `null`
`default_arguments` (`map(string)`) optional
The map of default arguments for the job. You can specify arguments here that your own job-execution script consumes, as well as arguments that AWS Glue itself consumes
**Default value:** `null`
`execution_property` optional
Execution property of the job
**Type:**
```hcl
object({
# The maximum number of concurrent runs allowed for the job. The default is 1.
max_concurrent_runs = number
})
```
**Default value:** `null`
`glue_iam_component_name` (`string`) optional
Glue IAM component name. Used to get the Glue IAM role from the remote state
**Default value:** `"glue/iam"`
`glue_job_command_name` (`string`) optional
The name of the job command. Defaults to `glueetl`. Use `pythonshell` for the Python Shell job type, or `gluestreaming` for the Streaming job type. `max_capacity` needs to be set if `pythonshell` is chosen
**Default value:** `"glueetl"`
Glue job script path in the S3 bucket
**Default value:** `null`
`glue_version` (`string`) optional
The version of Glue to use
**Default value:** `"2.0"`
`job_description` (`string`) optional
Glue job description
**Default value:** `null`
`job_name` (`string`) optional
Glue job name. If not provided, the name will be generated from the context
**Default value:** `null`
`max_capacity` (`number`) optional
The maximum number of AWS Glue data processing units (DPUs) that can be allocated when the job runs. Required when `pythonshell` is set; accepts either `0.0625` or `1.0`. Use the `number_of_workers` and `worker_type` arguments instead with `glue_version` 2.0 and above
**Default value:** `null`
`max_retries` (`number`) optional
The maximum number of times to retry the job if it fails
**Default value:** `null`
Non-overridable arguments for this job, specified as name-value pairs
**Default value:** `null`
`notification_property` optional
Notification property of the job
**Type:**
```hcl
object({
# After a job run starts, the number of minutes to wait before sending a job run delay notification
notify_delay_after = number
})
```
**Default value:** `null`
`number_of_workers` (`number`) optional
The number of workers of a defined `worker_type` that are allocated when a job runs
**Default value:** `null`
`security_configuration` (`string`) optional
The name of the Security Configuration to be associated with the job
**Default value:** `null`
`timeout` (`number`) optional
The job timeout in minutes. The default is 2880 minutes (48 hours) for `glueetl` and `pythonshell` jobs, and `null` (unlimited) for `gluestreaming` jobs
**Default value:** `2880`
`worker_type` (`string`) optional
The type of predefined worker that is allocated when a job runs. Accepts a value of `Standard`, `G.1X`, or `G.2X`
**Default value:** `null`
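Putting the options above together, a `glue/job` component instance might look like the following Atmos stack sketch (the component instance name and all values are illustrative assumptions, mirroring the registry usage example later in this document):

```yaml
components:
  terraform:
    glue/job/example:
      metadata:
        component: glue/job
      vars:
        enabled: true
        name: example
        job_description: "Example Glue ETL job"
        glue_job_command_name: glueetl
        glue_version: "2.0"
        worker_type: "G.1X"
        number_of_workers: 2
        timeout: 60
        max_retries: 1
        default_arguments:
          "--job-language": "python"
```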
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
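As an illustration, a hypothetical `stack` descriptor that joins three labels with underscores could be configured as follows (the descriptor name and format string are assumptions, not defaults):

```yaml
vars:
  descriptor_formats:
    stack:
      format: "%v_%v_%v"
      labels:
        - namespace
        - environment
        - stage
```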
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`job_arn`
Glue job ARN
`job_id`
Glue job ID
`job_name`
Glue job name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `utils`, version: `>= 1.15.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`glue_iam_role` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`glue_job` | 0.4.0 | [`cloudposse/glue/aws//modules/glue-job`](https://registry.terraform.io/modules/cloudposse/glue/aws/modules/glue-job/0.4.0) | n/a
`glue_job_s3_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_policy.glue_job_aws_tools_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role_policy_attachment.glue_jobs_aws_tools_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_iam_role_policy_attachment.glue_redshift_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.glue_job_aws_tools_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## registry
This component provisions Glue registries.
## Usage
**Stack Level**: Regional
```yaml
components:
terraform:
glue/registry/example:
metadata:
component: glue/registry
vars:
enabled: true
name: example
registry_name: example
registry_description: "Glue registry example"
```
## Variables
### Required Variables
Glue registry name. If not provided, the name will be generated from the context
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
Glue registry component name. Used to get the Glue registry from the remote state
`region` (`string`) required
AWS Region
### Optional Variables
`compatibility` (`string`) optional
The compatibility mode of the schema. Valid values are `NONE`, `DISABLED`, `BACKWARD`, `BACKWARD_ALL`, `FORWARD`, `FORWARD_ALL`, `FULL`, and `FULL_ALL`
**Default value:** `"NONE"`
`data_format` (`string`) optional
The data format of the schema definition. Valid values are `AVRO`, `JSON` and `PROTOBUF`
**Default value:** `"JSON"`
`schema_definition` (`string`) optional
The schema definition using the `data_format` setting
**Default value:** `null`
`schema_description` (`string`) optional
Glue schema description
**Default value:** `null`
`schema_name` (`string`) optional
Glue schema name. If not provided, the name will be generated from the context
**Default value:** `null`
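Combining the options above, a schema component instance might be configured as follows (the `glue/schema` component path and all values are illustrative assumptions, following the same stack conventions as the registry example above):

```yaml
components:
  terraform:
    glue/schema/example:
      metadata:
        component: glue/schema
      vars:
        enabled: true
        name: example
        compatibility: "BACKWARD"
        data_format: "JSON"
        schema_description: "Glue schema example"
        schema_definition: '{"type": "object", "properties": {"id": {"type": "string"}}}'
```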
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`latest_schema_version`
The latest version of the schema associated with the returned schema definition
`next_schema_version`
The next version of the schema associated with the returned schema definition
`registry_name`
Glue registry name
`schema_arn`
Glue schema ARN
`schema_checkpoint`
The version number of the checkpoint (the last time the compatibility mode was changed)
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires
**Default value:** `null`
`glue_job_component_name` (`string`) optional
Glue workflow job name. Used to get the Glue job from the remote state
**Default value:** `null`
`glue_job_timeout` (`number`) optional
The job run timeout in minutes. It overrides the timeout value of the job
**Default value:** `null`
Whether to start the created trigger
**Default value:** `true`
`trigger_name` (`string`) optional
Glue trigger name. If not provided, the name will be generated from the context
**Default value:** `null`
`type` (`string`) optional
The type of trigger. Options are `CONDITIONAL`, `SCHEDULED`, or `ON_DEMAND`
**Default value:** `"CONDITIONAL"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to the `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
A map of default run properties for this workflow. These properties are passed to all jobs associated with the workflow
**Default value:** `null`
`max_concurrent_runs` (`number`) optional
Maximum number of concurrent runs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs
**Default value:** `null`
Glue workflow name. If not provided, the name will be generated from the context
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
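As a hypothetical illustration, a `descriptor_formats` map that emits a short `stack` descriptor built from three labels might look like the following (the descriptor name `stack` and the choice of labels are examples, not required values):
```hcl
descriptor_formats = {
  # Produces descriptors["stack"] = "<tenant>-<environment>-<stage>",
  # with each label normalized the same way it appears in `id`.
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}
```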
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules; attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`workflow_arn`
Glue workflow ARN
`workflow_id`
Glue workflow ID
`workflow_name`
Glue workflow name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `utils`, version: `>= 1.15.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`glue_workflow` | 0.4.0 | [`cloudposse/glue/aws//modules/glue-workflow`](https://registry.terraform.io/modules/cloudposse/glue/aws/modules/glue-workflow/0.4.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## guardduty
This component is responsible for configuring GuardDuty within an AWS Organization.
AWS GuardDuty is a managed threat detection service. It is designed to help protect AWS accounts and workloads by
continuously monitoring for malicious activities and unauthorized behaviors. To detect potential security threats,
GuardDuty analyzes various data sources within your AWS environment, such as AWS CloudTrail logs, VPC Flow Logs, and DNS
logs.
Key features and components of AWS GuardDuty include:
- Threat detection: GuardDuty employs machine learning algorithms, anomaly detection, and integrated threat intelligence
to identify suspicious activities, unauthorized access attempts, and potential security threats. It analyzes event
logs and network traffic data to detect patterns, anomalies, and known attack techniques.
- Threat intelligence: GuardDuty leverages threat intelligence feeds from AWS, trusted partners, and the global
community to enhance its detection capabilities. It uses this intelligence to identify known malicious IP addresses,
domains, and other indicators of compromise.
- Real-time alerts: When GuardDuty identifies a potential security issue, it generates real-time alerts that can be
delivered through AWS CloudWatch Events. These alerts can be integrated with other AWS services like Amazon SNS or AWS
Lambda for immediate action or custom response workflows.
- Multi-account support: GuardDuty can be enabled across multiple AWS accounts, allowing centralized management and
monitoring of security across an entire organization's AWS infrastructure. This helps to maintain consistent security
policies and practices.
- Automated remediation: GuardDuty integrates with other AWS services, such as AWS Macie, AWS Security Hub, and AWS
Systems Manager, to facilitate automated threat response and remediation actions. This helps to minimize the impact of
security incidents and reduces the need for manual intervention.
- Security findings and reports: GuardDuty provides detailed security findings and reports that include information
about detected threats, affected AWS resources, and recommended remediation actions. These findings can be accessed
through the AWS Management Console or retrieved via APIs for further analysis and reporting.
GuardDuty offers a scalable and flexible approach to threat detection within AWS environments, providing organizations
with an additional layer of security to proactively identify and respond to potential security risks.
## Supported GuardDuty Protection Features
This component supports the following GuardDuty protection features:
- **S3 Protection**: Monitors S3 data events to detect suspicious activities in your S3 buckets
- **EKS Audit Log Monitoring**: Analyzes Kubernetes audit logs from Amazon EKS clusters
- **Malware Protection**: Scans EBS volumes attached to EC2 instances for malware
- **Lambda Protection**: Monitors Lambda function network activity logs
- **Runtime Monitoring**: Provides runtime threat detection for EC2, ECS, and EKS workloads with automatic security agent management
## SNS Notifications
This component creates its own SNS topic, SQS queue, and KMS key for GuardDuty findings notifications instead of using
the ones from the upstream `cloudposse/guardduty/aws` module. This is a workaround for
[cloudposse/terraform-aws-guardduty#10](https://github.com/cloudposse/terraform-aws-guardduty/issues/10) where the
module's SNS topic encryption doesn't grant EventBridge permission to decrypt messages.
The component creates:
- A custom **KMS key** with proper permissions for EventBridge, SNS, and SQS services
- An **SNS topic** encrypted with the custom KMS key
- An **SQS queue** subscribed to the SNS topic for message processing
- **CloudWatch Event Rules** to route GuardDuty findings to the SNS topic
To enable notifications, set `create_sns_topic: true` and `cloudwatch_enabled: true`.
## Usage
**Stack Level**: Regional
## Prerequisites
Before deploying this component, ensure that GuardDuty trusted access is enabled in AWS Organizations. This can be done
by adding `guardduty.amazonaws.com` to the `aws_service_access_principals` list in your `account` component, or by
running the following command from the management account:
```bash
aws organizations enable-aws-service-access --service-principal guardduty.amazonaws.com
```
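If you manage trusted access through the `account` component rather than the CLI, the service principal can be added alongside your existing entries. A sketch (the stack path is illustrative, and this assumes your `account` component already accepts an `aws_service_access_principals` list):
```yaml
# stacks/orgs/acme/core/gbl/root.yaml (illustrative path)
components:
  terraform:
    account:
      vars:
        aws_service_access_principals:
          - guardduty.amazonaws.com
          # ...any other service principals you already enable
```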
## Deployment Overview
This component is complex in that it must be deployed multiple times with different variables set to configure the AWS
Organization successfully.
It is further complicated by the fact that you must deploy each of the component instances described below to every
region that existed before March 2019 and to any regions that have been opted-in as described in the
[AWS Documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions).
In the examples below, we assume that the AWS Organization Management account is `root` and the AWS Organization
Delegated Administrator account is `security`, both in the `core` tenant.
### Step 1: Deploy to Delegated Administrator Account
First, the component is deployed to the
[Delegated Administrator](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_organizations.html) account in each
region in order to configure the central GuardDuty detector that each account will send its findings to.
```yaml
# core-ue1-security
components:
  terraform:
    guardduty/delegated-administrator/ue1:
      metadata:
        component: guardduty
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
```
```bash
atmos terraform apply guardduty/delegated-administrator/ue1 -s core-ue1-security
atmos terraform apply guardduty/delegated-administrator/ue2 -s core-ue2-security
atmos terraform apply guardduty/delegated-administrator/uw1 -s core-uw1-security
# ... other regions
```
### Step 2: Deploy to Organization Management (root) Account
Next, the component is deployed to the AWS Organization Management account (a.k.a. `root`) in order to set the AWS
Organization Delegated Administrator account.
Note that you must use the `SuperAdmin` permissions as we are deploying to the AWS Organization Management account. Since
we are using the `SuperAdmin` user, it will already have access to the state bucket, so we set the `role_arn` of the
backend config to null and set `var.privileged` to `true`.
```yaml
# core-ue1-root
components:
  terraform:
    guardduty/root/ue1:
      metadata:
        component: guardduty
      backend:
        s3:
          role_arn: null
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        privileged: true
```
```bash
atmos terraform apply guardduty/root/ue1 -s core-ue1-root
atmos terraform apply guardduty/root/ue2 -s core-ue2-root
atmos terraform apply guardduty/root/uw1 -s core-uw1-root
# ... other regions
```
### Step 3: Deploy Organization Settings in Delegated Administrator Account
Finally, the component is deployed to the Delegated Administrator Account again in order to create the organization-wide
configuration for the AWS Organization, but with `var.admin_delegated` set to `true` to indicate that the delegation has
already been performed from the Organization Management account.
```yaml
# core-ue1-security
components:
  terraform:
    guardduty/org-settings/ue1:
      metadata:
        component: guardduty
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        admin_delegated: true
```
```bash
atmos terraform apply guardduty/org-settings/ue1 -s core-ue1-security
atmos terraform apply guardduty/org-settings/ue2 -s core-ue2-security
atmos terraform apply guardduty/org-settings/uw1 -s core-uw1-security
# ... other regions
```
### Enabling GuardDuty Protection Features
You can enable various GuardDuty protection features by setting the corresponding variables. Here's an example with
all protection features enabled:
```yaml
# core-ue1-security
components:
  terraform:
    guardduty/org-settings/ue1:
      metadata:
        component: guardduty
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        admin_delegated: true
        # Protection features
        s3_protection_enabled: true
        kubernetes_audit_logs_enabled: true
        malware_protection_scan_ec2_ebs_volumes_enabled: true
        lambda_network_logs_enabled: true
        # Runtime Monitoring with automatic agent management
        runtime_monitoring_enabled: true
        runtime_monitoring_additional_config:
          eks_addon_management_enabled: true
          ecs_fargate_agent_management_enabled: true
          ec2_agent_management_enabled: true
```
> **Note**: You cannot enable both `eks_runtime_monitoring_enabled` and `runtime_monitoring_enabled` at the same time.
> Use `runtime_monitoring_enabled` if you want runtime monitoring across EC2, ECS, and EKS resources.
### Enabling SNS Notifications
To enable SNS notifications for GuardDuty findings, set `create_sns_topic` and `cloudwatch_enabled` to `true`:
```yaml
# core-ue1-security
components:
  terraform:
    guardduty/delegated-administrator/ue1:
      metadata:
        component: guardduty
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        # Enable SNS notifications
        create_sns_topic: true
        cloudwatch_enabled: true
```
This will create:
- A KMS key with permissions for EventBridge, SNS, and SQS
- An encrypted SNS topic for GuardDuty findings
- An SQS queue subscribed to the SNS topic
- CloudWatch Event Rules to route findings to SNS
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_tenant` (`string`) optional
The tenant where the `account_map` component required by remote-state is deployed
**Default value:** `"core"`
`admin_delegated` (`bool`) optional
A flag to indicate if the AWS Organization-wide settings should be created. This can only be done after the GuardDuty
Administrator account has already been delegated from the AWS Org Management account (usually 'root'). See the
Deployment section of the README for more information.
**Default value:** `false`
`auto_enable_organization_members` (`string`) optional
Indicates the auto-enablement configuration of GuardDuty for the member accounts in the organization. Valid values are `ALL`, `NEW`, `NONE`.
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration#auto_enable_organization_members
**Default value:** `"NEW"`
`cloudwatch_enabled` (`bool`) optional
Flag to indicate whether CloudWatch logging should be enabled for GuardDuty
**Default value:** `false`
`cloudwatch_event_rule_pattern_detail_type` (`string`) optional
The detail-type pattern used to match events that will be sent to SNS.
For more information, see:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html
https://docs.aws.amazon.com/eventbridge/latest/userguide/event-types.html
https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html
**Default value:** `"GuardDuty Finding"`
`create_sns_topic` (`bool`) optional
Flag to indicate whether an SNS topic should be created for notifications. If you want to send findings to a new SNS
topic, set this to true and provide a valid configuration for subscribers.
**Default value:** `false`
The name of the component that created the GuardDuty detector.
**Default value:** `"guardduty/delegated-administrator"`
`detector_features` optional
A map of detector features for streaming foundational data sources to detect communication with known malicious domains and IP addresses and identify anomalous behavior.
For more information, see:
https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-features-activation-model.html#guardduty-features
- `feature_name`: The name of the detector feature. Possible values: `S3_DATA_EVENTS`, `EKS_AUDIT_LOGS`, `EBS_MALWARE_PROTECTION`, `RDS_LOGIN_EVENTS`, `EKS_RUNTIME_MONITORING`, `LAMBDA_NETWORK_LOGS`, `RUNTIME_MONITORING`. Specifying both EKS Runtime Monitoring (`EKS_RUNTIME_MONITORING`) and Runtime Monitoring (`RUNTIME_MONITORING`) will cause an error; you can add only one of these two features because Runtime Monitoring already includes threat detection for Amazon EKS resources. For more information, see: https://docs.aws.amazon.com/guardduty/latest/APIReference/API_DetectorFeatureConfiguration.html
- `status`: The status of the detector feature. Valid values: `ENABLED` or `DISABLED`.
- `additional_configuration`: Optional list of additional configurations for a feature in your GuardDuty account. For more information, see: https://docs.aws.amazon.com/guardduty/latest/APIReference/API_DetectorAdditionalConfiguration.html
  - `addon_name`: The name of the add-on the configuration applies to. Possible values: `EKS_ADDON_MANAGEMENT`, `ECS_FARGATE_AGENT_MANAGEMENT`, and `EC2_AGENT_MANAGEMENT`.
  - `status`: The status of the add-on. Valid values: `ENABLED` or `DISABLED`.
**Type:**
```hcl
map(object({
  feature_name = string
  status       = string
  additional_configuration = optional(list(object({
    addon_name = string
    status     = string
  })), [])
}))
```
**Default value:** `{ }`
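For example, `detector_features` could be used in place of the individual feature flags. A sketch enabling Runtime Monitoring with EC2 agent management plus S3 data events (the map keys `runtime_monitoring` and `s3_data_events` are arbitrary labels; the feature and add-on names come from the AWS API reference linked above):
```yaml
detector_features:
  runtime_monitoring:
    feature_name: RUNTIME_MONITORING
    status: ENABLED
    additional_configuration:
      - addon_name: EC2_AGENT_MANAGEMENT
        status: ENABLED
  s3_data_events:
    feature_name: S3_DATA_EVENTS
    status: ENABLED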
`eks_runtime_monitoring_enabled` (`bool`) optional
If `true`, enables EKS Runtime Monitoring.
Note: Do not enable both EKS_RUNTIME_MONITORING and RUNTIME_MONITORING as Runtime Monitoring already includes
threat detection for Amazon EKS resources.
For more information, see:
https://docs.aws.amazon.com/guardduty/latest/ug/eks-runtime-monitoring.html
**Default value:** `false`
`finding_publishing_frequency` (`string`) optional
The frequency of notifications sent for finding occurrences. If the detector is a GuardDuty member account, the value
is determined by the GuardDuty master account and cannot be modified; otherwise it defaults to `SIX_HOURS`.
For standalone and GuardDuty master accounts, it must be configured in Terraform to enable drift detection.
Valid values for standalone and master accounts: `FIFTEEN_MINUTES`, `ONE_HOUR`, `SIX_HOURS`.
For more information, see:
https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html#guardduty_findings_cloudwatch_notification_frequency
**Default value:** `null`
`findings_notification_arn` (`string`) optional
The ARN for an SNS topic to send findings notifications to. This is only used if create_sns_topic is false.
If you want to send findings to an existing SNS topic, set this to the ARN of the existing topic and set
create_sns_topic to false.
**Default value:** `null`
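For instance, to route findings to an SNS topic you already manage instead of creating one (the topic ARN below is a placeholder):
```yaml
create_sns_topic: false
cloudwatch_enabled: true
findings_notification_arn: arn:aws:sns:us-east-1:111111111111:example-findings-topic
```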
`global_environment` (`string`) optional
Global environment name
**Default value:** `"gbl"`
`kubernetes_audit_logs_enabled` (`bool`) optional
If `true`, enables Kubernetes audit logs as a data source for Kubernetes protection.
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_detector#audit_logs
**Default value:** `false`
`lambda_network_logs_enabled` (`bool`) optional
If `true`, enables Lambda network logs as a data source for Lambda protection.
For more information, see:
https://docs.aws.amazon.com/guardduty/latest/ug/lambda-protection.html
**Default value:** `false`
`malware_protection_scan_ec2_ebs_volumes_enabled` (`bool`) optional
Configure whether Malware Protection is enabled as a data source for EC2 instance EBS volumes in GuardDuty.
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_detector#malware-protection
**Default value:** `false`
`organization_management_account_name` (`string`) optional
The name of the AWS Organization management account
**Default value:** `null`
`privileged` (`bool`) optional
true if the default provider already has access to the backend
**Default value:** `false`
`root_account_stage` (`string`) optional
The stage name for the Organization root (management) account. This is used to lookup account IDs from account names
using the `account-map` component.
**Default value:** `"root"`
`runtime_monitoring_enabled` (`bool`) optional
If `true`, enables Runtime Monitoring for EC2, ECS, and EKS resources.
Note: Runtime Monitoring already includes threat detection for Amazon EKS resources, so you should not enable both
RUNTIME_MONITORING and EKS_RUNTIME_MONITORING features.
For more information, see:
https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring.html
**Default value:** `false`
`s3_protection_enabled` (`bool`) optional
If `true`, enables S3 protection.
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_detector#s3-logs
**Default value:** `true`
`subscribers` optional
A map of subscription configurations for SNS topics
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic_subscription#argument-reference
- `protocol`: The protocol to use. Possible values: `sqs`, `sms`, `lambda`, `application` (`http`/`https` are partially supported, and `email` is an option but is unsupported in Terraform; see the links above).
- `endpoint`: The endpoint to send data to; the contents vary with the protocol (see the links above).
- `endpoint_auto_confirms`: Boolean indicating whether the endpoint is capable of auto-confirming the subscription, e.g., PagerDuty. Default is `false`.
- `raw_message_delivery`: Boolean indicating whether to enable raw message delivery (the original message is passed directly, not wrapped in JSON with the original message in the `message` property). Default is `false`.
**Type:**
```hcl
map(object({
  protocol               = string
  endpoint               = string
  endpoint_auto_confirms = bool
  raw_message_delivery   = bool
}))
```
**Default value:** `{ }`
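A hypothetical `subscribers` map delivering findings to an SQS queue might look like this (the map key `findings_queue` is an arbitrary label and the queue ARN is a placeholder):
```yaml
subscribers:
  findings_queue:
    protocol: sqs
    endpoint: arn:aws:sqs:us-east-1:111111111111:example-findings-queue
    endpoint_auto_confirms: false
    raw_message_delivery: true
```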
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules; attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cloudwatch_event_rule_arn`
The ARN of the CloudWatch Event Rule for GuardDuty findings
`cloudwatch_event_rule_id`
The ID of the CloudWatch Event Rule for GuardDuty findings
`delegated_administrator_account_id`
The AWS Account ID of the AWS Organization delegated administrator account
`guardduty_delegated_detector_arn`
The ARN of the GuardDuty detector from the delegated administrator account (via remote state)
`guardduty_delegated_detector_id`
The ID of the GuardDuty detector from the delegated administrator account (via remote state)
`guardduty_detector_arn`
The ARN of the GuardDuty detector created by the component in this account
`guardduty_detector_id`
The ID of the GuardDuty detector created by the component in this account
`root_kms_key_alias`
The alias of the KMS key used for encrypting the GuardDuty SNS topic
`root_kms_key_arn`
The ARN of the KMS key used for encrypting the GuardDuty SNS topic
`root_kms_key_id`
The ID of the KMS key used for encrypting the GuardDuty SNS topic
`root_sns_topic_arn`
The ARN of the root-level SNS topic created for GuardDuty findings
`root_sns_topic_id`
The ID of the root-level SNS topic created for GuardDuty findings
`root_sns_topic_name`
The name of the root-level SNS topic created for GuardDuty findings
`root_sqs_queue_arn`
The ARN of the SQS queue subscribed to the GuardDuty SNS topic
`root_sqs_queue_name`
The name of the SQS queue subscribed to the GuardDuty SNS topic
`root_sqs_queue_url`
The URL of the SQS queue subscribed to the GuardDuty SNS topic
`sns_topic_name`
The name of the SNS topic created for GuardDuty findings
`sns_topic_subscriptions`
The SNS topic subscriptions for GuardDuty findings
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 5.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Providers
- `aws`, version: `>= 5.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`findings_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`guardduty` | 1.0.0 | [`cloudposse/guardduty/aws`](https://registry.terraform.io/modules/cloudposse/guardduty/aws/1.0.0) | If we are in the AWS Org designated administrator account, enable the GuardDuty detector and optionally create an SNS topic for notifications and CloudWatch event rules for findings. NOTE: We set create_sns_topic=false in the module and create our own SNS topic instead. This is because of https://github.com/cloudposse/terraform-aws-guardduty/issues/10 — the module's SNS topic encryption doesn't grant EventBridge permission to decrypt messages.
`guardduty_delegated_detector` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kms_key` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | KMS key for encrypting the GuardDuty SNS topic. This is required because of https://github.com/cloudposse/terraform-aws-guardduty/issues/10 — the default AWS-managed key doesn't grant EventBridge permission to decrypt messages.
`queue_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`sns_topic` | 1.2.0 | [`cloudposse/sns-topic/aws`](https://registry.terraform.io/modules/cloudposse/sns-topic/aws/1.2.0) | n/a
`sqs` | 4.3.1 | [`terraform-aws-modules/sqs/aws`](https://registry.terraform.io/modules/terraform-aws-modules/sqs/aws/4.3.1) | SQS queue for GuardDuty findings. This queue is subscribed to the SNS topic to receive GuardDuty findings.
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudwatch_event_rule.findings`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_rule) (resource)
- [`aws_cloudwatch_event_target.imported_findings`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target) (resource)
- [`aws_guardduty_organization_admin_account.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_admin_account) (resource)
- [`aws_guardduty_organization_configuration.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration) (resource)
- [`aws_guardduty_organization_configuration_feature.additional`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration_feature) (resource)
- [`aws_guardduty_organization_configuration_feature.ebs_malware_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration_feature) (resource)
- [`aws_guardduty_organization_configuration_feature.eks_audit_logs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration_feature) (resource)
- [`aws_guardduty_organization_configuration_feature.eks_runtime_monitoring`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration_feature) (resource)
- [`aws_guardduty_organization_configuration_feature.lambda_network_logs`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration_feature) (resource)
- [`aws_guardduty_organization_configuration_feature.runtime_monitoring`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration_feature) (resource)
- [`aws_guardduty_organization_configuration_feature.s3_data_events`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/guardduty_organization_configuration_feature) (resource)
- [`aws_sns_topic_policy.sns_topic_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic_policy) (resource)
- [`aws_sqs_queue_policy.sqs_queue_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sqs_queue_policy) (resource)
- [`awsutils_guardduty_organization_settings.this`](https://registry.terraform.io/providers/cloudposse/awsutils/latest/docs/resources/guardduty_organization_settings) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_caller_identity.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.sns_topic_combined_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_organizations_organization.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/organizations_organization) (data source)
- [`aws_region.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) (data source)
---
## iam-policy
Terraform component that composes IAM policy documents and creates an AWS IAM policy.
- Uses the Cloud Posse `cloudposse/iam-policy/aws` module to merge multiple policy document sources with override semantics.
- Exports the rendered policy JSON (`output.json`) and the ARN of the created policy (`output.policy_arn`).
- Supports adding base (`iam_source_policy_documents`) and override (`iam_override_policy_documents`) JSON documents, plus first-class typed `iam_policy` statements.
## Usage
**Stack Level**: Regional or Global
Minimal Atmos stack example showing inline statements plus merged JSON documents:
```yaml
components:
  terraform:
    policy:
      vars:
        description: "Example IAM policy"
        # Optional: typed statements (compatible with aws_iam_policy_document)
        iam_policy:
          - policy_id: "EC2DescribeInstances"
            statements:
              - sid: "EC2DescribeInstances"
                effect: "Allow"
                actions: ["ec2:DescribeInstances"]
                resources: ["*"]
        # Optional: base source policy documents (JSON strings)
        iam_source_policy_documents:
          - |
            {"Version":"2012-10-17","Statement":[{"Sid":"KMS","Effect":"Allow","Action":["kms:*"],"Resource":"*"}]}
        # Optional: higher-precedence override documents (JSON strings)
        iam_override_policy_documents:
          - |
            {"Version":"2012-10-17","Statement":[{"Sid":"S3ReadWrite","Effect":"Allow","Action":["s3:GetObject","s3:ListBucket","s3:ListBucketMultipartUploads","s3:ListBucketVersions","s3:ListMultipartUploadParts","s3:PutObject","s3:HeadObject"],"Resource":"*"}]}
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`description` (`string`) optional
Description of created IAM policy
**Default value:** `null`
`iam_override_policy_documents` (`list(string)`) optional
List of IAM policy documents (as JSON strings) that are merged together into the exported document with higher precedence.
In merging, statements with non-blank SIDs will override statements with the same SID
from earlier documents in the list and from other "source" documents.
**Default value:** `null`
`iam_policy` optional
IAM policy as list of Terraform objects, compatible with Terraform `aws_iam_policy_document` data source
except that `source_policy_documents` and `override_policy_documents` are not included.
Use inputs `iam_source_policy_documents` and `iam_override_policy_documents` for that.
**Type:**
```hcl
list(object({
  policy_id = optional(string, null)
  version   = optional(string, null)
  statements = list(object({
    sid           = optional(string, null)
    effect        = optional(string, null)
    actions       = optional(list(string), null)
    not_actions   = optional(list(string), null)
    resources     = optional(list(string), null)
    not_resources = optional(list(string), null)
    conditions = optional(list(object({
      test     = string
      variable = string
      values   = list(string)
    })), [])
    principals = optional(list(object({
      type        = string
      identifiers = list(string)
    })), [])
    not_principals = optional(list(object({
      type        = string
      identifiers = list(string)
    })), [])
  }))
}))
```
**Default value:** `[ ]`
`iam_source_policy_documents` (`list(string)`) optional
List of IAM policy documents (as JSON strings) that are merged together into the exported document.
Statements defined in `iam_source_policy_documents` must have unique SIDs and be distinct from SIDs
in `iam_policy`.
Statements in these documents will be overridden by statements with the same SID in `iam_override_policy_documents`.
**Default value:** `null`
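To make the precedence concrete, here is a minimal, hypothetical stack fragment (the `Logs` statement and its actions are illustrative): both documents define a statement with the same SID, so the version from `iam_override_policy_documents` is the one that appears in the exported document.

```yaml
components:
  terraform:
    policy:
      vars:
        iam_source_policy_documents:
          - |
            {"Version":"2012-10-17","Statement":[{"Sid":"Logs","Effect":"Allow","Action":["logs:PutLogEvents"],"Resource":"*"}]}
        # The statement below replaces the source statement above because the SIDs match
        iam_override_policy_documents:
          - |
            {"Version":"2012-10-17","Statement":[{"Sid":"Logs","Effect":"Allow","Action":["logs:PutLogEvents","logs:CreateLogStream"],"Resource":"*"}]}
```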
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`json`
JSON body of the IAM policy document
`policy_arn`
ARN of created IAM policy
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.22.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## iam-role
This component is responsible for provisioning simple IAM roles. If a more complicated IAM role and policy are desired
then it is better to use a separate component specific to that role.
## Usage
**Stack Level**: Global
Abstract base component:
```yaml
# stacks/catalog/iam-role.yaml
components:
  terraform:
    iam-role/defaults:
      metadata:
        type: abstract
        component: iam-role
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
```
Use case: an IAM role for AWS Workspaces Directory, since this service does not have a service-linked role.
```yaml
# stacks/catalog/aws-workspaces/directory/iam-role.yaml
import:
  - catalog/iam-role
components:
  terraform:
    aws-workspaces/directory/iam-role:
      metadata:
        component: iam-role
        inherits:
          - iam-role/defaults
      vars:
        name: workspaces_DefaultRole
        # Added _ here to allow the _ character
        regex_replace_chars: /[^a-zA-Z0-9-_]/
        # Keep the current name casing
        label_value_case: none
        # Use the "name" without the other context inputs i.e. namespace, tenant, environment, attributes
        use_fullname: false
        role_description: |
          Used with AWS Workspaces Directory. The name of the role does not match the normal naming convention because this name is a requirement to work with the service. This role has to be used until AWS provides the respective service linked role.
        principals:
          Service:
            - workspaces.amazonaws.com
        # This will prevent the creation of a managed IAM policy
        policy_document_count: 0
        managed_policy_arns:
          - arn:aws:iam::aws:policy/AmazonWorkSpacesServiceAccess
          - arn:aws:iam::aws:policy/AmazonWorkSpacesSelfServiceAccess
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`role_description` (`string`) required
The description of the IAM role that is visible in the IAM role manager
### Optional Variables
`assume_role_actions` (`list(string)`) optional
The IAM action to be granted by the AssumeRole policy
**Default value:**
```hcl
[
  "sts:AssumeRole",
  "sts:SetSourceIdentity",
  "sts:TagSession"
]
```
`assume_role_conditions` optional
List of conditions for the assume role policy
**Type:**
```hcl
list(object({
  test     = string
  variable = string
  values   = list(string)
}))
```
**Default value:** `[ ]`
`assume_role_policy` (`string`) optional
A JSON assume role policy document. If set, this will be used as the assume role policy and the principals, assume_role_conditions, and assume_role_actions variables will be ignored.
**Default value:** `null`
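As a hedged sketch (the component instance and role names here are illustrative, not part of this reference), a stack fragment supplying a fully custom trust policy; per the note above, `principals`, `assume_role_conditions`, and `assume_role_actions` are then ignored:

```yaml
components:
  terraform:
    custom-trust/iam-role:
      metadata:
        component: iam-role
      vars:
        name: custom-trust
        role_description: "Role with a fully custom trust policy"
        assume_role_policy: |
          {
            "Version": "2012-10-17",
            "Statement": [{
              "Effect": "Allow",
              "Principal": { "Service": "ec2.amazonaws.com" },
              "Action": "sts:AssumeRole"
            }]
          }
```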
`eks_oidc_issuer_url` (`string`) optional
The OIDC issuer URL from the EKS cluster (without https:// prefix).
Format: oidc.eks.<region>.amazonaws.com/id/<cluster-id>
If not specified, it will be derived from eks_oidc_provider_arn.
**Default value:** `""`
`eks_oidc_provider_arn` (`string`) optional
ARN of the EKS OIDC provider. Required when eks_oidc_provider_enabled is true.
Format: arn:aws:iam::<account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<cluster-id>
**Default value:** `""`
`eks_oidc_provider_enabled` (`bool`) optional
Enable EKS OIDC provider for IRSA (IAM Roles for Service Accounts)
**Default value:** `false`
`github_oidc_provider_arn` (`string`) optional
ARN of the GitHub OIDC provider
**Default value:** `""`
`instance_profile_enabled` (`bool`) optional
Create EC2 Instance Profile for the role
**Default value:** `false`
`managed_policy_arns` (`set(string)`) optional
List of managed policies to attach to created role
**Default value:** `[ ]`
`max_session_duration` (`number`) optional
The maximum session duration (in seconds) for the role. Can have a value from 1 hour to 12 hours
**Default value:** `3600`
`path` (`string`) optional
Path to the role and policy. See [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) for more information.
**Default value:** `"/"`
`permissions_boundary` (`string`) optional
ARN of the policy that is used to set the permissions boundary for the role
**Default value:** `""`
`policy_description` (`string`) optional
The description of the IAM policy that is visible in the IAM policy manager
**Default value:** `""`
`policy_documents` (`list(string)`) optional
List of JSON IAM policy documents
**Default value:** `[ ]`
`policy_name` (`string`) optional
The name of the IAM policy that is visible in the IAM policy manager
**Default value:** `null`
`policy_statements` optional
Map of IAM policy statements (YAML-friendly structure) where the key is the statement ID (sid).
All statements will be combined into a single policy document with version "2012-10-17".
This policy document will be merged with policy_documents.
Each statement must have 'effect' and either 'actions' or 'not_actions'.
**Type:**
```hcl
map(object({
  effect        = string
  actions       = optional(list(string))
  not_actions   = optional(list(string))
  resources     = optional(any)
  not_resources = optional(any)
  principal     = optional(any)
  not_principal = optional(any)
  condition     = optional(any)
}))
```
**Default value:** `{ }`
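For illustration, a hypothetical component instance using `policy_statements` (the map keys become statement SIDs, and all statements are combined into a single policy document, as described above); names and actions are placeholders:

```yaml
components:
  terraform:
    s3-reader/iam-role:
      metadata:
        component: iam-role
      vars:
        name: s3-reader
        role_description: "Example role with inline policy statements"
        principals:
          Service:
            - ec2.amazonaws.com
        policy_statements:
          AllowS3Read:   # key becomes the statement SID
            effect: "Allow"
            actions:
              - "s3:GetObject"
              - "s3:ListBucket"
            resources:
              - "*"
```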
`principals` (`map(list(string))`) optional
Map of service name as key and a list of ARNs to allow assuming the role as value (e.g. map(`AWS`, list(`arn:aws:iam:::role/admin`)))
**Default value:** `{ }`
`service_account_name` (`string`) optional
The name of the Kubernetes service account allowed to assume this role.
Use '*' to allow any service account in the namespace.
Defaults to module.this.name if not specified.
**Default value:** `null`
`service_account_namespace` (`string`) optional
The Kubernetes namespace of the service account allowed to assume this role.
Defaults to module.this.name if not specified.
**Default value:** `null`
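A hedged IRSA sketch combining the EKS inputs above (the provider ARN, namespace, and names are placeholders, not values from this reference):

```yaml
components:
  terraform:
    my-app/iam-role:
      metadata:
        component: iam-role
      vars:
        name: my-app
        role_description: "IRSA role for the my-app service account"
        eks_oidc_provider_enabled: true
        # Placeholder ARN; use your cluster's OIDC provider
        eks_oidc_provider_arn: "arn:aws:iam::111111111111:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
        service_account_namespace: "my-app"
        service_account_name: "my-app"
        managed_policy_arns:
          - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```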
`trusted_github_org` (`string`) optional
The GitHub organization unqualified repos are assumed to belong to. Keeps `*` from meaning all orgs and all repos.
**Default value:** `""`
`trusted_github_repos` (`list(string)`) optional
A list of GitHub repositories allowed to access this role.
Format is either "orgName/repoName" or just "repoName",
in which case "cloudposse" will be used for the "orgName".
Wildcard ("*") is allowed for "repoName".
**Default value:** `[ ]`
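Taken together, the GitHub OIDC inputs might be wired up as in this sketch (the organization, repositories, and provider ARN are hypothetical):

```yaml
components:
  terraform:
    gha/iam-role:
      metadata:
        component: iam-role
      vars:
        name: gha
        role_description: "Role assumable by GitHub Actions via OIDC"
        # Placeholder ARN; reference the OIDC provider created for your account
        github_oidc_provider_arn: "arn:aws:iam::111111111111:oidc-provider/token.actions.githubusercontent.com"
        trusted_github_org: "acme"
        trusted_github_repos:
          - "infrastructure"   # unqualified; assumed to belong to the trusted org
          - "acme/*"           # wildcard allowed for the repo name
        managed_policy_arns:
          - arn:aws:iam::aws:policy/ReadOnlyAccess
```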
`use_fullname` (`bool`) optional
If set to 'true' then the full ID for the IAM role name (e.g. `[var.namespace]-[var.environment]-[var.stage]`) will be used.
Otherwise, `var.name` will be used for the IAM role name.
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`eks_assume_role_policy`
JSON encoded string representing the EKS OIDC "Assume Role" policy
`eks_oidc_provider_arn`
ARN of the EKS OIDC provider (pass-through for reference)
`eks_service_account_subject`
The service account subject claim used in the trust policy
`github_assume_role_policy`
JSON encoded string representing the "Assume Role" policy configured by the inputs
`role`
IAM role module outputs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`role` | 0.23.0 | [`cloudposse/iam-role/aws`](https://registry.terraform.io/modules/cloudposse/iam-role/aws/0.23.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.assume_role_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.eks_oidc_provider_assume`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.github_oidc_provider_assume`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## iam-service-linked-roles
This component is responsible for provisioning
[IAM Service-Linked Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
## Usage
**Stack Level**: Global
```yaml
components:
  terraform:
    iam-service-linked-roles:
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        service_linked_roles:
          spot_amazonaws_com:
            aws_service_name: "spot.amazonaws.com"
            description: "AWSServiceRoleForEC2Spot Service-Linked Role for EC2 Spot"
          spotfleet_amazonaws_com:
            aws_service_name: "spotfleet.amazonaws.com"
            description: "AWSServiceRoleForEC2SpotFleet Service-Linked Role for EC2 Spot Fleet"
```
## Service-Linked Roles for EC2 Spot and EC2 Spot Fleet
If you want to use EC2 Spot or Spot Fleet, you will need to provision the following Service-Linked Roles:
- Service-Linked Role for EC2 Spot
- Service-Linked Role for EC2 Spot Fleet
This is only necessary if this is the first time you're using EC2 Spot and Spot Fleet in the account.
Note that if the Service-Linked Roles already exist in the AWS account (for example, if you used EC2 Spot or Spot Fleet before), and
you try to provision them again, you will see errors like the following:
```text
An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation:
Service role name AWSServiceRoleForEC2Spot has been taken in this account, please try a different suffix
An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation:
Service role name AWSServiceRoleForEC2SpotFleet has been taken in this account, please try a different suffix
```
## Variables
### Required Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`service_linked_roles`
Provisioned Service-Linked roles
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 3.0, < 6.0.0`
### Providers
- `aws`, version: `>= 3.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_service_linked_role.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_service_linked_role) (resource)
## Data Sources
The following data sources are used by this module:
---
## identity-center
This component is responsible for creating [AWS SSO Permission Sets][1] and creating AWS SSO Account Assignments, that
is, assigning IdP (Okta) groups and/or users to AWS SSO permission sets in specific AWS Accounts.
This component assumes that AWS SSO has already been enabled via the AWS Console (there is currently no Terraform or
AWS CLI support for this) and that the IdP has been configured to sync users and groups to AWS SSO.
## Usage
### Migration from v1.x
Version 2.0.0 introduces breaking changes. See [src/MIGRATION.md](https://github.com/cloudposse-terraform-components/aws-identity-center/tree/main/identity-center/src/MIGRATION.md) for detailed migration instructions.
Key changes:
- Removed `aws.root` provider - deploy directly to root account instead of delegating to identity
- Removed `sso_account_assignments_root` module
- Moved policy files (`policy-TerraformUpdateAccess.tf`, `policy-Identity-role-TeamAccess.tf`) to mixins
- Added static account map support via `account_map_enabled` variable
### Static Account Map Support
This component supports two modes for account ID resolution:
- **`account_map_enabled = true`** (default): Uses the `account-map` component to look up account IDs dynamically via remote state
- **`account_map_enabled = false`**: Uses the static `account_map` variable, eliminating the dependency on the `account-map` component
For most deployments, we recommend using static account mappings (`account_map_enabled: false`) for simplicity.
### ClickOps
1. Go to root admin account
1. Select primary region
1. Go to AWS SSO
1. Enable AWS SSO
#### Delegation no longer recommended
Previously, Cloud Posse recommended delegating SSO to the identity account by following the next 2 steps:
1. Click Settings > Management
1. Delegate Identity as an administrator. This can take up to 30 minutes to take effect.
However, this is no longer recommended. Because the delegated SSO administrator cannot make changes in the `root`
account and this component needs to be able to make changes in the `root` account, any purported security advantage
achieved by delegating SSO to the `identity` account is lost.
Nevertheless, it is also not worth the effort to remove the delegation. If you have already delegated SSO to the
`identity` account, continue on, leaving the stack configuration in the `gbl-identity` stack rather than the currently
recommended `gbl-root` stack.
### Google Workspace
:::important
> Your identity source is currently configured as 'External identity provider'. To add new groups or edit their
> memberships, you must do this using your external identity provider.
Groups _cannot_ be created with ClickOps in the AWS console and instead must be created via the AWS API.
:::
Google Workspace is now supported by AWS Identity Center, but Group creation is not automatically handled. After
[configuring SAML and SCIM with Google Workspace and IAM Identity Center following the AWS documentation](https://docs.aws.amazon.com/singlesignon/latest/userguide/gs-gwp.html),
add any Group name to `var.groups` to create the Group with Terraform. Once the setup steps as described in the AWS
documentation have been completed and the Groups are created with Terraform, Users should automatically populate each
created Group.
```yaml
components:
  terraform:
    aws-sso:
      vars:
        groups:
          - "Developers"
          - "Dev Ops"
```
### Atmos
**Stack Level**: Global
**Deployment**: Must be deployed by root-admin using the `atmos` CLI
Add catalog to `gbl-root` root stack.
#### `account_assignments`
The `account_assignments` setting configures access to permission sets for users and groups in accounts, in the
following structure:
```yaml
<account-name>:
  groups:
    <group-name>:
      permission_sets:
        - <permission-set-name>
  users:
    <user-name>:
      permission_sets:
        - <permission-set-name>
```
- The account names (a.k.a. "stages") must already be configured via the `accounts` component.
- The user and group names must already exist in AWS SSO. Usually this is accomplished by configuring them in Okta and
syncing Okta with AWS SSO.
- The permission sets are defined (by convention) in files named `policy-<NAME>.tf` in the `aws-sso`
  component. The definition includes the name of the permission set. See
  `components/terraform/aws-sso/policy-AdministratorAccess.tf` for an example.
#### `identity_roles_accessible` (via mixin)
> **Note:** This feature has been moved to a mixin in v2.0.0. To use it, vendor the `policy-Identity-role-TeamAccess.tf` mixin.
The `aws_teams_accessible` variable (when using the mixin) provides a list of role names corresponding to roles created in the
`iam-primary-roles` component. For each named role, a corresponding permission set will be created which allows the user
to assume that role. The permission set name is generated in Terraform from the role name using a statement like this one:
```hcl
format("Identity%sTeamAccess", replace(title(replace(team, "_", "-")), "-", ""))
```
See [mixins/README.md](https://github.com/cloudposse-terraform-components/aws-identity-center/tree/main/identity-center/mixins/README.md) for details on vendoring this mixin.
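For illustration only, the same transformation can be sketched in Python (a hypothetical helper mirroring the Terraform expression above, not part of the component):

```python
import re

def team_permission_set_name(team: str) -> str:
    """Mirror of the Terraform expression
    format("Identity%sTeamAccess", replace(title(replace(team, "_", "-")), "-", "")).
    Terraform's title() capitalizes the letter following any non-letter,
    so "dev_ops" becomes "Dev-Ops" before the hyphens are stripped."""
    hyphenated = team.replace("_", "-")
    titled = re.sub(r"(^|[^A-Za-z])([a-z])",
                    lambda m: m.group(1) + m.group(2).upper(), hyphenated)
    return "Identity{}TeamAccess".format(titled.replace("-", ""))

print(team_permission_set_name("dev_ops"))  # IdentityDevOpsTeamAccess
```

So a team named `dev_ops` yields a permission set named `IdentityDevOpsTeamAccess`.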
### Defining a new permission set
1. Give the permission set a name, capitalized, in CamelCase, e.g. `AuditManager`. We will use `NAME` as a placeholder
for the name in the instructions below. In Terraform, convert the name to lowercase snake case, e.g. `audit_manager`.
2. Create a file in the `aws-sso` directory with the name `policy-NAME.tf`.
3. In that file, create a policy as follows:
```hcl
data "aws_iam_policy_document" "NAME" {
  # Define the custom policy here
}

locals {
  NAME_permission_set = { # e.g. audit_manager_permission_set
    name             = "NAME", # e.g. AuditManager
    description      = "",
    relay_state      = "",
    session_duration = "PT1H", # One hour, the maximum allowed for chained assumed roles
    tags             = {},
    inline_policy    = data.aws_iam_policy_document.NAME.json,
    policy_attachments                  = [] # ARNs of AWS managed IAM policies to attach, e.g. arn:aws:iam::aws:policy/ReadOnlyAccess
    customer_managed_policy_attachments = [] # ARNs of customer managed IAM policies to attach
  }
}
```
4. Create a file named `additional-permission-sets-list_override.tf` in the `aws-sso` directory (if it does not already
exist). This is a [terraform override file](https://developer.hashicorp.com/terraform/language/files/override),
meaning its contents will be merged with the main terraform file, and any locals defined in it will override locals
defined in other files. Having your code in this separate override file makes it possible for the component to
provide a placeholder local variable so that it works without customization, while allowing you to customize the
component and still update it without losing your customizations.
5. In that file, redefine the local variable `overridable_additional_permission_sets` as follows:
```hcl
locals {
  overridable_additional_permission_sets = [
    local.NAME_permission_set,
  ]
}
```
If you have multiple custom policies, add each one to the list.
6. With that done, the new permission set will be created when the changes are applied. You can then use it just like
the others.
7. If you want the permission set to be able to use Terraform, enable access to the Terraform state read/write (default)
role in `tfstate-backend`.
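The naming convention in step 1 can be sketched as follows (a hypothetical Python helper illustrating the convention, not part of the component):

```python
import re

def tf_local_name(permission_set_name: str) -> str:
    """Convert a CamelCase permission set name (e.g. "AuditManager") into the
    lowercase snake case used for its Terraform local (e.g. "audit_manager")."""
    # Insert an underscore before each interior uppercase letter, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", permission_set_name).lower()

print(tf_local_name("AuditManager"))  # audit_manager
```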
### Using Mixins
Mixins provide a way to extend the component with additional permission sets without modifying the core component code.
This makes it easier to keep your components up-to-date with upstream changes while maintaining custom functionality.
#### Available Mixins
This component provides several mixins in the [`mixins/`](https://github.com/cloudposse-terraform-components/aws-identity-center/tree/main/identity-center/mixins) directory:
- **`provider-root.tf`** - AWS root provider alias for migration scenarios (v1.x to v2.x upgrades)
- **`policy-TerraformUpdateAccess.tf`** - Permission set for Terraform state access
- **`policy-Identity-role-TeamAccess.tf`** - Permission sets for team role assumption
- **`policy-PartnerCentral.tf`** - AWS Partner Central permission sets for AWS Partner Network (APN) integration
See the [mixins/README.md](https://github.com/cloudposse-terraform-components/aws-identity-center/tree/main/identity-center/mixins/README.md) for a complete list of available mixins and detailed documentation.
#### Vendoring Mixins
**Option 1: Via component.yaml (Recommended)**
Add the mixin to your component's `component.yaml` file:
```yaml
# components/terraform/aws-sso/component.yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
spec:
  source:
    uri: github.com/cloudposse-terraform-components/aws-identity-center.git//src?ref={{ .Version }}
    version: 1.0.0
    included_paths:
      - "**/**"
    excluded_paths: []
  # Mixins are pulled and merged into your component directory
  mixins:
    - uri: github.com/cloudposse-terraform-components/aws-identity-center.git//mixins/policy-PartnerCentral.tf?ref={{ .Version }}
      version: 1.0.0
      filename: policy-PartnerCentral.tf
```
**Option 2: Via vendor.yaml**
Use a centralized `vendor.yaml` file:
```yaml
# vendor.yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
spec:
  sources:
    - component: "terraform/aws-sso"
      source: "github.com/cloudposse-terraform-components/aws-identity-center.git//src?ref={{ .Version }}"
      version: "1.0.0"
      targets:
        - "components/terraform/aws-sso"
  mixins:
    - source: "github.com/cloudposse-terraform-components/aws-identity-center.git//mixins/policy-PartnerCentral.tf?ref={{ .Version }}"
      version: "1.0.0"
      filename: "policy-PartnerCentral.tf"
```
Then run:
```bash
atmos vendor pull -c aws-sso
```
#### Activating Vendored Permission Sets
After vendoring a mixin, include the permission sets in your component by updating `additional-permission-sets_override.tf`:
```hcl
# components/terraform/aws-sso/additional-permission-sets_override.tf
locals {
  # Add custom permission sets.
  # Mixins define local variables (e.g., local.partner_central_permission_sets)
  # that you concatenate into this list.
  overridable_additional_permission_sets = concat(
    local.partner_central_permission_sets, # From policy-PartnerCentral.tf mixin
    # Add other permission set locals here as needed
    # local.custom_permission_sets,
  )
}
}
```
Each mixin defines a local variable containing its permission sets. For example, `policy-PartnerCentral.tf` defines
`local.partner_central_permission_sets` with 8 permission sets for AWS Partner Central.
#### Creating Custom Mixins
You can create your own mixin files following this pattern:
```hcl
# components/terraform/aws-sso/policy-CustomRole.tf
locals {
  custom_permission_sets = [
    {
      name                                = "MyCustomRole"
      description                         = "Description of the role"
      relay_state                         = ""
      session_duration                    = ""
      tags                                = {}
      inline_policy                       = ""
      policy_attachments                  = ["arn:${local.aws_partition}:iam::aws:policy/CustomPolicy"]
      customer_managed_policy_attachments = []
    },
  ]
}
```
Then reference it in `additional-permission-sets_override.tf`:
```hcl
locals {
  overridable_additional_permission_sets = concat(
    local.custom_permission_sets,
    local.partner_central_permission_sets,
  )
}
```
For more details, see [mixins/README.md](https://github.com/cloudposse-terraform-components/aws-identity-center/tree/main/identity-center/mixins/README.md).
#### Basic Example
The basic example shows how to configure the component with static account mappings (recommended for most deployments):
```yaml
components:
  terraform:
    aws-sso:
      vars:
        enabled: true
        account_map_enabled: false
        account_map:
          full_account_map:
            core-root: "111111111111"
            core-audit: "222222222222"
            plat-dev: "333333333333"
            plat-staging: "444444444444"
            plat-prod: "555555555555"
          root_account_account_name: "core-root"
        account_assignments:
          core-root:
            groups:
              "Administrators":
                permission_sets:
                  - AdministratorAccess
                  - TerraformApplyAccess
          plat-dev:
            groups:
              "Developers":
                permission_sets:
                  - AdministratorAccess
                  - ReadOnlyAccess
          plat-prod:
            groups:
              "Developers":
                permission_sets:
                  - ReadOnlyAccess
```
#### Advanced Example with YAML Anchors
The example snippet below shows how to use this module with YAML Anchors for reusable configurations:
```yaml
prod-cloud-engineers: &prod-cloud-engineers
  Production Cloud Infrastructure Engineers:
    permission_sets:
      - AdministratorAccess
      - ReadOnlyAccess

components:
  terraform:
    aws-sso:
      vars:
        enabled: true
        account_map_enabled: false
        account_map:
          full_account_map:
            core-root: "111111111111"
            core-audit: "222222222222"
            plat-dev: "333333333333"
            plat-prod: "444444444444"
          root_account_account_name: "core-root"
        account_assignments:
          core-audit:
            groups:
              <<: *prod-cloud-engineers
              Production Cloud Engineers:
                permission_sets:
                  - ReadOnlyAccess
          plat-prod:
            groups:
              Administrators:
                permission_sets:
                  - AdministratorAccess
                  - ReadOnlyAccess
              Developers:
                permission_sets:
                  - ReadOnlyAccess
          plat-dev:
            groups:
              Administrators:
                permission_sets:
                  - AdministratorAccess
                  - ReadOnlyAccess
              Developers:
                permission_sets:
                  - AdministratorAccess
                  - ReadOnlyAccess
```
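The `<<: *prod-cloud-engineers` merge key is standard YAML, so you can check what an anchor expands to outside of Atmos. A minimal sketch using PyYAML (assumed installed; the document fragment below is simplified):

```python
import yaml  # PyYAML; assumed available

doc = """
defaults: &prod-cloud-engineers
  Production Cloud Infrastructure Engineers:
    permission_sets:
      - AdministratorAccess
      - ReadOnlyAccess
groups:
  <<: *prod-cloud-engineers
  Production Cloud Engineers:
    permission_sets:
      - ReadOnlyAccess
"""

# The merge key copies the anchored mapping's entries into `groups`.
parsed = yaml.safe_load(doc)
print(sorted(parsed["groups"]))
# ['Production Cloud Engineers', 'Production Cloud Infrastructure Engineers']
```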
[1]: https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html
[2]: #requirement%5C_terraform
[3]: #requirement%5C_aws
[4]: #requirement%5C_external
[5]: #requirement%5C_local
[6]: #requirement%5C_template
[7]: #requirement%5C_utils
[8]: #provider%5C_aws
[9]: #module%5C_account%5C_map
[10]: #module%5C_permission%5C_sets
[11]: #module%5C_role%5C_prefix
[12]: #module%5C_sso%5C_account%5C_assignments
[13]: #module%5C_this
[14]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document
[15]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document
[16]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document
[17]: #input%5C_account%5C_assignments
[18]: #input%5C_additional%5C_tag%5C_map
[19]: #input%5C_attributes
[20]: #input%5C_context
[21]: #input%5C_delimiter
[22]: #input%5C_enabled
[23]: #input%5C_environment
[24]: #input%5C_global%5C_environment%5C_name
[25]: #input%5C_iam%5C_primary%5C_roles%5C_stage%5C_name
[26]: #input%5C_id%5C_length%5C_limit
[27]: #input%5C_identity%5C_roles%5C_accessible
[28]: #input%5C_label%5C_key%5C_case
[29]: #input%5C_label%5C_order
[30]: #input%5C_label%5C_value%5C_case
[31]: #input%5C_name
[32]: #input%5C_namespace
[33]: #input%5C_privileged
[34]: #input%5C_regex%5C_replace%5C_chars
[35]: #input%5C_region
[36]: #input%5C_root%5C_account%5C_stage%5C_name
[37]: #input%5C_stage
[38]: #input%5C_tags
[39]: https://github.com/cloudposse/terraform-aws-sso
[40]: https://cpco.io/component
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_assignments` optional
Enables access to permission sets for users and groups in accounts, in the following structure:
```yaml
<account-name>:
  groups:
    <group-name>:
      permission_sets:
        - <permission-set-name>
  users:
    <user-name>:
      permission_sets:
        - <permission-set-name>
```
**Type:**
```hcl
map(map(map(object({
  permission_sets = list(string)
}))))
```
**Default value:** `{ }`
`account_map` optional
Map of account names (tenant-stage format) to account IDs. Used to verify we're targeting the correct AWS account. Optional attributes support component-specific functionality (e.g., audit_account_account_name for cloudtrail, root_account_account_name for aws-sso).
**Type:**
```hcl
object({
full_account_map = map(string)
audit_account_account_name = optional(string, "")
root_account_account_name = optional(string, "")
identity_account_account_name = optional(string, "")
aws_partition = optional(string, "aws")
iam_role_arn_templates = optional(map(string), {})
})
```
**Default value:**
```hcl
{
  "audit_account_account_name": "",
  "aws_partition": "aws",
  "full_account_map": {},
  "iam_role_arn_templates": {},
  "identity_account_account_name": "",
  "root_account_account_name": ""
}
```
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_enabled` (`bool`) optional
When true, uses the account-map component to look up account IDs dynamically.
When false, uses the static account_map variable instead. Set to false when
using Atmos Auth profiles and static account mappings.
**Default value:** `false`
`groups` (`list(string)`) optional
List of AWS Identity Center Groups to be created with the AWS API.
When provisioning the Google Workspace integration with AWS, Groups need to be created via the API in order for automatic provisioning to work as intended.
**Default value:** `[ ]`
`idp_groups` (`list(string)`) optional
List of IdP group names to look up and include in the group_ids output.
These groups are managed by your Identity Provider (e.g., Google Workspace, Okta)
and synced to AWS Identity Center. This allows referencing their IDs in other components.
**Default value:** `[ ]`
`session_duration` (`string`) optional
The default duration of the session in seconds for all permission sets. If not set, falls back to the module default of 1 hour.
**Default value:** `""`
`tf_access_additional_backends` optional
Map of additional Terraform state backends to grant SSO permission sets access to.
Each entry creates three permission sets: TerraformPlanAccess-<key>, TerraformApplyAccess-<key>, and TerraformStateAccess-<key>.
The map key should be a descriptive name for the backend (e.g., "core", "plat", "prod").
This key will be title-cased and appended to the permission set names with a hyphen.
Example:
```
tf_access_additional_backends = {
  core = {
    bucket_arn         = "arn:aws:s3:::example-core-tfstate"
    dynamodb_table_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/example-core-tfstate-lock"
    role_arn           = "arn:aws:iam::123456789012:role/example-core-gbl-root-tfstate"
  }
  plat = {
    bucket_arn = "arn:aws:s3:::example-plat-tfstate"
    role_arn   = "arn:aws:iam::123456789012:role/example-plat-gbl-root-tfstate"
  }
}
```
**Type:**
```hcl
map(object({
  bucket_arn         = string
  dynamodb_table_arn = optional(string, "")
  role_arn           = string
}))
```
**Default value:** `{ }`
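The naming convention described above can be sketched as follows (a hypothetical Python helper mirroring the documented behavior, not the module source):

```python
def backend_permission_set_names(key: str) -> list[str]:
    """Title-case the backend map key and append it, hyphen-separated,
    to each of the three generated permission set names."""
    suffix = key.title()
    return [f"{prefix}-{suffix}" for prefix in
            ("TerraformPlanAccess", "TerraformApplyAccess", "TerraformStateAccess")]

print(backend_permission_set_names("core"))
# ['TerraformPlanAccess-Core', 'TerraformApplyAccess-Core', 'TerraformStateAccess-Core']
```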
`tf_access_bucket_arn` (`string`) optional
The ARN of the S3 bucket for the Terraform state backend.
**Default value:** `""`
`tf_access_dynamodb_table_arn` (`string`) optional
The ARN of the DynamoDB table for the Terraform state backend.
**Default value:** `""`
`tf_access_role_arn` (`string`) optional
The ARN of the IAM role for accessing the Terraform state backend.
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`group_ids`
Group IDs for Identity Center (includes both manually created and IdP-synced groups)
`permission_sets`
Permission sets
`sso_account_assignments`
SSO account assignments
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | Remote state lookup for the account-map component (or fallback to static mapping). When account_map_enabled is true: - Performs remote state lookup to retrieve account mappings from the account-map component - Uses global tenant/environment/stage from iam_roles module for the lookup When account_map_enabled is false: - Bypasses the remote state lookup (bypass = true) - Returns the static account_map variable as defaults instead - Allows the component to function without the account-map dependency
`iam_roles` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | Dummy module to satisfy references from optional mixin files that use module.iam_roles (e.g. remote-state.tf). When using account-map, vendor the v1-providers.tf mixin which replaces this with the real iam-roles module.
`permission_sets` | 1.2.0 | [`cloudposse/sso/aws//modules/permission-sets`](https://registry.terraform.io/modules/cloudposse/sso/aws/modules/permission-sets/1.2.0) | n/a
`sso_account_assignments` | 1.2.0 | [`cloudposse/sso/aws//modules/account-assignments`](https://registry.terraform.io/modules/cloudposse/sso/aws/modules/account-assignments/1.2.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_identitystore_group.manual`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/identitystore_group) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.dns_administrator_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.eks_read_only`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.terraform_apply_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.terraform_apply_access_additional`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.terraform_plan_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.terraform_plan_access_additional`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.terraform_state_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.terraform_state_access_additional`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_identitystore_group.idp`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/identitystore_group) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_ssoadmin_instances.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssoadmin_instances) (data source)
---
## ipam
This component is responsible for provisioning IPAM per region in a centralized account.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
ipam:
vars:
enabled: true
top_cidr: [10.96.0.0/11]
pool_configurations:
core:
cidr: [10.96.0.0/12]
locale: us-east-2
sub_pools:
network:
cidr: [10.96.0.0/16]
ram_share_accounts: [core-network]
auto:
cidr: [10.97.0.0/16]
ram_share_accounts: [core-auto]
corp:
cidr: [10.98.0.0/16]
ram_share_accounts: [core-corp]
plat:
cidr: [10.112.0.0/12]
locale: us-east-2
sub_pools:
dev:
cidr: [10.112.0.0/16]
ram_share_accounts: [plat-dev]
staging:
cidr: [10.113.0.0/16]
ram_share_accounts: [plat-staging]
prod:
cidr: [10.114.0.0/16]
ram_share_accounts: [plat-prod]
sandbox:
cidr: [10.115.0.0/16]
ram_share_accounts: [plat-sandbox]
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`ipam_scope_id` (`string`) optional
Required if `var.ipam_id` is set. Determines which scope to deploy pools into.
**Default value:** `null`
`ipam_scope_type` (`string`) optional
Which scope type to use. Valid values are `public` or `private`. Alternatively, you can provide your own scope ID.
**Default value:** `"private"`
`pool_configurations` (`any`) optional
A multi-level, nested map describing nested IPAM pools. Can nest up to three levels, with the top level being outside the `pool_configurations`. This attribute is quite complex; see the component README for further explanation.
**Default value:** `{ }`
`top_auto_import` (`bool`) optional
`auto_import` setting for top-level pool.
**Default value:** `null`
`top_cidr_authorization_context` (`any`) optional
A signed document that proves that you are authorized to bring the specified IP address range to Amazon using BYOIP. Document is not stored in the state file. For more information, refer to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_ipam_pool_cidr#cidr_authorization_context.
**Default value:** `null`
`top_description` (`string`) optional
Description of top-level pool.
**Default value:** `""`
`top_ram_share_principals` (`list(string)`) optional
Principals to create RAM shares for the top-level pool.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
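As a concrete illustration (a hypothetical sketch; the descriptor name and labels are illustrative), a `descriptor_formats` entry that builds an `account_name` descriptor from the `tenant` and `stage` labels could look like:
```hcl
descriptor_formats = {
  # Yields a descriptor such as "acme-prod" in the `descriptors` output
  account_name = {
    format = "%v-%v"
    labels = ["tenant", "stage"]
  }
}
```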
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`pool_configurations`
Pool configurations
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`ipam` | 2.1.2 | [`aws-ia/ipam/aws`](https://registry.terraform.io/modules/aws-ia/ipam/aws/2.1.2) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_region.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) (data source)
---
## kinesis-firehose-stream
This component provisions a Kinesis Firehose delivery stream; at this time it supports CloudWatch-to-S3 delivery. It enables you to stream logs from EKS CloudWatch to an S3 bucket for long-term storage and analysis.
## Usage
**Stack Level**: Regional
Here's an example of how to set up a Firehose stream to capture EKS CloudWatch logs and deliver them to an S3 bucket:
```yaml
components:
terraform:
# First, ensure you have the required dependencies:
eks/cluster:
vars:
name: eks-cluster
# ... other EKS cluster configuration
eks/cloudwatch:
vars:
name: eks-cloudwatch
# ... other CloudWatch configuration
s3-bucket/cloudwatch:
vars:
name: cloudwatch-logs-bucket
# ... other S3 bucket configuration
# Then configure the Firehose stream:
kinesis-firehose-stream/basic:
metadata:
component: kinesis-firehose-stream
vars:
name: cloudwatch-logs
# Source CloudWatch component name
source_cloudwatch_component_name: eks/cloudwatch
# Destination S3 bucket component name
destination_bucket_component_name: s3-bucket/cloudwatch
# Optional: Enable encryption for the Firehose stream
encryption_enabled: true
```
This configuration will:
1. Create a Kinesis Firehose delivery stream
2. Configure it to receive logs from the specified EKS CloudWatch component
3. Deliver the logs to the specified S3 bucket
4. Optionally enable encryption for the stream
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`source_cloudwatch_component_name` (`string`) optional
The name of the component that provides the source CloudWatch log group.
**Default value:** `"eks/cloudwatch"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`kinesis_firehose_stream_arn`
The ARN of the Kinesis Firehose stream
`kinesis_firehose_stream_id`
The ID of the Kinesis Firehose stream
`kinesis_firehose_stream_name`
The name of the Kinesis Firehose stream
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Providers
- `aws`, version: `>= 4.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cloudwatch` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`cloudwatch_subscription_role` | 0.22.0 | [`cloudposse/iam-role/aws`](https://registry.terraform.io/modules/cloudposse/iam-role/aws/0.22.0) | n/a
`firehose_role` | 0.22.0 | [`cloudposse/iam-role/aws`](https://registry.terraform.io/modules/cloudposse/iam-role/aws/0.22.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`s3_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`stream_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudwatch_log_subscription_filter.firehose_delivery`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_subscription_filter) (resource)
- [`aws_kinesis_firehose_delivery_stream.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kinesis_firehose_delivery_stream) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.cloudwatch_to_firehose`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.firehose_to_s3`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_kms_alias.s3`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/kms_alias) (data source)
---
## kinesis-stream
This component is responsible for provisioning an Amazon Kinesis data stream.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
`stacks/catalog/kinesis-stream/defaults.yaml` file (base component for all kinesis deployments with default settings):
```yaml
components:
terraform:
kinesis-stream/defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
tags:
Team: sre
Service: kinesis-stream
```
```yaml
import:
- catalog/kinesis-stream/defaults
components:
terraform:
kinesis-example:
metadata:
component: kinesis-stream
inherits:
- kinesis-stream/defaults
vars:
name: kinesis-stream-example
stream_mode: ON_DEMAND
# shard_count: 2 # This does nothing if `stream_mode` is set to `ON_DEMAND`
kms_key_id: "alias/aws/kinesis"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`consumer_count` (`number`) optional
Number of consumers to register with the Kinesis stream.
**Default value:** `0`
`encryption_type` (`string`) optional
The encryption type to use. Acceptable values are `NONE` and `KMS`.
**Default value:** `"KMS"`
`enforce_consumer_deletion` (`bool`) optional
Forcefully delete stream consumers before destroying the stream.
**Default value:** `true`
`kms_key_id` (`string`) optional
The name of the KMS key to use for encryption.
**Default value:** `"alias/aws/kinesis"`
`retention_period` (`number`) optional
Length of time data records are accessible after they are added to the stream. Minimum value is 24 hours; maximum value is 168 hours.
**Default value:** `24`
`shard_count` (`number`) optional
The number of shards to provision for the stream.
**Default value:** `1`
`shard_level_metrics` (`list(string)`) optional
A list of shard-level CloudWatch metrics to enable for the stream.
**Default value:**
```hcl
[
"IncomingBytes",
"OutgoingBytes"
]
```
`stream_mode` (`string`) optional
Specifies the capacity mode of the stream. Must be either `PROVISIONED` or `ON_DEMAND`.
**Default value:** `null`
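For contrast with the `ON_DEMAND` example above, a provisioned-capacity stream sets `stream_mode` to `PROVISIONED`, in which case `shard_count` takes effect (an illustrative stack snippet; the component instance and stream names are placeholders):
```yaml
components:
  terraform:
    kinesis-provisioned-example:
      metadata:
        component: kinesis-stream
      vars:
        name: kinesis-stream-provisioned
        stream_mode: PROVISIONED
        shard_count: 2 # Honored only when stream_mode is PROVISIONED
```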
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`name`
Name of the Kinesis stream.
`shard_count`
Number of shards provisioned.
`stream_arn`
ARN of the Kinesis stream.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kinesis` | 0.4.0 | [`cloudposse/kinesis-stream/aws`](https://registry.terraform.io/modules/cloudposse/kinesis-stream/aws/0.4.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## kms
This component is responsible for provisioning a KMS Key.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
kms:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`alias` (`string`) optional
The display name of the alias. The name must start with the word `alias` followed by a forward slash. If not specified, the alias name will be auto-generated.
**Default value:** `null`
`allowed_principal_arns` (`list(string)`) optional
List of AWS principal ARNs allowed to assume the role.
**Default value:** `[ ]`
`allowed_roles` (`map(list(string))`) optional
Map of account:[role, role...] specifying roles allowed to assume the role.
Roles are symbolic names like `ops` or `terraform`. Use `*` as role for entire account.
**Default value:** `{ }`
`customer_master_key_spec` (`string`) optional
Specifies whether the key contains a symmetric key or an asymmetric key pair and the encryption algorithms or signing algorithms that the key supports. Valid values: `SYMMETRIC_DEFAULT`, `RSA_2048`, `RSA_3072`, `RSA_4096`, `ECC_NIST_P256`, `ECC_NIST_P384`, `ECC_NIST_P521`, or `ECC_SECG_P256K1`.
**Default value:** `"SYMMETRIC_DEFAULT"`
`deletion_window_in_days` (`number`) optional
Duration in days after which the key is deleted after destruction of the resource
**Default value:** `10`
`description` (`string`) optional
The description for the KMS Key.
**Default value:** `"Parameter Store KMS master key"`
`enable_key_rotation` (`bool`) optional
Specifies whether key rotation is enabled
**Default value:** `true`
`key_usage` (`string`) optional
Specifies the intended use of the key. Valid values: `ENCRYPT_DECRYPT` or `SIGN_VERIFY`.
**Default value:** `"ENCRYPT_DECRYPT"`
`multi_region` (`bool`) optional
Indicates whether the KMS key is a multi-Region (true) or regional (false) key.
**Default value:** `false`
`policy` (`string`) optional
A valid KMS policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy.
**Default value:** `""`
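As noted above, a vague-but-valid policy can cause perpetual diffs, so prefer the verbose form. A hedged sketch of passing a specific key policy via the `policy` variable (the statement shown is the standard root-account statement; the account ID is a placeholder):

```yaml
components:
  terraform:
    kms:
      vars:
        enabled: true
        policy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Sid": "EnableRootAccountAccess",
                "Effect": "Allow",
                "Principal": {
                  "AWS": "arn:aws:iam::111111111111:root"
                },
                "Action": "kms:*",
                "Resource": "*"
              }
            ]
          }
```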
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`kms_key`
Output for KMS module
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`allowed_role_map` | v1.537.1 | [`github.com/cloudposse-terraform-components/aws-account-map//src/modules/roles-to-principals`](https://registry.terraform.io/modules/github.com/cloudposse-terraform-components/aws-account-map/src/modules/roles-to-principals/v1.537.1) | n/a
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`kms_key` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.key_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## lakeformation
This component is responsible for provisioning Amazon Lake Formation resources.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
`stacks/catalog/lakeformation/defaults.yaml` file (base component for all lakeformation deployments with default
settings):
```yaml
components:
terraform:
lakeformation/defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
tags:
Team: sre
Service: lakeformation
```
```yaml
import:
- catalog/lakeformation/defaults
components:
terraform:
lakeformation-example:
metadata:
component: lakeformation
inherits:
- lakeformation/defaults
vars:
enabled: true
name: lakeformation-example
s3_bucket_arn: arn:aws:s3:::some-test-bucket
create_service_linked_role: true
admin_arn_list:
- arn:aws:iam::012345678912:role/my-admin-role
lf_tags:
left: ["test1", "test2"]
right: ["test3", "test4"]
resources:
database:
name: example_db_1
tags:
left: test1
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`s3_bucket_arn` (`string`) required
Amazon Resource Name (ARN) of the Lake Formation resource, an S3 path.
### Optional Variables
`admin_arn_list` (`list(string)`) optional
(Optional) Set of ARNs of AWS Lake Formation principals (IAM users or roles).
**Default value:** `[ ]`
`catalog_id` (`string`) optional
(Optional) Identifier for the Data Catalog. If not provided, the account ID will be used.
**Default value:** `null`
`create_service_linked_role` (`bool`) optional
Set to 'true' to create service-linked role for Lake Formation (can only be done once!)
**Default value:** `false`
(Optional) Up to three configuration blocks of principal permissions for default create database permissions.
**Default value:** `[ ]`
`lf_tags` (`map(list(string))`) optional
A map of key-value pairs to be used as Lake Formation tags.
**Default value:** `{ }`
`resources` (`map(any)`) optional
A map of Lake Formation resources to create, with related attributes.
**Default value:** `{ }`
`role_arn` (`string`) optional
(Optional) Role that has read/write access to the Lake Formation resource. If not provided, the Lake Formation service-linked role must exist and is used.
**Default value:** `null`
`trusted_resource_owners` (`list(string)`) optional
(Optional) List of the resource-owning account IDs that the caller's account can use to share their user access details (user ARNs).
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`lf_tags`
List of LF tags created.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`lakeformation` | 1.0.0 | [`cloudposse/lakeformation/aws`](https://registry.terraform.io/modules/cloudposse/lakeformation/aws/1.0.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_service_linked_role.lakeformation`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_service_linked_role) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_role.lakeformation`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_role) (data source)
---
## lambda
This component is responsible for provisioning Lambda functions.
## Usage
**Stack Level**: Regional
Stack configuration for defaults:
```yaml
components:
terraform:
lambda-defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
```
Sample App Yaml Entry:
```yaml
import:
- catalog/lambda/defaults
components:
terraform:
lambda/hello-world-py:
metadata:
component: lambda
inherits:
- lambda/defaults
vars:
name: hello-world-py
function_name: main
description: Hello Lambda from Python!
handler: lambda.lambda_handler # in go this is the compiled binary, python it's filename.function
memory_size: 256
# https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html
runtime: python3.9
package_type: Zip # `Zip` or `Image`
policy_json: |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListAllBuckets",
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
}
]
}
iam_policy:
statements:
- sid: AllowSQSWorkerWriteAccess
effect: Allow
actions:
- sqs:SendMessage
- sqs:SendMessageBatch
resources:
- arn:aws:sqs:*:111111111111:worker-queue
# Filename example
filename: lambdas/hello-world-python/output.zip # generated by zip variable.
zip:
enabled: true
input_dir: hello-world-python
output: hello-world-python/output.zip
# S3 Source Example
# s3_bucket_name: lambda-source # lambda main.tf calculates the rest of the bucket_name
# s3_key: hello-world-go.zip
```
### Notifications
#### SQS
```yaml
sqs_notifications:
my-service-a:
sqs_component:
component: sqs-queue/my-service-a
my-service-b:
sqs_arn: arn:aws:sqs:us-west-2:111111111111:my-service-b
```
#### S3
```yaml
s3_notifications:
my-service-a:
bucket_component:
component: s3-bucket/my-service-a
events: ["s3:ObjectCreated:*"]
my-service-b:
bucket_name: my-service-b
events: ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
```
#### Cron (CloudWatch Event)
```yaml
cloudwatch_event_rules:
schedule-a:
schedule_expression: "rate(5 minutes)"
schedule-b:
schedule_expression: "cron(0 20 * * ? *)"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`architectures` (`list(string)`) optional
Instruction set architecture for your Lambda function. Valid values are ["x86_64"] and ["arm64"].
Default is ["x86_64"]. If this attribute is removed, the function's architecture stays the same.
**Default value:** `null`
`cicd_s3_key_format` (`string`) optional
The format of the S3 key to store the latest version/sha of the Lambda function. This is used with cicd_ssm_param_name. Defaults to `stage/{stage}/lambda/{function_name}/%s.zip`
**Default value:** `null`
`cicd_ssm_param_name` (`string`) optional
The name of the SSM parameter to store the latest version/sha of the Lambda function. This is used with cicd_s3_key_format
**Default value:** `null`
`cloudwatch_event_rules` optional
Creates EventBridge (CloudWatch Events) rules for invoking the Lambda Function along with the required permissions.
**Type:**
```hcl
map(object({
description = optional(string)
event_bus_name = optional(string)
event_pattern = optional(string)
is_enabled = optional(bool)
name_prefix = optional(string)
role_arn = optional(string)
schedule_expression = optional(string)
}))
```
**Default value:** `{ }`
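The cron examples later in this page use `schedule_expression`; rules can instead match events with `event_pattern`. A hypothetical entry (the rule name and pattern are illustrative):

```yaml
vars:
  cloudwatch_event_rules:
    on-ec2-state-change:
      description: Invoke the function when an EC2 instance changes state
      event_pattern: |
        {
          "source": ["aws.ec2"],
          "detail-type": ["EC2 Instance State-change Notification"]
        }
```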
`cloudwatch_logs_retention_in_days` (`number`) optional
Specifies the number of days you want to retain log events in the specified log group. Possible values are:
1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653, and 0. If you select 0, the events in the
log group are always retained and never expire.
**Default value:** `null`
`custom_iam_policy_arns` (`set(string)`) optional
ARNs of IAM policies to be attached to the Lambda role
**Default value:** `[ ]`
`dead_letter_config_target_arn` (`string`) optional
ARN of an SNS topic or SQS queue to notify when an invocation fails. If this option is used, the function's IAM role
must be granted suitable access to write to the target object, which means allowing either the sns:Publish or
sqs:SendMessage action on this ARN, depending on which service is targeted.
**Default value:** `null`
`description` (`string`) optional
Description of what the Lambda Function does.
**Default value:** `null`
`filename` (`string`) optional
The path to the function's deployment package within the local filesystem. Works well with the `zip` variable. If defined, the `s3_`-prefixed options and `image_uri` cannot be used.
**Default value:** `null`
`function_name` (`string`) optional
Unique name for the Lambda Function.
**Default value:** `null`
`function_url_enabled` (`bool`) optional
Create an `aws_lambda_function_url` resource to expose the Lambda function
**Default value:** `false`
`handler` (`string`) optional
The function entrypoint in your code.
**Default value:** `null`
`iam_policy` optional
IAM policy as list of Terraform objects, compatible with Terraform `aws_iam_policy_document` data source
except that `source_policy_documents` and `override_policy_documents` are not included.
Use inputs `iam_source_policy_documents` and `iam_override_policy_documents` for that.
**Type:**
```hcl
list(object({
policy_id = optional(string, null)
version = optional(string, null)
statements = list(object({
sid = optional(string, null)
effect = optional(string, null)
actions = optional(list(string), null)
not_actions = optional(list(string), null)
resources = optional(list(string), null)
not_resources = optional(list(string), null)
conditions = optional(list(object({
test = string
variable = string
values = list(string)
})), [])
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
not_principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
}))
}))
```
**Default value:** `[ ]`
`iam_policy_description` (`string`) optional
Description of the IAM policy for the Lambda IAM role
**Default value:** `"Minimum SSM read permissions for Lambda IAM Role"`
`image_config` (`any`) optional
The Lambda OCI [image configurations](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function#image_config)
block with three (optional) arguments:
- *entry_point* - The ENTRYPOINT for the docker image (type `list(string)`).
- *command* - The CMD for the docker image (type `list(string)`).
- *working_directory* - The working directory for the docker image (type `string`).
**Default value:** `{ }`
`image_uri` (`string`) optional
The ECR image URI containing the function's deployment package. Conflicts with `filename`, `s3_bucket_name`, `s3_key`, and `s3_object_version`.
**Default value:** `null`
`kms_key_arn` (`string`) optional
Amazon Resource Name (ARN) of the AWS Key Management Service (KMS) key that is used to encrypt environment variables.
If this configuration is not provided when environment variables are in use, AWS Lambda uses a default service key.
If this configuration is provided when environment variables are not in use, the AWS Lambda API does not save this
configuration and Terraform will show a perpetual difference of adding the key. To fix the perpetual difference,
remove this configuration.
**Default value:** `""`
`lambda_at_edge_enabled` (`bool`) optional
Enable Lambda@Edge for your Node.js or Python functions. The required trust relationship and publishing of function versions will be configured in this module.
**Default value:** `false`
`lambda_environment` optional
Environment (e.g. ENV variables) configuration for the Lambda function enables you to dynamically pass settings to your function code and libraries.
**Type:**
```hcl
object({
variables = map(string)
})
```
**Default value:** `null`
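Matching the object type above, a hypothetical `lambda_environment` entry (variable names and values are illustrative):

```yaml
vars:
  lambda_environment:
    variables:
      LOG_LEVEL: info
      STAGE: dev
```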
`layers` (`list(string)`) optional
List of Lambda Layer Version ARNs (maximum of 5) to attach to the Lambda Function.
**Default value:** `[ ]`
`memory_size` (`number`) optional
Amount of memory in MB the Lambda Function can use at runtime.
**Default value:** `128`
`package_type` (`string`) optional
The Lambda deployment package type. Valid values are `Zip` and `Image`.
**Default value:** `"Zip"`
`permissions_boundary` (`string`) optional
ARN of the policy that is used to set the permissions boundary for the role
**Default value:** `""`
`policy_json` (`string`) optional
IAM policy to attach to the Lambda role, specified as JSON. This can be used with or instead of `var.iam_policy`.
**Default value:** `null`
`publish` (`bool`) optional
Whether to publish creation/change as new Lambda Function Version.
**Default value:** `false`
`reserved_concurrent_executions` (`number`) optional
The amount of reserved concurrent executions for this lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations.
**Default value:** `-1`
`runtime` (`string`) optional
The runtime environment for the Lambda function you are uploading.
**Default value:** `null`
`s3_bucket_component` optional
The bucket component to use for the S3 bucket containing the function's deployment package. Conflicts with `s3_bucket_name`, `filename` and `image_uri`.
**Type:**
```hcl
object({
component = string
tenant = optional(string)
stage = optional(string)
environment = optional(string)
})
```
**Default value:** `null`
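A hedged sketch of sourcing the deployment package from a bucket provisioned by another component (the component name `s3-bucket/lambda-artifacts` is hypothetical), combined with the documented `s3_key` variable:

```yaml
vars:
  s3_bucket_component:
    component: s3-bucket/lambda-artifacts  # hypothetical bucket component instance
    stage: prod
  s3_key: hello-world-go.zip
```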
`s3_bucket_name` (`string`) optional
The name suffix of the S3 bucket containing the function's deployment package. Conflicts with filename and image_uri.
This bucket must reside in the same AWS region where you are creating the Lambda function.
**Default value:** `null`
`s3_key` (`string`) optional
The S3 key of an object containing the function's deployment package. Conflicts with filename and image_uri.
**Default value:** `null`
`s3_object_version` (`string`) optional
The object version containing the function's deployment package. Conflicts with filename and image_uri.
**Default value:** `null`
`source_code_hash` (`string`) optional
Used to trigger updates. Must be set to a base64-encoded SHA256 hash of the package file specified with either
filename or s3_key. The usual way to set this is filebase64sha256('file.zip') where 'file.zip' is the local filename
of the lambda function source archive.
**Default value:** `""`
`ssm_parameter_names` (`list(string)`) optional
List of AWS Systems Manager Parameter Store parameter names. The IAM role of this Lambda function will be enhanced
with read permissions for those parameters. Parameters must start with a forward slash and can be encrypted with the
default KMS key.
**Default value:** `null`
`timeout` (`number`) optional
The amount of time the Lambda Function has to run in seconds.
**Default value:** `3`
`tracing_config_mode` (`string`) optional
Tracing config mode of the Lambda function. Can be either PassThrough or Active.
**Default value:** `null`
`vpc_config` optional
Provide this to allow your function to access your VPC (if both 'subnet_ids' and 'security_group_ids' are empty then
vpc_config is considered to be empty or unset, see https://docs.aws.amazon.com/lambda/latest/dg/vpc.html for details).
**Type:**
```hcl
object({
security_group_ids = list(string)
subnet_ids = list(string)
})
```
**Default value:** `null`
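Matching the object type above, a hypothetical `vpc_config` placing the function in two subnets behind one security group (the IDs are placeholders):

```yaml
vars:
  vpc_config:
    security_group_ids:
      - sg-0123456789abcdef0
    subnet_ids:
      - subnet-0123456789abcdef0
      - subnet-0fedcba9876543210
```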
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
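Conceptually, the `id` is formed by joining the non-empty labels in `label_order` with the `delimiter`. A minimal Python sketch of that behavior (simplified; it ignores case transforms, `regex_replace_chars`, and `id_length_limit`):

```python
def build_id(labels, delimiter="-"):
    """Join non-empty ID elements in order, as null-label does for `id`."""
    return delimiter.join(v for v in labels if v)

# namespace, environment, stage, name
print(build_id(["eg", "ue1", "prod", "app"]))  # -> eg-ue1-prod-app
```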
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
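The effect of the default `regex_replace_chars` pattern can be sketched in Python. This is a local illustration of the default pattern only, not the module's implementation:

```python
import re

def normalize(label: str) -> str:
    # Default pattern: strip everything except letters, digits, and hyphens.
    return re.sub(r"[^a-zA-Z0-9-]", "", label)

print(normalize("my_app.v2"))  # -> myappv2
```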
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
ARN of the lambda function
`function_name`
Lambda function name
`invoke_arn`
Invoke ARN of the lambda function
`qualified_arn`
ARN identifying your Lambda Function Version (if versioning is enabled via `publish = true`)
`role_arn`
Lambda IAM role ARN
`role_name`
Lambda IAM role name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `archive`, version: `>= 2.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `random`, version: `>= 3.0.0`
### Providers
- `archive`, version: `>= 2.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `random`, version: `>= 3.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cloudwatch_event_rules_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`iam_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`lambda` | 0.6.1 | [`cloudposse/lambda-function/aws`](https://registry.terraform.io/modules/cloudposse/lambda-function/aws/0.6.1) | n/a
`s3_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`s3_bucket_notifications_component` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`sqs_iam_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`sqs_queue` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudwatch_event_rule.event_rules`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_rule) (resource)
- [`aws_cloudwatch_event_target.sns`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target) (resource)
- [`aws_iam_role_policy_attachment.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_iam_role_policy_attachment.sqs_default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_lambda_event_source_mapping.event_source_mapping`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_event_source_mapping) (resource)
- [`aws_lambda_function_url.lambda_url`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function_url) (resource)
- [`aws_lambda_permission.allow_cloudwatch_to_call_lambda`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_permission) (resource)
- [`aws_lambda_permission.s3_notification`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_permission) (resource)
- [`aws_s3_bucket_notification.s3_notifications`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_notification) (resource)
- [`random_pet.zip_recreator`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
## Data Sources
The following data sources are used by this module:
- [`archive_file.lambdazip`](https://registry.terraform.io/providers/hashicorp/archive/latest/docs/data-sources/file) (data source)
- [`aws_ssm_parameter.cicd_ssm_param`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## macie
This component is responsible for configuring Macie within an AWS Organization.
Amazon Macie is a data security service that discovers sensitive data by using machine learning and pattern matching,
provides visibility into data security risks, and enables automated protection against those risks.
To help you manage the security posture of your organization's Amazon Simple Storage Service (Amazon S3) data estate,
Macie provides you with an inventory of your S3 buckets, and automatically evaluates and monitors the buckets for
security and access control. If Macie detects a potential issue with the security or privacy of your data, such as a
bucket that becomes publicly accessible, Macie generates a finding for you to review and remediate as necessary.
Macie also automates discovery and reporting of sensitive data to provide you with a better understanding of the data
that your organization stores in Amazon S3. To detect sensitive data, you can use built-in criteria and techniques that
Macie provides, custom criteria that you define, or a combination of the two. If Macie detects sensitive data in an S3
object, Macie generates a finding to notify you of the sensitive data that Macie found.
In addition to findings, Macie provides statistics and other data that offer insight into the security posture of your
Amazon S3 data, and where sensitive data might reside in your data estate. The statistics and data can guide your
decisions to perform deeper investigations of specific S3 buckets and objects. You can review and analyze findings,
statistics, and other data by using the Amazon Macie console or the Amazon Macie API. You can also leverage Macie
integration with Amazon EventBridge and AWS Security Hub to monitor, process, and remediate findings by using other
services, applications, and systems.
## Usage
**Stack Level**: Regional
## Deployment Overview
This component uses the **delegated administrator** deployment model, which requires a **3-step deployment process**.
The component must be deployed multiple times with different variables to configure the AWS Organization.
Macie follows the same deployment pattern as GuardDuty and Security Hub.
In the examples below, we assume that the AWS Organization Management account is `root` and the AWS Organization
Delegated Administrator account is `security`, both in the `core` tenant.
### Architecture
```text
┌─────────────────────────────────────────────────────────────────────────────┐
│ AWS Organization │
│ │
│ ┌────────────────────────────────────────────────────────────────────────┐ │
│ │ Organization Management Account (root) │ │
│ │ STEP 2: Delegate Macie administration to security account │ │
│ │ - Creates: aws_macie2_organization_admin_account │ │
│ │ - Requires: SuperAdmin permissions (privileged: true) │ │
│ └────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ Delegation │
│ ┌────────────────────────────────────────────────────────────────────────┐ │
│ │ Delegated Administrator Account (security) │ │
│ │ STEP 1 (FIRST): Create Macie account (admin_delegated: false) │ │
│ │ - Creates: aws_macie2_account │ │
│ │ │ │
│ │ STEP 3 (LAST): Configure org settings (admin_delegated: true) │ │
│ │ - Creates: awsutils_macie2_organization_settings │ │
│ │ - Enables member accounts, configures finding publishing │ │
│ └────────────────────────────────────────────────────────────────────────┘ │
│ ▲ │
│ │ Findings │
│ ┌────────────────────────────────────────────────────────────────────────┐ │
│ │ Member Accounts (all other accounts) │ │
│ │ - Automatically enabled by delegated administrator │ │
│ │ - S3 buckets automatically inventoried and monitored │ │
│ │ - Findings sent to security account │ │
│ └────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
```
### Deployment Steps Summary
| Step | Account | Variable | Resources Created |
|------|---------|----------|-------------------|
| 1 | Security | `admin_delegated: false` | `aws_macie2_account` |
| 2 | Root | `privileged: true` | `aws_macie2_organization_admin_account` |
| 3 | Security | `admin_delegated: true` | `awsutils_macie2_organization_settings` |
### Step 1: Deploy to Delegated Administrator Account (FIRST)
First, the component is deployed to the
[Delegated Administrator](https://docs.aws.amazon.com/macie/latest/user/accounts-mgmt-ao-integrate.html) account to
create the Macie account. This **must be done before** the root account delegates administration.
```yaml
# core-ue1-security
components:
terraform:
macie/delegated-administrator:
metadata:
component: macie
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
# Not yet delegated - creates Macie account only
admin_delegated: false
```
```bash
atmos terraform apply macie/delegated-administrator -s core-ue1-security
```
### Step 2: Deploy to Organization Management (root) Account
Next, the component is deployed to the AWS Organization Management account to delegate Macie administration
to the security account.
Note that you need `SuperAdmin` permissions as we are deploying to the AWS Organization Management account. Since we are
using the `SuperAdmin` user, it will already have access to the state bucket, so we set the `role_arn` of the backend
config to null and set `var.privileged` to `true`.
```yaml
# core-ue1-root
components:
terraform:
macie/root:
metadata:
component: macie
backend:
s3:
role_arn: null
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
privileged: true
```
```bash
atmos terraform apply macie/root -s core-ue1-root
```
### Step 3: Deploy Organization Settings in Delegated Administrator Account (LAST)
Finally, the component is deployed to the Delegated Administrator Account again to create the organization-wide
configuration. Set `var.admin_delegated` to `true` to indicate that the delegation has been completed.
```yaml
# core-ue1-security
components:
terraform:
macie/org-settings:
metadata:
component: macie
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
admin_delegated: true
```
```bash
atmos terraform apply macie/org-settings -s core-ue1-security
```
### Multi-Region Deployment
Macie is a **regional service**. Deploy to each region where you have S3 buckets to monitor:
```bash
# Deploy to us-east-1 (all 3 steps)
atmos terraform apply macie/delegated-administrator -s core-ue1-security
atmos terraform apply macie/root -s core-ue1-root
atmos terraform apply macie/org-settings -s core-ue1-security
# Deploy to us-west-2 (all 3 steps)
atmos terraform apply macie/delegated-administrator -s core-uw2-security
atmos terraform apply macie/root -s core-uw2-root
atmos terraform apply macie/org-settings -s core-uw2-security
```
## Key Features
- **Sensitive Data Discovery**: Automatically discovers PII, financial data, credentials, and other sensitive
information in S3 using machine learning and pattern matching
- **S3 Bucket Inventory**: Provides comprehensive inventory of S3 buckets with security and access control evaluation
- **Policy Findings**: Detects security issues like publicly accessible buckets, disabled encryption, external sharing
- **Sensitive Data Findings**: Reports discovered sensitive data including location and data type
- **Security Hub Integration**: Findings published to AWS Security Hub for centralized security management
- **EventBridge Integration**: Findings published to EventBridge for automated remediation workflows
- **Multi-account Coverage**: Monitors S3 data across all accounts in the AWS Organization
## Finding Publishing Frequency
The `finding_publishing_frequency` variable controls how often Macie publishes findings to Security Hub and EventBridge:
| Value | Description |
|-------|-------------|
| `FIFTEEN_MINUTES` | Publish every 15 minutes (default, recommended) |
| `ONE_HOUR` | Publish every hour |
| `SIX_HOURS` | Publish every 6 hours |
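For example, to reduce publishing to hourly, set the variable in the stack configuration (shown here against the Step 3 `macie/org-settings` stack from above):

```yaml
components:
  terraform:
    macie/org-settings:
      metadata:
        component: macie
      vars:
        finding_publishing_frequency: ONE_HOUR
```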
## Prerequisites
Before deploying this component:
1. **AWS Organizations** must be configured with the `macie.amazonaws.com` service access principal enabled
2. **account-map** component must be deployed to identify security and root accounts
3. **Security Hub** (recommended) should be deployed to receive findings
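If trusted access has not yet been enabled, it can be turned on from the Organization Management account with the AWS CLI (run with credentials for the management account, and verify against your organization's policies first):

```shell
# Enable trusted access so Macie can operate across the organization
aws organizations enable-aws-service-access \
  --service-principal macie.amazonaws.com

# Confirm the principal is now enabled
aws organizations list-aws-service-access-for-organization
```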
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_map_tenant` (`string`) optional
The tenant where the `account_map` component required by remote-state is deployed
**Default value:** `"core"`
`admin_delegated` (`bool`) optional
A flag to indicate if the AWS Organization-wide settings should be created. This can only be done after the Macie
Administrator account has already been delegated from the AWS Org Management account (usually 'root'). See the
Deployment section of the README for more information.
**Default value:** `false`
`finding_publishing_frequency` (`string`) optional
Specifies how often to publish updates to policy findings for the account. This includes publishing updates to AWS
Security Hub and Amazon EventBridge (formerly called Amazon CloudWatch Events). Valid values: `FIFTEEN_MINUTES`,
`ONE_HOUR`, or `SIX_HOURS`.
**Default value:** `"FIFTEEN_MINUTES"`
`global_environment` (`string`) optional
Global environment name
**Default value:** `"gbl"`
`member_accounts` (`list(string)`) optional
List of member account names to enable Macie on
**Default value:** `[ ]`
`organization_management_account_name` (`string`) optional
The name of the AWS Organization management account
**Default value:** `null`
`privileged` (`bool`) optional
true if the default provider already has access to the backend
**Default value:** `false`
`root_account_stage` (`string`) optional
The stage name for the Organization root (management) account. This is used to lookup account IDs from account names
using the `account-map` component.
**Default value:** `"root"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`delegated_administrator_account_id`
The AWS Account ID of the AWS Organization delegated administrator account
`macie_account_id`
The ID of the Macie account created by the component
`macie_service_role_arn`
The Amazon Resource Name (ARN) of the service-linked role that allows Macie to monitor and analyze data in AWS resources for the account.
`member_account_ids`
The AWS Account IDs of the member accounts
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 5.0, < 6.0.0`
- `awsutils`, version: `>= 0.17.0, < 6.0.0`
### Providers
- `aws`, version: `>= 5.0, < 6.0.0`
- `awsutils`, version: `>= 0.17.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_macie2_account.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/macie2_account) (resource)
- [`aws_macie2_organization_admin_account.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/macie2_organization_admin_account) (resource)
- [`awsutils_macie2_organization_settings.this`](https://registry.terraform.io/providers/cloudposse/awsutils/latest/docs/resources/macie2_organization_settings) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
---
## api-key
This component provisions an API Key for an Amazon Managed Grafana workspace.
It is intended for use with the [Grafana Terraform provider](https://registry.terraform.io/providers/grafana/grafana/latest) in
other `managed-grafana` sub-components.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
grafana/api-key:
metadata:
component: managed-grafana/api-key
vars:
enabled: true
grafana_component_name: grafana
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`grafana_component_name` (`string`) optional
The name of the Grafana component
**Default value:** `"managed-grafana/workspace"`
`key_role` (`string`) optional
Specifies the permission level of the API key. Valid values are VIEWER, EDITOR, or ADMIN.
**Default value:** `"ADMIN"`
`minutes_to_live` (`number`) optional
Specifies the time in minutes until the API key expires. Keys can be valid for up to 30 days.
**Default value:** `43200` (30 days)
`ssm_path_format_api_key` (`string`) optional
The path in AWS SSM to the Grafana API Key provisioned with this component
**Default value:** `"/grafana/%s/api_key"`
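Once provisioned, the key can be read back from SSM. The path below assumes the default path format and a workspace component named `grafana` (both assumptions; check the `ssm_path_grafana_api_key` output for the actual path):

```shell
aws ssm get-parameter \
  --name /grafana/grafana/api_key \
  --with-decryption \
  --query Parameter.Value \
  --output text
```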
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ssm_path_grafana_api_key`
The path in AWS SSM to the Grafana API Key provisioned with this component
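Downstream components or scripts can read the key back from this SSM path. A minimal sketch of a consumer (the literal parameter name shown is illustrative; use the actual value of this output):
```hcl
# Illustrative consumer: read the Grafana API key from SSM.
# Replace the name with the actual value of `ssm_path_grafana_api_key`.
data "aws_ssm_parameter" "grafana_api_key" {
  name            = "/grafana/api_key"
  with_decryption = true
}

# The decrypted key is then available as
# data.aws_ssm_parameter.grafana_api_key.value
```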
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `time`, version: `>= 0.11.1`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `time`, version: `>= 0.11.1`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`managed_grafana` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`ssm_parameters` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_grafana_workspace_api_key.key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/grafana_workspace_api_key) (resource)
- [`time_rotating.ttl`](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/rotating) (resource)
- [`time_static.ttl`](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/static) (resource)
## Data Sources
The following data sources are used by this module:
---
## dashboard
This component is responsible for provisioning a dashboard in an Amazon Managed Grafana workspace.
## Usage
**Stack Level**: Regional
:::note
This component requires **OpenTofu 1.7+** or **Terraform 1.9+** for the `templatestring()` function.
Earlier versions will encounter errors like `Function not found: templatestring`.
:::
## Dashboard Configuration Methods
This component supports three mutually exclusive methods for providing dashboard configuration.
**Exactly one** of `dashboard_url`, `dashboard_file`, or `dashboard_yaml` must be set.
| Method | Use Case |
|--------|----------|
| `dashboard_url` | Load dashboards from remote URLs (e.g., Grafana marketplace) |
| `dashboard_file` | Load dashboards from local JSON files in the `dashboards/` directory |
| `dashboard_yaml` | Define dashboards inline in Atmos stack configuration using YAML |
### Loading a Dashboard from URL
Use `dashboard_url` to load a dashboard from a remote endpoint such as the Grafana marketplace.
```yaml
components:
terraform:
grafana/dashboard/prometheus:
metadata:
component: managed-grafana/dashboard
vars:
enabled: true
name: "prometheus-dashboard"
grafana_component_name: grafana
grafana_api_key_component_name: grafana/api-key
dashboard_url: "https://grafana.com/api/dashboards/315/revisions/3/download"
config_input:
"${DS_PROMETHEUS}": "acme-plat-ue2-sandbox-prometheus" # Input Value : Data source UID
```
### Loading a Dashboard from Local File
Use `dashboard_file` to load dashboards from local JSON files stored in the `dashboards/` directory within the component.
```yaml
components:
terraform:
grafana/dashboard/ecs:
metadata:
component: managed-grafana/dashboard
vars:
enabled: true
name: "ecs-dashboard"
grafana_component_name: grafana
grafana_api_key_component_name: grafana/api-key
dashboard_file: "ecs.json"
config_input:
"${DS_CLOUDWATCH}": "acme-plat-ue2-sandbox-cloudwatch"
```
### Defining a Dashboard with YAML (Atmos Native)
Use `dashboard_yaml` to define the dashboard configuration directly in your Atmos stack files. This is the
recommended Atmos-native approach as it enables:
- **Deep merging**: Compose dashboards from multiple stack layers
- **Inheritance**: Define base dashboard configurations and extend them
- **Atmos functions**: Use `!terraform.output`, `!terraform.state`, and other Atmos template functions
```yaml
components:
terraform:
grafana/dashboard/custom:
metadata:
component: managed-grafana/dashboard
vars:
enabled: true
name: "custom-dashboard"
grafana_component_name: grafana
grafana_api_key_component_name: grafana/api-key
dashboard_yaml:
annotations:
list:
- builtIn: 1
datasource:
type: grafana
uid: "-- Grafana --"
enable: true
hide: true
iconColor: "rgba(0, 211, 255, 1)"
name: Annotations & Alerts
type: dashboard
editable: true
fiscalYearStartMonth: 0
graphTooltip: 0
panels:
- datasource:
type: cloudwatch
uid: "${DS_CLOUDWATCH}"
fieldConfig:
defaults:
color:
mode: palette-classic
thresholds:
mode: absolute
steps:
- color: green
value: null
- color: red
value: 80
overrides: []
gridPos:
h: 8
w: 12
x: 0
y: 0
id: 1
options:
legend:
calcs: []
displayMode: list
placement: bottom
showLegend: true
tooltip:
mode: single
sort: none
targets:
- datasource:
type: cloudwatch
uid: "${DS_CLOUDWATCH}"
dimensions:
ClusterName: my-cluster
expression: ""
id: ""
matchExact: true
metricEditorMode: 0
metricName: CPUUtilization
metricQueryType: 0
namespace: AWS/ECS
period: ""
queryMode: Metrics
refId: A
region: default
statistic: Average
title: ECS CPU Utilization
type: timeseries
refresh: ""
schemaVersion: 39
templating:
list: []
time:
from: now-6h
to: now
timepicker: {}
timezone: browser
config_input:
"${DS_CLOUDWATCH}": "acme-plat-ue2-sandbox-cloudwatch"
```
#### Using Inheritance with `dashboard_yaml`
Define a base dashboard in your catalog and extend it in environment-specific stacks:
```yaml
# stacks/catalog/grafana/dashboards/base.yaml
components:
terraform:
grafana/dashboard/base:
metadata:
component: managed-grafana/dashboard
type: abstract
vars:
grafana_component_name: grafana
grafana_api_key_component_name: grafana/api-key
dashboard_yaml:
editable: true
schemaVersion: 39
time:
from: now-6h
to: now
timezone: browser
```
```yaml
# stacks/orgs/acme/plat/sandbox/us-east-2/grafana.yaml
import:
- catalog/grafana/dashboards/base
components:
terraform:
grafana/dashboard/ecs-metrics:
metadata:
component: managed-grafana/dashboard
inherits:
- grafana/dashboard/base
vars:
enabled: true
name: "ecs-metrics"
dashboard_yaml:
panels:
- title: ECS CPU
type: timeseries
# ... panel configuration
```
### Variable Substitution
The `config_input` variable accepts a map of string replacements. These are applied using the `templatestring()` function,
which replaces `${VAR}` placeholders in the dashboard configuration with the corresponding values. This works with all
three dashboard configuration methods.
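Conceptually, the substitution behaves like the sketch below, where `templatestring()` renders `${VAR}` placeholders from a map (the template and values here are illustrative, not the component's actual code):
```hcl
locals {
  # Marketplace dashboards embed placeholders such as ${DS_CLOUDWATCH}.
  # `$$` escapes the literal `${` so it survives into the template string.
  dashboard_template = "{\"datasource\": {\"uid\": \"$${DS_CLOUDWATCH}\"}}"
}

output "rendered_dashboard" {
  # Requires Terraform 1.9+ / OpenTofu 1.7+
  value = templatestring(local.dashboard_template, {
    DS_CLOUDWATCH = "acme-plat-ue2-sandbox-cloudwatch"
  })
}
```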
## Variables
### Required Variables
`dashboard_name` (`string`) required
The name to use for the dashboard. This must be unique.
`region` (`string`) required
AWS Region
### Optional Variables
`additional_config` (`map(any)`) optional
Additional dashboard configuration to be merged with the provided dashboard JSON
**Default value:** `{ }`
`config_input` (`map(string)`) optional
A map of string replacements used to supply input for the dashboard config JSON
**Default value:** `{ }`
`dashboard_file` (`string`) optional
Filename of a local dashboard JSON file in the component's dashboards directory. Must be a simple filename (no path separators). Exactly one of `dashboard_url`, `dashboard_file`, or `dashboard_yaml` must be set.
**Default value:** `""`
`dashboard_url` (`string`) optional
The marketplace URL of the dashboard to be created. Exactly one of `dashboard_url`, `dashboard_file`, or `dashboard_yaml` must be set.
**Default value:** `""`
`dashboard_yaml` (`any`) optional
Dashboard configuration defined as YAML/HCL in Atmos stack configuration. This allows defining dashboards inline using Atmos features like deep merging, inheritance, and Atmos functions. Exactly one of `dashboard_url`, `dashboard_file`, or `dashboard_yaml` must be set.
**Default value:** `null`
`grafana_api_key_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana API key
**Default value:** `"managed-grafana/api-key"`
`grafana_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana workspace
**Default value:** `"managed-grafana/workspace"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
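For example, a hypothetical `stack` descriptor built from three labels could be configured as:
```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}
# With tenant = "plat", environment = "ue2", stage = "dev", the
# `descriptors` output would contain stack = "plat-ue2-dev"
```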
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.7.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
- `http`, version: `>= 3.4.2`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
- `http`, version: `>= 3.4.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`config_json` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`grafana` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`grafana_api_key` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`grafana_dashboard.this`](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/dashboard) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.grafana_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`http_http.grafana_dashboard_json`](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) (data source)
---
## workspace
This component provisions an Amazon Managed Grafana workspace.
Amazon Managed Grafana is a fully managed service for Grafana, a popular open-source analytics platform that enables you
to query, visualize, and alert on your metrics, logs, and traces.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
grafana:
metadata:
component: managed-grafana/workspace
vars:
enabled: true
name: grafana
private_network_access_enabled: true
sso_role_associations:
- role: "ADMIN"
group_ids:
- "11111111-2222-3333-4444-555555555555"
# This grafana workspace will be allowed to assume the cross
# account access role from these prometheus components
prometheus_source_accounts:
- component: prometheus
tenant: plat
stage: sandbox
- component: prometheus
tenant: plat
stage: dev
```
:::note
We would prefer to have a custom URL for the provisioned Grafana workspace, but at the moment custom domains are not supported natively and implementing one would be non-trivial. We will continue to monitor the issue linked below and consider alternatives, such as using CloudFront.
[Issue #6: Support for Custom Domains](https://github.com/aws/amazon-managed-grafana-roadmap/issues/6)
:::
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
A list of IAM role ARNs that the Grafana workspace role should be allowed to assume. Use this for cross-account access to services like CloudWatch
**Default value:** `[ ]`
`private_network_access_enabled` (`bool`) optional
If set to `true`, enable the VPC Configuration to allow this workspace to access the private network using outputs from the vpc component
**Default value:** `false`
`prometheus_policy_enabled` (`bool`) optional
Set this to `true` to allow this Grafana workspace to access Amazon Managed Prometheus in this account
**Default value:** `false`
`prometheus_source_accounts` optional
A list of objects that describe an account where Amazon Managed Prometheus is deployed. This component grants this Grafana IAM role permission to assume the Prometheus access role in that target account. Use this for cross-account access
**Type:**
```hcl
list(object({
component = optional(string, "managed-prometheus/workspace")
stage = string
tenant = optional(string, "")
environment = optional(string, "")
}))
```
**Default value:** `[ ]`
`sso_role_associations` optional
A list of role to group ID list associations for granting Amazon Grafana access
**Type:**
```hcl
list(object({
role = string
group_ids = list(string)
}))
```
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`workspace_endpoint`
The returned URL of the Amazon Managed Grafana workspace
`workspace_id`
The ID of the Amazon Managed Grafana workspace
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`managed_grafana` | 0.5.0 | [`cloudposse/managed-grafana/aws`](https://registry.terraform.io/modules/cloudposse/managed-grafana/aws/0.5.0) | n/a
`prometheus` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`security_group` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_grafana_role_association.sso`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/grafana_role_association) (resource)
## Data Sources
The following data sources are used by this module:
---
## cloudwatch
This component creates a CloudWatch data source in Amazon Managed Grafana.
It enables querying CloudWatch logs and metrics from Grafana dashboards, with support for cross-account access.
## Usage
**Stack Level**: Regional (deployed to `core-auto`, where the Grafana workspace exists)
This component creates a CloudWatch data source in Grafana for querying AWS CloudWatch logs and metrics.
It supports cross-account access via IAM role assumption, allowing a central Grafana workspace to query
CloudWatch data from multiple AWS accounts.
### Prerequisites
- Amazon Managed Grafana workspace deployed via the `managed-grafana/workspace` component
- Grafana API key deployed via the `managed-grafana/api-key` component
- (Optional) IAM role in target account for cross-account access
### Example Configuration
```yaml
components:
terraform:
grafana/datasource/cloudwatch/defaults:
metadata:
component: managed-grafana/data-source/cloudwatch
type: abstract
vars:
enabled: true
grafana_component_name: grafana
grafana_api_key_component_name: grafana/api-key
grafana/datasource/cloudwatch/plat-dev:
metadata:
component: managed-grafana/data-source/cloudwatch
inherits:
- grafana/datasource/cloudwatch/defaults
vars:
name: plat-dev-cloudwatch
datasource_name: plat-dev-cloudwatch
assume_role_arn: !terraform.state iam-role/grafana-cloudwatch-access plat-use2-dev role.arn
```
### Cross-Account Access
To query CloudWatch data from another AWS account, create an IAM role in the target account
that trusts the Grafana workspace's account and has CloudWatch read permissions.
Then specify the role ARN in `assume_role_arn`.
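A minimal sketch of such a role in the target account (the account ID, role names, and the managed policy choice are illustrative assumptions, not values from this component):
```hcl
# Trust policy: allow the Grafana workspace's IAM role to assume this role
data "aws_iam_policy_document" "grafana_trust" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:role/grafana-workspace-role"]
    }
  }
}

resource "aws_iam_role" "grafana_cloudwatch_access" {
  name               = "grafana-cloudwatch-access"
  assume_role_policy = data.aws_iam_policy_document.grafana_trust.json
}

# Grant read-only access to CloudWatch metrics and logs
resource "aws_iam_role_policy_attachment" "cloudwatch_read" {
  role       = aws_iam_role.grafana_cloudwatch_access.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess"
}
```
The resulting role ARN is what you would pass to `assume_role_arn`.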
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`assume_role_arn` (`string`) optional
IAM Role ARN to assume for cross-account CloudWatch access. If empty, uses the Grafana workspace's IAM role.
**Default value:** `""`
`cloudwatch_account_id` (`string`) optional
AWS Account ID where CloudWatch logs are stored (the account to query)
**Default value:** `""`
`cloudwatch_region` (`string`) optional
AWS Region where CloudWatch logs are stored. Defaults to the component's region if not specified.
**Default value:** `""`
`datasource_name` (`string`) optional
Name for the CloudWatch data source in Grafana. Defaults to the component ID if not specified.
**Default value:** `""`
`default_log_groups` (`list(string)`) optional
List of default log groups to make available in Grafana
**Default value:** `[ ]`
`grafana_api_key_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana API key
**Default value:** `"managed-grafana/api-key"`
`grafana_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana workspace
**Default value:** `"managed-grafana/workspace"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
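As an illustrative sketch (descriptor name and labels are hypothetical), a `descriptor_formats` value that builds a `stack` descriptor from the `tenant` and `stage` labels might look like:

```hcl
descriptor_formats = {
  stack = {
    # Terraform format string applied to the labels below
    format = "%v-%v"
    # Labels are normalized the same way they appear in `id`
    labels = ["tenant", "stage"]
  }
}
```

With `tenant = "plat"` and `stage = "dev"`, the `descriptors` output map would then contain an entry like `stack = "plat-dev"`.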
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`id`
The full ID of this CloudWatch data source (orgId:uid)
`name`
The name of this CloudWatch data source
`uid`
The UID of this CloudWatch data source
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`grafana` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`grafana_api_key` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`grafana_data_source.cloudwatch`](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/data_source) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.grafana_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## loki (Loki)
This component is responsible for provisioning a Loki data source for an Amazon Managed Grafana workspace.
Use this component alongside the `eks/loki` component.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
grafana/datasource/defaults:
metadata:
component: managed-grafana/data-source/managed-prometheus
type: abstract
vars:
enabled: true
grafana_component_name: grafana
grafana_api_key_component_name: grafana/api-key
grafana/datasource/plat-sandbox-loki:
metadata:
component: managed-grafana/data-source/loki
inherits:
- grafana/datasource/defaults
vars:
name: plat-sandbox-loki
loki_tenant_name: plat
loki_stage_name: sandbox
grafana/datasource/plat-dev-loki:
metadata:
component: managed-grafana/data-source/loki
inherits:
- grafana/datasource/defaults
vars:
name: plat-dev-loki
loki_tenant_name: plat
loki_stage_name: dev
grafana/datasource/plat-prod-loki:
metadata:
component: managed-grafana/data-source/loki
inherits:
- grafana/datasource/defaults
vars:
name: plat-prod-loki
loki_tenant_name: plat
loki_stage_name: prod
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`grafana_api_key_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana API key
**Default value:** `"managed-grafana/api-key"`
`grafana_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana workspace
**Default value:** `"managed-grafana/workspace"`
`loki_component_name` (`string`) optional
The name of the loki component
**Default value:** `"eks/loki"`
`loki_environment_name` (`string`) optional
The environment where the loki component is deployed
**Default value:** `""`
`loki_stage_name` (`string`) optional
The stage where the loki component is deployed
**Default value:** `""`
`loki_tenant_name` (`string`) optional
The tenant where the loki component is deployed
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`uid`
The UID of this data source
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`grafana` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`grafana_api_key` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | v1.537.1 | `github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles` | n/a
`loki` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`source_account_role` | v1.537.1 | `github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`grafana_data_source.loki`](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/data_source) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.basic_auth_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.grafana_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## managed-prometheus
This component provisions an Amazon Managed Prometheus data source for an Amazon Managed Grafana workspace.
Use this component alongside the `managed-prometheus/workspace` component.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
grafana/datasource/defaults:
metadata:
component: managed-grafana/data-source/managed-prometheus
type: abstract
vars:
enabled: true
grafana_component_name: grafana
grafana_api_key_component_name: grafana/api-key
prometheus_component_name: prometheus
grafana/datasource/plat-sandbox-prometheus:
metadata:
component: managed-grafana/data-source/managed-prometheus
inherits:
- grafana/datasource/defaults
vars:
name: plat-sandbox-prometheus
prometheus_tenant_name: plat
prometheus_stage_name: sandbox
grafana/datasource/plat-dev-prometheus:
metadata:
component: managed-grafana/data-source/managed-prometheus
inherits:
- grafana/datasource/defaults
vars:
name: plat-dev-prometheus
prometheus_tenant_name: plat
prometheus_stage_name: dev
grafana/datasource/plat-prod-prometheus:
metadata:
component: managed-grafana/data-source/managed-prometheus
inherits:
- grafana/datasource/defaults
vars:
name: plat-prod-prometheus
prometheus_tenant_name: plat
prometheus_stage_name: prod
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`grafana_api_key_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana API key
**Default value:** `"managed-grafana/api-key"`
`grafana_component_name` (`string`) optional
The name of the component used to provision an Amazon Managed Grafana workspace
**Default value:** `"managed-grafana/workspace"`
`prometheus_component_name` (`string`) optional
The name of the Amazon Managed Prometheus component to be added as a Grafana data source
**Default value:** `"managed-prometheus/workspace"`
`prometheus_environment_name` (`string`) optional
The environment where the Amazon Managed Prometheus component is deployed
**Default value:** `""`
`prometheus_stage_name` (`string`) optional
The stage where the Amazon Managed Prometheus component is deployed
**Default value:** `""`
`prometheus_tenant_name` (`string`) optional
The tenant where the Amazon Managed Prometheus component is deployed
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`uid`
The UID of this data source
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `grafana`, version: `>= 2.18.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`grafana` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`grafana_api_key` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | v1.537.1 | `github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles` | n/a
`prometheus` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`grafana_data_source.managed_prometheus`](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/data_source) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.grafana_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## workspace (Workspace)
This component is responsible for provisioning a workspace for Amazon Managed Service for Prometheus, also known as
Amazon Managed Prometheus (AMP).
This component is intended to be deployed alongside Grafana. For example, use our `managed-grafana/workspace` component.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
We prefer to name the stack component with a simpler name, whereas the Terraform component should remain descriptive.
```yaml
components:
terraform:
prometheus:
metadata:
component: managed-prometheus/workspace
vars:
enabled: true
name: prometheus
# Create cross-account role for core-auto to access AMP
grafana_account_name: core-auto
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_map` optional
Static account map used when account_map_enabled is false.
Provides account name to account ID mapping without requiring the account-map component.
**Type:**
```hcl
object({
full_account_map = map(string)
audit_account_account_name = optional(string, "")
root_account_account_name = optional(string, "")
})
```
**Default value:**
```hcl
{
"audit_account_account_name": "",
"full_account_map": {},
"root_account_account_name": ""
}
```
`account_map_enabled` (`bool`) optional
When true, uses the account-map component to look up account IDs dynamically.
When false, uses the static account_map variable instead. Set to false when
using static account mappings without the account-map component.
**Default value:** `true`
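As a sketch of the static alternative (account names and IDs below are placeholders, not real values), you could disable the dynamic lookup and supply the mapping directly in the stack configuration:

```yaml
components:
  terraform:
    prometheus:
      metadata:
        component: managed-prometheus/workspace
      vars:
        enabled: true
        # Skip the account-map component lookup
        account_map_enabled: false
        # Provide the account name -> account ID mapping statically
        account_map:
          full_account_map:
            core-auto: "111111111111"
            plat-prod: "222222222222"
```

This is useful in environments that do not deploy the `account-map` component at all.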
`account_map_environment` (`string`) optional
The name of the environment where `account_map` is provisioned
**Default value:** `"gbl"`
`account_map_stage` (`string`) optional
The name of the stage where `account_map` is provisioned
**Default value:** `"root"`
`account_map_tenant` (`string`) optional
The name of the tenant where `account_map` is provisioned
**Default value:** `"core"`
`alert_manager_definition` (`string`) optional
The alert manager definition that you want to be applied.
**Default value:** `""`
`grafana_account_name` (`string`) optional
The name of the account allowed to access AMP in this account. If defined, this module will create a cross-account IAM role for accessing AMP. Use this for cross-account Grafana. If not defined, no roles will be created.
**Default value:** `""`
`rule_group_namespaces` optional
A list of name, data objects for each Amazon Managed Service for Prometheus (AMP) Rule Group Namespace
**Type:**
```hcl
list(object({
name = string
data = string
}))
```
**Default value:** `[ ]`
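For illustration (the namespace name, alert name, and PromQL expression are hypothetical), a `rule_group_namespaces` entry carrying a standard Prometheus alerting-rules file as its `data` string might look like:

```yaml
vars:
  rule_group_namespaces:
    - name: alerting-rules
      # `data` is a Prometheus rules file, passed through as a string
      data: |
        groups:
          - name: example
            rules:
              - alert: HighErrorRate
                expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.1
                for: 10m
```

Each entry becomes its own AMP rule group namespace, so rules can be grouped and updated independently.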
`vpc_endpoint_enabled` (`bool`) optional
If set to `true`, restricts traffic through a VPC endpoint
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`access_role_arn`
If enabled with `var.allowed_account_id`, the Role ARN used for accessing Amazon Managed Prometheus in this account
`id`
The ID of this component deployment
`workspace_arn`
The ARN of this Amazon Managed Prometheus workspace
`workspace_endpoint`
The endpoint URL of this Amazon Managed Prometheus workspace
`workspace_id`
The ID of this Amazon Managed Prometheus workspace
`workspace_region`
The region where this workspace is deployed
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`managed_prometheus` | 1.0.1 | [`cloudposse/managed-prometheus/aws`](https://registry.terraform.io/modules/cloudposse/managed-prometheus/aws/1.0.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## memorydb
This component provisions an AWS MemoryDB cluster. MemoryDB is a fully managed, Redis-compatible, in-memory database
service.
While Redis is commonly used as a cache, MemoryDB is designed to also function well as a
[vector database](https://docs.aws.amazon.com/memorydb/latest/devguide/vector-search.html). This makes it appropriate
for AI model backends.
## Usage
**Stack Level**: Regional
### Example
Here's an example snippet for how to use this component:
```yaml
components:
terraform:
vpc:
vars:
availability_zones:
- "a"
- "b"
- "c"
ipv4_primary_cidr_block: "10.111.0.0/18"
memorydb:
vars: {}
```
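The empty `vars: {}` above accepts all defaults. A more explicit configuration, using only variables documented below (values are illustrative), might look like:

```yaml
components:
  terraform:
    memorydb:
      vars:
        engine_version: "6.2"
        node_type: "db.r6g.large"
        num_shards: 2
        num_replicas_per_shard: 1
        tls_enabled: true
        snapshot_retention_limit: 7
```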
## Variables
### Required Variables
### Optional Variables
`admin_username` (`string`) optional
The username for the MemoryDB admin
**Default value:** `"admin"`
`auto_minor_version_upgrade` (`bool`) optional
Indicates that minor engine upgrades will be applied automatically to the cluster during the maintenance window
**Default value:** `true`
`engine_version` (`string`) optional
The version of the Redis engine to use
**Default value:** `"6.2"`
`maintenance_window` (`string`) optional
The weekly time range during which system maintenance can occur
**Default value:** `null`
`node_type` (`string`) optional
The compute and memory capacity of the nodes in the cluster
**Default value:** `"db.r6g.large"`
`num_replicas_per_shard` (`number`) optional
The number of replicas per shard
**Default value:** `1`
`num_shards` (`number`) optional
The number of shards in the cluster
**Default value:** `1`
`parameter_group_family` (`string`) optional
The name of the parameter group family
**Default value:** `"memorydb_redis6"`
`parameters` (`map(string)`) optional
Key-value mapping of parameters to apply to the parameter group
**Default value:** `{ }`
`port` (`number`) optional
The port on which the cluster accepts connections
**Default value:** `6379`
`security_group_ids` (`list(string)`) optional
List of security group IDs to associate with the MemoryDB cluster
**Default value:** `[ ]`
`snapshot_arns` (`list(string)`) optional
List of ARNs for the snapshots to be restored. NOTE: this destroys the existing cluster; use only when restoring from a snapshot.
**Default value:** `[ ]`
`snapshot_retention_limit` (`number`) optional
The number of days for which MemoryDB retains automatic snapshots before deleting them
**Default value:** `null`
`snapshot_window` (`string`) optional
The daily time range during which MemoryDB begins taking daily snapshots
**Default value:** `null`
`sns_topic_arn` (`string`) optional
The ARN of the SNS topic to send notifications to
**Default value:** `null`
`ssm_kms_key_id` (`string`) optional
The KMS key ID to use for SSM parameter encryption. If not specified, the default key will be used.
**Default value:** `null`
`ssm_parameter_name` (`string`) optional
The name of the SSM parameter to store the password in. If not specified, the password will be stored in `/{context.id}/admin_password`
**Default value:** `""`
`tls_enabled` (`bool`) optional
Indicates whether Transport Layer Security (TLS) encryption is enabled for the cluster
**Default value:** `true`
`vpc_component_name` (`string`) optional
The name of the VPC component. This is used to pick out subnets for the MemoryDB cluster
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`admin_acl_arn`
The ARN of the MemoryDB user's ACL
`admin_arn`
The ARN of the MemoryDB user
`admin_password_ssm_parameter_name`
The name of the SSM parameter storing the password for the MemoryDB user
`admin_username`
The username for the MemoryDB user
`arn`
The ARN of the MemoryDB cluster
`cluster_endpoint`
The endpoint of the MemoryDB cluster
`engine_patch_version`
The Redis engine version
`id`
The name of the MemoryDB cluster
`parameter_group_arn`
The ARN of the MemoryDB parameter group
`parameter_group_id`
The name of the MemoryDB parameter group
`shards`
The shard details for the MemoryDB cluster
`subnet_group_arn`
The ARN of the MemoryDB subnet group
`subnet_group_id`
The name of the MemoryDB subnet group
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 5.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`memorydb` | 0.2.0 | [`cloudposse/memorydb/aws`](https://registry.terraform.io/modules/cloudposse/memorydb/aws/0.2.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## mq-broker
This component is responsible for provisioning an Amazon MQ broker and the corresponding security group.
### Migrate `v1` to `v2`
The `EKS` component dependency has been removed. Instead of pulling security groups from the EKS remote state, pass them in via `var.allowed_security_groups`.
If you are using Atmos, see the [Atmos shared data](https://atmos.tools/core-concepts/share-data/) documentation.
```yaml
components:
terraform:
mq-broker:
vars:
enabled: true
apply_immediately: true
auto_minor_version_upgrade: true
deployment_mode: "ACTIVE_STANDBY_MULTI_AZ"
engine_type: "ActiveMQ"
engine_version: "5.15.14"
host_instance_type: "mq.t3.micro"
publicly_accessible: false
general_log_enabled: true
audit_log_enabled: true
encryption_enabled: true
use_aws_owned_key: true
allowed_security_groups:
- '{{ (atmos.Component "eks" .stack).outputs.eks_cluster_managed_security_group_id }}'
```
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
mq-broker:
vars:
enabled: true
apply_immediately: true
auto_minor_version_upgrade: true
deployment_mode: "ACTIVE_STANDBY_MULTI_AZ"
engine_type: "ActiveMQ"
engine_version: "5.15.14"
host_instance_type: "mq.t3.micro"
publicly_accessible: false
general_log_enabled: true
audit_log_enabled: true
encryption_enabled: true
use_aws_owned_key: true
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`allowed_cidr_blocks` (`list(string)`) optional
List of CIDR blocks that are allowed ingress to the broker's Security Group created in the module
**Default value:** `[ ]`
`allowed_security_groups` (`list(string)`) optional
List of security groups to be allowed to connect to the broker instance
**Default value:** `[ ]`
`apply_immediately` (`bool`) optional
Specifies whether any cluster modifications are applied immediately, or during the next maintenance window
**Default value:** `false`
`audit_log_enabled` (`bool`) optional
Enables audit logging. User management actions made using JMX or the ActiveMQ Web Console are logged
**Default value:** `true`
`auto_minor_version_upgrade` (`bool`) optional
Enables automatic upgrades to new minor versions for brokers, as Apache releases the versions
**Default value:** `false`
`deployment_mode` (`string`) optional
The deployment mode of the broker. Supported: SINGLE_INSTANCE and ACTIVE_STANDBY_MULTI_AZ
**Default value:** `"ACTIVE_STANDBY_MULTI_AZ"`
`encryption_enabled` (`bool`) optional
Flag to enable/disable Amazon MQ encryption at rest
**Default value:** `true`
`engine_type` (`string`) optional
Type of broker engine, `ActiveMQ` or `RabbitMQ`
**Default value:** `"ActiveMQ"`
`engine_version` (`string`) optional
The version of the broker engine. See https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/broker-engine.html for more details
**Default value:** `"5.15.14"`
`existing_security_groups` (`list(string)`) optional
List of existing Security Group IDs to place the broker into. Set `use_existing_security_groups` to `true` to enable using `existing_security_groups` as Security Groups for the broker
**Default value:** `[ ]`
`general_log_enabled` (`bool`) optional
Enables general logging via CloudWatch
**Default value:** `true`
`host_instance_type` (`string`) optional
The broker's instance type. e.g. mq.t2.micro or mq.m4.large
**Default value:** `"mq.t3.micro"`
`kms_mq_key_arn` (`string`) optional
ARN of the AWS KMS key used for Amazon MQ encryption
**Default value:** `null`
`kms_ssm_key_arn` (`string`) optional
ARN of the AWS KMS key used for SSM encryption
**Default value:** `"alias/aws/ssm"`
`maintenance_day_of_week` (`string`) optional
The maintenance day of the week. e.g. MONDAY, TUESDAY, or WEDNESDAY
**Default value:** `"SUNDAY"`
`maintenance_time_of_day` (`string`) optional
The maintenance time, in 24-hour format. e.g. 02:00
**Default value:** `"03:00"`
`maintenance_time_zone` (`string`) optional
The maintenance time zone, in either the Country/City format, or the UTC offset format. e.g. CET
**Default value:** `"UTC"`
SSM parameter name for Application username
**Default value:** `"mq_application_username"`
`overwrite_ssm_parameter` (`bool`) optional
Whether to overwrite an existing SSM parameter
**Default value:** `true`
`publicly_accessible` (`bool`) optional
Whether to enable connections from applications outside of the VPC that hosts the broker's subnets
**Default value:** `false`
`ssm_parameter_name_format` (`string`) optional
SSM parameter name format
**Default value:** `"/%s/%s"`
`ssm_path` (`string`) optional
SSM path
**Default value:** `"mq"`
`use_aws_owned_key` (`bool`) optional
Boolean to enable an AWS owned Key Management Service (KMS) Customer Master Key (CMK) for Amazon MQ encryption that is not in your account
**Default value:** `true`
`use_existing_security_groups` (`bool`) optional
Flag to enable/disable creation of Security Group in the module. Set to `true` to disable Security Group creation and provide a list of existing security Group IDs in `existing_security_groups` to place the broker into
**Default value:** `false`
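For example, to disable the module-created Security Group and place the broker into pre-existing ones instead (the security group ID below is a placeholder):

```yaml
components:
  terraform:
    mq-broker:
      vars:
        use_existing_security_groups: true
        existing_security_groups:
          - "sg-0123456789abcdef0"
```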
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`admin_username`
AmazonMQ admin username
`application_username`
AmazonMQ application username
`broker_arn`
AmazonMQ broker ARN
`broker_id`
AmazonMQ broker ID
`primary_amqp_ssl_endpoint`
AmazonMQ primary AMQP+SSL endpoint
`primary_console_url`
AmazonMQ active web console URL
`primary_ip_address`
AmazonMQ primary IP address
`primary_mqtt_ssl_endpoint`
AmazonMQ primary MQTT+SSL endpoint
`primary_ssl_endpoint`
AmazonMQ primary SSL endpoint
`primary_stomp_ssl_endpoint`
AmazonMQ primary STOMP+SSL endpoint
`primary_wss_endpoint`
AmazonMQ primary WSS endpoint
`secondary_amqp_ssl_endpoint`
AmazonMQ secondary AMQP+SSL endpoint
`secondary_console_url`
AmazonMQ secondary web console URL
`secondary_ip_address`
AmazonMQ secondary IP address
`secondary_mqtt_ssl_endpoint`
AmazonMQ secondary MQTT+SSL endpoint
`secondary_ssl_endpoint`
AmazonMQ secondary SSL endpoint
`secondary_stomp_ssl_endpoint`
AmazonMQ secondary STOMP+SSL endpoint
`secondary_wss_endpoint`
AmazonMQ secondary WSS endpoint
`security_group_arn`
The ARN of the created security group
`security_group_id`
AmazonMQ security group id
`security_group_name`
The name of the created security group
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `local`, version: `>= 2.4`
- `template`, version: `>= 2.2`
- `utils`, version: `>= 1.10.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`mq_broker` | 3.6.0 | [`cloudposse/mq-broker/aws`](https://registry.terraform.io/modules/cloudposse/mq-broker/aws/3.6.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## msk
This component is responsible for provisioning [Amazon Managed Streaming for Apache Kafka (Amazon MSK)](https://aws.amazon.com/msk/) clusters running
[Apache Kafka](https://aws.amazon.com/msk/what-is-kafka/).
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
msk:
metadata:
component: "msk"
vars:
enabled: true
name: "msk"
vpc_component_name: "vpc"
dns_delegated_component_name: "dns-delegated"
dns_delegated_environment_name: "gbl"
# https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html
kafka_version: "3.4.0"
public_access_enabled: false
# https://aws.amazon.com/msk/pricing/
broker_instance_type: "kafka.m5.large"
# Number of brokers per AZ
broker_per_zone: 1
# `broker_dns_records_count` specifies how many DNS records to create for the broker endpoints in the DNS zone provided in the `zone_id` variable.
# This corresponds to the total number of broker endpoints created by the module.
# Calculate this number by multiplying the `broker_per_zone` variable by the subnet count.
broker_dns_records_count: 3
broker_volume_size: 500
client_broker: "TLS_PLAINTEXT"
encryption_in_cluster: true
encryption_at_rest_kms_key_arn: ""
enhanced_monitoring: "DEFAULT"
certificate_authority_arns: []
# Authentication methods
client_allow_unauthenticated: true
client_sasl_scram_enabled: false
client_sasl_scram_secret_association_enabled: false
client_sasl_scram_secret_association_arns: []
client_sasl_iam_enabled: false
client_tls_auth_enabled: false
jmx_exporter_enabled: false
node_exporter_enabled: false
cloudwatch_logs_enabled: false
firehose_logs_enabled: false
firehose_delivery_stream: ""
s3_logs_enabled: false
s3_logs_bucket: ""
s3_logs_prefix: ""
properties: {}
autoscaling_enabled: true
storage_autoscaling_target_value: 60
storage_autoscaling_max_capacity: null
storage_autoscaling_disable_scale_in: false
create_security_group: true
security_group_rule_description: "Allow inbound %s traffic"
# A list of IDs of Security Groups to allow access to the cluster security group
allowed_security_group_ids: []
# A list of IPv4 CIDRs to allow access to the cluster security group
allowed_cidr_blocks: []
```
## Variables
### Required Variables
`broker_instance_type` (`string`) required
The instance type to use for the Kafka brokers
`kafka_version` (`string`) required
The desired Kafka software version.
Refer to https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html for more details
### Optional Variables
`additional_security_group_rules` (`list(any)`) optional
A list of Security Group rule objects to add to the created security group, in addition to the ones
this module normally creates. (To suppress the module's rules, set `create_security_group` to false
and supply your own security group(s) via `associated_security_group_ids`.)
The keys and values of the objects are fully compatible with the `aws_security_group_rule` resource, except
for `security_group_id` which will be ignored, and the optional "key" which, if provided, must be unique and known at "plan" time.
For more info see https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule
and https://github.com/cloudposse/terraform-aws-security-group.
**Default value:** `[ ]`
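As an illustration, a single ingress rule object (field names mirror the `aws_security_group_rule` resource; the key, port, and CIDR below are placeholders):

```yaml
additional_security_group_rules:
  - key: "kafka-plaintext-ingress"   # optional key; must be unique and known at plan time
    type: "ingress"
    from_port: 9092
    to_port: 9092
    protocol: "tcp"
    cidr_blocks: ["10.111.0.0/18"]
    description: "Example rule allowing Kafka traffic from the VPC CIDR"
```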
`allow_all_egress` (`bool`) optional
If `true`, the created security group will allow egress on all ports and protocols to all IP addresses.
If this is false and no egress rules are otherwise specified, then no egress will be allowed.
**Default value:** `true`
`allowed_cidr_blocks` (`list(string)`) optional
A list of IPv4 CIDRs to allow access to the security group created by this module.
The length of this list must be known at "plan" time.
**Default value:** `[ ]`
`allowed_security_group_ids` (`list(string)`) optional
A list of IDs of Security Groups to allow access to the security group created by this module.
The length of this list must be known at "plan" time.
**Default value:** `[ ]`
`associated_security_group_ids` (`list(string)`) optional
A list of IDs of Security Groups to associate the created resource with, in addition to the created security group.
These security groups will not be modified and, if `create_security_group` is `false`, must have rules providing the desired access.
**Default value:** `[ ]`
`autoscaling_enabled` (`bool`) optional
Enable this to automatically expand your cluster's storage in response to increased usage. [More info](https://docs.aws.amazon.com/msk/latest/developerguide/msk-autoexpand.html)
**Default value:** `true`
`broker_dns_records_count` (`number`) optional
This variable specifies how many DNS records to create for the broker endpoints in the DNS zone provided in the `zone_id` variable.
This corresponds to the total number of broker endpoints created by the module.
Calculate this number by multiplying the `broker_per_zone` variable by the subnet count.
This variable is necessary to prevent the Terraform error:
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created.
**Default value:** `0`
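As a sketch of the calculation above: a cluster spread across three subnets (one per AZ) with two brokers per zone produces six broker endpoints, so `broker_dns_records_count` must be `6`. The component name `msk` and the `zone_id` value below are hypothetical, included only to make the snippet complete:

```yaml
components:
  terraform:
    msk:
      vars:
        enabled: true
        broker_per_zone: 2
        # 2 brokers per zone x 3 subnets = 6 broker endpoints,
        # so create 6 DNS records (a value known at "plan" time)
        broker_dns_records_count: 6
        zone_id: Z0123456789ABCDEF  # hypothetical Route53 zone ID
```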
`broker_per_zone` (`number`) optional
Number of Kafka brokers per zone
**Default value:** `1`
`broker_volume_size` (`number`) optional
The size in GiB of the EBS volume for the data drive on each broker node
**Default value:** `1000`
`client_broker` (`string`) optional
Encryption setting for data in transit between clients and brokers. Valid values: `TLS`, `TLS_PLAINTEXT`, and `PLAINTEXT`
**Default value:** `"TLS"`
`client_sasl_iam_enabled` (`bool`) optional
Enable client authentication via IAM policies. Cannot be set to `true` at the same time as `client_tls_auth_enabled`
**Default value:** `false`
`client_sasl_scram_enabled` (`bool`) optional
Enable SCRAM client authentication via AWS Secrets Manager. Cannot be set to `true` at the same time as `client_tls_auth_enabled`
**Default value:** `false`
`client_sasl_scram_secret_association_enabled` (`bool`) optional
Enable the list of AWS Secrets Manager secret ARNs for SCRAM authentication
**Default value:** `true`
`client_tls_auth_enabled` (`bool`) optional
Set `true` to enable the Client TLS Authentication
**Default value:** `false`
`cloudwatch_logs_enabled` (`bool`) optional
Indicates whether you want to enable or disable streaming broker logs to Cloudwatch Logs
**Default value:** `false`
`cloudwatch_logs_log_group` (`string`) optional
Name of the Cloudwatch Log Group to deliver logs to
**Default value:** `null`
`create_security_group` (`bool`) optional
Set `true` to create and configure a new security group. If false, `associated_security_group_ids` must be provided.
**Default value:** `true`
`custom_broker_dns_name` (`string`) optional
Custom Route53 DNS hostname for MSK brokers. Use `%%ID%%` key to specify brokers index in the hostname. Example: `kafka-broker%%ID%%.example.com`
**Default value:** `null`
`encryption_at_rest_kms_key_arn` (`string`) optional
You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest
**Default value:** `""`
`encryption_in_cluster` (`bool`) optional
Whether data communication among broker nodes is encrypted
**Default value:** `true`
`enhanced_monitoring` (`string`) optional
Specify the desired enhanced MSK CloudWatch monitoring level. Valid values: `DEFAULT`, `PER_BROKER`, and `PER_TOPIC_PER_BROKER`
**Default value:** `"DEFAULT"`
`firehose_delivery_stream` (`string`) optional
Name of the Kinesis Data Firehose delivery stream to deliver logs to
**Default value:** `""`
`firehose_logs_enabled` (`bool`) optional
Indicates whether you want to enable or disable streaming broker logs to Kinesis Data Firehose
**Default value:** `false`
`inline_rules_enabled` (`bool`) optional
NOT RECOMMENDED. Create rules "inline" instead of as separate `aws_security_group_rule` resources.
See [#20046](https://github.com/hashicorp/terraform-provider-aws/issues/20046) for one of several issues with inline rules.
See [this post](https://github.com/hashicorp/terraform-provider-aws/pull/9032#issuecomment-639545250) for details on the difference between inline rules and rule resources.
**Default value:** `false`
`jmx_exporter_enabled` (`bool`) optional
Set `true` to enable the JMX Exporter
**Default value:** `false`
`node_exporter_enabled` (`bool`) optional
Set `true` to enable the Node Exporter
**Default value:** `false`
`preserve_security_group_id` (`bool`) optional
When `false` and `security_group_create_before_destroy` is `true`, changes to security group rules
cause a new security group to be created with the new rules, and the existing security group is then
replaced with the new one, eliminating any service interruption.
When `true` or when changing the value (from `false` to `true` or from `true` to `false`),
existing security group rules will be deleted before new ones are created, resulting in a service interruption,
but preserving the security group itself.
**NOTE:** Setting this to `true` does not guarantee the security group will never be replaced,
it only keeps changes to the security group rules from triggering a replacement.
See the [terraform-aws-security-group README](https://github.com/cloudposse/terraform-aws-security-group) for further discussion.
**Default value:** `false`
`properties` (`map(string)`) optional
Contents of the server.properties file. Supported properties are documented in the [MSK Developer Guide](https://docs.aws.amazon.com/msk/latest/developerguide/msk-configuration-properties.html)
**Default value:** `{ }`
`public_access_enabled` (`bool`) optional
Enable public access to MSK cluster (given that all of the requirements are met)
**Default value:** `false`
`s3_logs_bucket` (`string`) optional
Name of the S3 bucket to deliver logs to
**Default value:** `""`
`s3_logs_enabled` (`bool`) optional
Indicates whether you want to enable or disable streaming broker logs to S3
**Default value:** `false`
`s3_logs_prefix` (`string`) optional
Prefix to append to the S3 folder name logs are delivered to
**Default value:** `""`
`security_group_create_before_destroy` (`bool`) optional
Set `true` to enable terraform `create_before_destroy` behavior on the created security group.
We only recommend setting this `false` if you are importing an existing security group
that you do not want replaced and therefore need full control over its name.
Note that changing this value will always cause the security group to be replaced.
**Default value:** `true`
`security_group_delete_timeout` (`string`) optional
How long to retry on `DependencyViolation` errors during security group deletion from
lingering ENIs left by certain AWS services such as Elastic Load Balancing.
**Default value:** `"15m"`
`security_group_description` (`string`) optional
The description to assign to the created Security Group.
Warning: Changing the description causes the security group to be replaced.
**Default value:** `"Managed by Terraform"`
`security_group_name` (`list(string)`) optional
The name to assign to the created security group. Must be unique within the VPC.
If not provided, will be derived from the `null-label.context` passed in.
If `create_before_destroy` is true, will be used as a name prefix.
**Default value:** `[ ]`
`storage_autoscaling_target_value` (`number`) optional
Percentage of storage used to trigger autoscaled storage increase
**Default value:** `60`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
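To illustrate the shape described above, here is a hypothetical `descriptor_formats` value defining a single `stack` descriptor built from three labels (the descriptor name and label choices are examples, not defaults):

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"                          # Terraform format string passed to format()
    labels = ["tenant", "environment", "stage"]  # normalized label values, in order
  }
}
# With tenant = "core", environment = "ue1", and stage = "prod",
# the "stack" entry in the descriptors output would be "core-ue1-prod".
```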
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`bootstrap_brokers`
Comma separated list of one or more hostname:port pairs of Kafka brokers suitable to bootstrap connectivity to the Kafka cluster
`bootstrap_brokers_public_sasl_iam`
Comma separated list of one or more DNS names (or IP addresses) and SASL IAM port pairs for public access to the Kafka cluster using SASL/IAM
`bootstrap_brokers_public_sasl_scram`
Comma separated list of one or more DNS names (or IP addresses) and SASL SCRAM port pairs for public access to the Kafka cluster using SASL/SCRAM
`bootstrap_brokers_public_tls`
Comma separated list of one or more DNS names (or IP addresses) and TLS port pairs for public access to the Kafka cluster using TLS
`bootstrap_brokers_sasl_iam`
Comma separated list of one or more DNS names (or IP addresses) and SASL IAM port pairs for access to the Kafka cluster using SASL/IAM
`bootstrap_brokers_sasl_scram`
Comma separated list of one or more DNS names (or IP addresses) and SASL SCRAM port pairs for access to the Kafka cluster using SASL/SCRAM
`bootstrap_brokers_tls`
Comma separated list of one or more DNS names (or IP addresses) and TLS port pairs for access to the Kafka cluster using TLS
`broker_endpoints`
List of broker endpoints
`cluster_arn`
Amazon Resource Name (ARN) of the MSK cluster
`cluster_name`
The cluster name of the MSK cluster
`config_arn`
Amazon Resource Name (ARN) of the MSK configuration
`current_version`
Current version of the MSK Cluster
`hostnames`
List of MSK Cluster broker DNS hostnames
`latest_revision`
Latest revision of the MSK configuration
`security_group_arn`
The ARN of the created security group
`security_group_id`
The ID of the created security group
`security_group_name`
The name of the created security group
`storage_mode`
Storage mode for supported storage tiers
`zookeeper_connect_string`
Comma separated list of one or more hostname:port pairs to connect to the Apache Zookeeper cluster
`zookeeper_connect_string_tls`
Comma separated list of one or more hostname:port pairs to connect to the Apache Zookeeper cluster via TLS
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`kafka` | 2.6.0 | [`cloudposse/msk-apache-kafka-cluster/aws`](https://registry.terraform.io/modules/cloudposse/msk-apache-kafka-cluster/aws/2.6.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## mwaa
This component provisions Amazon Managed Workflows for Apache Airflow (MWAA).
The S3 bucket `dag_bucket` stores the DAGs to be executed by MWAA.
## Access Modes
### Public
Allows the Airflow UI to be accessed over the public internet by users granted access by an IAM policy.
### Private
Limits Airflow UI access to users within the VPC who are granted access by an IAM policy.
- MWAA creates a VPC interface endpoint for the Airflow webserver and an interface endpoint for the PostgreSQL metadata database.
- The endpoints are created in the AZs mapped to your private subnets.
- MWAA binds an IP address from your private subnet to the interface endpoint.
### Managing access to VPC endpoints on MWAA
MWAA creates a VPC endpoint in each of the private subnets.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
mwaa:
vars:
enabled: true
name: app
dag_processing_logs_enabled: true
dag_processing_logs_level: INFO
environment_class: mw1.small
airflow_version: 2.0.2
```
## Variables
### Required Variables
`environment_class` (`string`) optional
Environment class for the cluster. Possible options are mw1.small, mw1.medium, mw1.large.
**Default value:** `"mw1.small"`
`execution_role_arn` (`string`) optional
If `create_iam_role` is `false` then set this to the target MWAA execution role
**Default value:** `""`
`max_workers` (`number`) optional
The maximum number of workers that can be automatically scaled up. Value must be between 1 and 25.
**Default value:** `10`
`min_workers` (`number`) optional
The minimum number of workers that you want to run in your environment.
**Default value:** `1`
`plugins_s3_object_version` (`string`) optional
The plugins.zip file version you want to use.
**Default value:** `null`
`plugins_s3_path` (`string`) optional
The relative path to the plugins.zip file on your Amazon S3 storage bucket. For example, plugins.zip. If a relative path is provided in the request, then plugins_s3_object_version is required
**Default value:** `null`
`requirements_s3_object_version` (`string`) optional
The requirements.txt file version you want to use.
**Default value:** `null`
`requirements_s3_path` (`string`) optional
The relative path to the requirements.txt file on your Amazon S3 storage bucket. For example, requirements.txt. If a relative path is provided in the request, then requirements_s3_object_version is required
**Default value:** `null`
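Since providing a relative `requirements_s3_path` makes `requirements_s3_object_version` required (per the descriptions above), the two are typically set together. A hypothetical stack snippet (the object version shown is an illustrative S3 version ID, not a real one):

```yaml
components:
  terraform:
    mwaa:
      vars:
        enabled: true
        requirements_s3_path: requirements.txt
        # Required whenever a relative requirements path is provided
        requirements_s3_object_version: "3sL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY"
```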
`scheduler_logs_enabled` (`bool`) optional
Enabling or disabling the collection of logs for the schedulers
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
ARN of MWAA environment.
`created_at`
The Created At date of the Amazon MWAA Environment
`execution_role_arn`
IAM Role ARN for Amazon MWAA Execution Role
`logging_configuration`
The Logging Configuration of the MWAA Environment
`s3_bucket_arn`
ID of S3 bucket.
`security_group_id`
ID of the MWAA Security Group(s)
`service_role_arn`
The Service Role ARN of the Amazon MWAA Environment
`status`
The status of the Amazon MWAA Environment
`tags_all`
A map of tags assigned to the resource, including those inherited from the provider for the Amazon MWAA Environment
`webserver_url`
The webserver URL of the Amazon MWAA Environment
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_policy` | 0.5.0 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/0.5.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`mwaa_environment` | 0.15.0 | [`cloudposse/mwaa/aws`](https://registry.terraform.io/modules/cloudposse/mwaa/aws/0.15.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`vpc_ingress` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_policy.mwaa_web_server_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role_policy_attachment.mwaa_web_server_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_iam_role_policy_attachment.secrets_manager_read_write`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
## Data Sources
The following data sources are used by this module:
---
## network-firewall
This component is responsible for provisioning [AWS Network Firewall](https://aws.amazon.com/network-firewall/) resources,
including Network Firewall, firewall policy, rule groups, and logging configuration.
## Usage
**Stack Level**: Regional
Example of a Network Firewall with stateful 5-tuple rules:
:::tip
The "5-tuple" means the five items (columns) that each rule (row, or tuple) in a firewall policy uses to define
whether to block or allow traffic: source and destination IP, source and destination port, and protocol.
Refer to
[Standard stateful rule groups in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-basic.html)
for more details.
:::
```yaml
components:
terraform:
network-firewall:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: network-firewall
# The name of a VPC component where the Network Firewall is provisioned
vpc_component_name: vpc
firewall_subnet_name: "firewall"
stateful_default_actions:
- "aws:alert_strict"
stateless_default_actions:
- "aws:forward_to_sfe"
stateless_fragment_default_actions:
- "aws:forward_to_sfe"
stateless_custom_actions: []
delete_protection: false
firewall_policy_change_protection: false
subnet_change_protection: false
logging_config: []
rule_group_config:
stateful-packet-inspection:
capacity: 50
name: stateful-packet-inspection
description: "Stateful inspection of packets"
type: "STATEFUL"
rule_group:
stateful_rule_options:
rule_order: "STRICT_ORDER"
rules_source:
stateful_rule:
- action: "DROP"
header:
destination: "124.1.1.24/32"
destination_port: 53
direction: "ANY"
protocol: "TCP"
source: "1.2.3.4/32"
source_port: 53
rule_option:
keyword: "sid:1"
- action: "PASS"
header:
destination: "ANY"
destination_port: "ANY"
direction: "ANY"
protocol: "TCP"
source: "10.10.192.0/19"
source_port: "ANY"
rule_option:
keyword: "sid:2"
- action: "PASS"
header:
destination: "ANY"
destination_port: "ANY"
direction: "ANY"
protocol: "TCP"
source: "10.10.224.0/19"
source_port: "ANY"
rule_option:
keyword: "sid:3"
```
Example of a Network Firewall with [Suricata](https://suricata.readthedocs.io/en/suricata-6.0.0/rules/) rules:
:::tip
For [Suricata](https://suricata.io/) rule group type, you provide match and action settings in a string, in a Suricata
compatible specification. The specification fully defines what the stateful rules engine looks for in a traffic flow
and the action to take on the packets in a flow that matches the inspection criteria.
Refer to
[Suricata compatible rule strings in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-suricata.html)
for more details.
:::
```yaml
components:
terraform:
network-firewall:
metadata:
component: "network-firewall"
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: "network-firewall"
# The name of a VPC component where the Network Firewall is provisioned
vpc_component_name: "vpc"
firewall_subnet_name: "firewall"
delete_protection: false
firewall_policy_change_protection: false
subnet_change_protection: false
# Logging config
logging_enabled: true
flow_logs_bucket_component_name: "network-firewall-logs-bucket-flow"
alert_logs_bucket_component_name: "network-firewall-logs-bucket-alert"
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateless-default-actions.html
# https://docs.aws.amazon.com/network-firewall/latest/APIReference/API_FirewallPolicy.html
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-action.html#rule-action-stateless
stateless_default_actions:
- "aws:forward_to_sfe"
stateless_fragment_default_actions:
- "aws:forward_to_sfe"
stateless_custom_actions: []
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-strict-rule-evaluation-order.html
# https://github.com/aws-samples/aws-network-firewall-strict-rule-ordering-terraform
policy_stateful_engine_options_rule_order: "STRICT_ORDER"
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-default-actions.html
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-default-rule-evaluation-order
# https://docs.aws.amazon.com/network-firewall/latest/APIReference/API_FirewallPolicy.html
stateful_default_actions:
- "aws:alert_established"
# - "aws:alert_strict"
# - "aws:drop_established"
# - "aws:drop_strict"
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-groups.html
rule_group_config:
stateful-inspection:
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-group-managing.html#nwfw-rule-group-capacity
# For stateful rules, `capacity` means the max number of rules in the rule group
capacity: 1000
name: "stateful-inspection"
description: "Stateful inspection of packets"
type: "STATEFUL"
rule_group:
rule_variables:
port_sets: []
ip_sets:
- key: "CIDR_1"
definition:
- "10.10.0.0/11"
- key: "CIDR_2"
definition:
- "10.11.0.0/11"
- key: "SCANNER"
definition:
- "10.12.48.186/32"
# bad actors
- key: "BLOCKED_LIST"
definition:
- "193.142.146.35/32"
- "69.40.195.236/32"
- "125.17.153.207/32"
- "185.220.101.4/32"
- "195.219.212.151/32"
- "162.247.72.199/32"
- "147.185.254.17/32"
- "179.60.147.101/32"
- "157.230.244.66/32"
- "192.99.4.116/32"
- "62.102.148.69/32"
- "185.129.62.62/32"
stateful_rule_options:
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-strict-rule-evaluation-order.html
# All the stateful rule groups are provided to the rule engine as Suricata compatible strings
# Suricata can evaluate stateful rule groups by using the default rule group ordering method,
# or you can set an exact order using the strict ordering method.
# The settings for your rule groups must match the settings for the firewall policy that they belong to.
# With strict ordering, the rule groups are evaluated by order of priority, starting from the lowest number,
# and the rules in each rule group are processed in the order in which they're defined.
rule_order: "STRICT_ORDER"
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-how-to-provide-rules.html
rules_source:
# Suricata rules for the rule group
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html
# https://github.com/aws-samples/aws-network-firewall-terraform/blob/main/firewall.tf#L66
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-suricata.html
# https://coralogix.com/blog/writing-effective-suricata-rules-for-the-sta/
# https://suricata.readthedocs.io/en/suricata-6.0.10/rules/intro.html
# https://suricata.readthedocs.io/en/suricata-6.0.0/rules/header-keywords.html
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-action.html
#
# With Strict evaluation order, the rules in each rule group are processed in the order in which they're defined
#
# Pass – Discontinue inspection of the matching packet and permit it to go to its intended destination
#
# Drop or Alert – Evaluate the packet against all rules with drop or alert action settings.
# If the firewall has alert logging configured, send a message to the firewall's alert logs for each matching rule.
# The first log entry for the packet will be for the first rule that matched the packet.
# After all rules have been evaluated, handle the packet according to the action setting in the first rule that matched the packet.
# If the first rule has a drop action, block the packet. If it has an alert action, continue evaluation.
#
# Reject – Drop traffic that matches the conditions of the stateful rule and send a TCP reset packet back to sender of the packet.
# A TCP reset packet is a packet with no payload and a RST bit contained in the TCP header flags.
# Reject is available only for TCP traffic. This option doesn't support FTP and IMAP protocols.
rules_string: |
alert ip $BLOCKED_LIST any <> any any ( msg:"Alert on blocked traffic"; sid:100; rev:1; )
drop ip $BLOCKED_LIST any <> any any ( msg:"Drop blocked-list traffic"; sid:200; rev:1; )
pass ip $SCANNER any -> any any ( msg: "Allow scanner"; sid:300; rev:1; )
alert ip $CIDR_1 any -> $CIDR_2 any ( msg:"Alert on CIDR_1 to CIDR_2 traffic"; sid:400; rev:1; )
drop ip $CIDR_1 any -> $CIDR_2 any ( msg:"Blocked CIDR_1 to CIDR_2 traffic"; sid:410; rev:1; )
pass ip any any <> any any ( msg: "Allow general traffic"; sid:10000; rev:1; )
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`rule_group_config` (`any`) required
Rule group configuration. Refer to [networkfirewall_rule_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/networkfirewall_rule_group) for configuration details
### Optional Variables
`alert_logs_bucket_component_name` (`string`) optional
Alert logs bucket component name
**Default value:** `null`
`availability_zone_ids` (`list(string)`) optional
List of Availability Zone IDs where firewall endpoints will be created for a transit gateway-attached firewall.
Only used when 'transit_gateway_component_name' is set, not used when using 'vpc_component_name'.
If not specified (empty list), all available AZs in the region will be automatically selected.
If specified, must use AZ ID format (e.g., 'use1-az1', 'usw2-az2'), not AZ names (e.g., 'us-east-1a').
Example: ["use1-az1", "use1-az2"]
**Default value:** `[ ]`
`delete_protection` (`bool`) optional
A boolean flag indicating whether it is possible to delete the firewall
**Default value:** `false`
`network_firewall_description` (`string`) optional
AWS Network Firewall description. If not provided, the Network Firewall name will be used
**Default value:** `null`
`network_firewall_name` (`string`) optional
Friendly name to give the Network Firewall. If not provided, the name will be derived from the context.
Changing the name will cause the Firewall to be deleted and recreated.
**Default value:** `null`
`network_firewall_policy_name` (`string`) optional
Friendly name to give the Network Firewall policy. If not provided, the name will be derived from the context.
Changing the name will cause the policy to be deleted and recreated.
**Default value:** `null`
`policy_stateful_engine_options_rule_order` (`string`) optional
Indicates how to manage the order of stateful rule evaluation for the policy. Valid values: `DEFAULT_ACTION_ORDER`, `STRICT_ORDER`
**Default value:** `null`
`stateless_custom_actions` optional
Set of configuration blocks describing the custom action definitions that are available for use in the firewall policy's `stateless_default_actions`
**Type:**
```hcl
list(object({
action_name = string
dimensions = list(string)
}))
```
**Default value:** `[ ]`
`transit_gateway_component_name` (`string`) optional
The name of a Transit Gateway component to attach the Network Firewall to. Either 'vpc_component_name' or 'transit_gateway_component_name' must be provided, but not both
**Default value:** `null`
`vpc_component_name` (`string`) optional
The name of a VPC component where the Network Firewall is provisioned. Either 'vpc_component_name' or 'transit_gateway_component_name' must be provided, but not both
**Default value:** `null`
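To illustrate the two mutually exclusive deployment modes, here is a hedged sketch (the component names `vpc` and `tgw` are placeholders for whatever your stacks use):

```yaml
components:
  terraform:
    network-firewall:
      vars:
        # VPC mode: endpoints are created in the given VPC's firewall subnets
        vpc_component_name: vpc
        # Transit Gateway mode: attach the firewall to a TGW instead.
        # Never set both `vpc_component_name` and `transit_gateway_component_name`.
        # transit_gateway_component_name: tgw
        # availability_zone_ids: ["use1-az1", "use1-az2"]  # AZ IDs, TGW mode only
```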
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`az_subnet_endpoint_stats`
List of objects with each object having three items: AZ, subnet ID, VPC endpoint ID. Only applicable in VPC mode
`network_firewall_arn`
Network Firewall ARN
`network_firewall_name`
Network Firewall name
`network_firewall_policy_arn`
Network Firewall policy ARN
`network_firewall_policy_name`
Network Firewall policy name
`network_firewall_status`
Nested list of information about the current status of the Network Firewall
`transit_gateway_attachment_id`
The unique identifier of the transit gateway attachment. Only applicable in Transit Gateway mode
`transit_gateway_owner_account_id`
The AWS account ID that owns the transit gateway. Only applicable in Transit Gateway mode
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 6.5.0`
- `null`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 6.5.0`
- `null`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`alert_logs_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`flow_logs_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`network_firewall` | 1.0.1 | [`cloudposse/network-firewall/aws`](https://registry.terraform.io/modules/cloudposse/network-firewall/aws/1.0.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`transit_gateway` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`null_resource.validate_deployment_mode`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_availability_zones.available`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) (data source)
---
## nlb
This component provisions an AWS Network Load Balancer (NLB) using the
upstream Cloud Posse Terraform module. It supports internet-facing or
internal NLBs, TCP/TLS/UDP listeners, optional default target group,
cross-zone load balancing, access logs, subnet EIP mappings, and deletion
protection.
It integrates with other Atmos components via remote state:
- `vpc`: to automatically source the VPC ID and appropriate subnets
(public or private) when not provided explicitly.
- `acm` or `dns-delegated`: to automatically discover an ACM certificate
ARN for TLS listeners when `certificate_arn` is not provided. Behavior
is controlled by `dns_acm_enabled`.
You can also override any of these via input variables, including
providing a specific `certificate_arn` directly.
## Usage
**Stack Level**: Regional
### Example
Here's an example snippet showing how to use this component in an Atmos stack.
```yaml
components:
terraform:
vpc:
vars:
availability_zones: ["a", "b", "c"]
ipv4_primary_cidr_block: "10.100.0.0/18"
nlb:
vars:
# Core settings
internal: false
tcp_enabled: true
tcp_port: 80
tls_enabled: true
tls_port: 443
# Optional: discover cert from other components via remote state
# Toggle behavior with `dns_acm_enabled` or provide `certificate_arn`
dns_acm_enabled: true
acm_component_name: acm
dns_delegated_component_name: dns-delegated
# certificate_arn: "arn:aws:acm:..."
# Optional: map EIPs to subnets
subnet_mapping_enabled: false
eip_allocation_ids: []
# Other common options
cross_zone_load_balancing_enabled: true
deletion_protection_enabled: false
access_logs_enabled: false
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`access_logs_enabled` (`bool`) optional
A boolean flag to enable/disable access_logs
**Default value:** `false`
`acm_component_name` (`string`) optional
Atmos `acm` component name
**Default value:** `"acm"`
`certificate_arn` (`string`) optional
ARN of the certificate for the TLS listener
**Default value:** `null`
`cross_zone_load_balancing_enabled` (`bool`) optional
Enable cross zone load balancing
**Default value:** `true`
`deletion_protection_enabled` (`bool`) optional
Enable deletion protection for the NLB
**Default value:** `false`
`deregistration_delay` (`number`) optional
Time to wait before deregistering targets
**Default value:** `15`
`dns_acm_enabled` (`bool`) optional
If `true`, use the ACM ARN created by the given `dns-delegated` component. Otherwise, use the ACM ARN created by the given `acm` component. Overridden by `certificate_arn`
**Default value:** `false`
`target_group_name_max_length` (`number`) optional
Max length for the target group name
**Default value:** `32`
`target_group_port` (`number`) optional
Port for the default target group
**Default value:** `80`
`target_group_target_type` (`string`) optional
Target type for the default target group
**Default value:** `"ip"`
`tcp_enabled` (`bool`) optional
Enable the TCP listener
**Default value:** `true`
`tcp_port` (`number`) optional
Port for the TCP listener
**Default value:** `80`
`tls_enabled` (`bool`) optional
Enable the TLS listener
**Default value:** `false`
`tls_port` (`number`) optional
Port for the TLS listener
**Default value:** `443`
`udp_enabled` (`bool`) optional
Enable the UDP listener
**Default value:** `false`
`udp_port` (`number`) optional
Port for the UDP listener
**Default value:** `53`
`vpc_component_name` (`string`) optional
Name of the VPC component
**Default value:** `"vpc"`
`vpc_id` (`string`) optional
VPC ID to associate with NLB
**Default value:** `null`
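The remote-state lookups described above can be combined. As a sketch (component names are illustrative), a TLS-only NLB that discovers both its VPC and its certificate:

```yaml
nlb:
  vars:
    internal: false
    tls_enabled: true
    tls_port: 443
    vpc_component_name: vpc    # VPC ID and subnets come from remote state
    dns_acm_enabled: false     # use the `acm` component's certificate
    acm_component_name: acm
    # Or set `certificate_arn` directly to skip discovery entirely
```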
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`nlb`
The NLB of the Component
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`acm` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`nlb` | 0.18.2 | [`cloudposse/nlb/aws`](https://registry.terraform.io/modules/cloudposse/nlb/aws/0.18.2) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## opsgenie-team
## Deprecated
This module is deprecated and will be archived on January 17, 2026.
Please see the [pinned issue](https://github.com/cloudposse-terraform-components/aws-opsgenie-team/issues/57) for details and migration guidance.
### Historical Description
This component provisions Opsgenie teams and related services, rules, and schedules.
## Usage
#### Pre-requisites
You need an API Key stored in `/opsgenie/opsgenie_api_key` of SSM; this is configurable using the
`ssm_parameter_name_format` and `ssm_path` variables.
Opsgenie is now part of Atlassian, so you need to make sure you are creating an Opsgenie API Key, which looks like
`abcdef12-3456-7890-abcd-ef0123456789` and not an Atlassian API key, which looks like:
```shell
ATAfT3xFfGF0VFXAfl8EmQNPVv1Hlazp3wsJgTmM8Ph7iP-RtQyiEfw-fkDS2LvymlyUOOhc5XiSx46vQWnznCJolq-GMX4KzdvOSPhEWr-BF6LEkJQC4CSjDJv0N7d91-0gVekNmCD2kXY9haUHUSpO4H7X6QxyImUb9VmOKIWTbQi8rf4CF28=63CB21B9
```
Generate an API Key by going to Settings -> API key management on your Opsgenie control panel, which will have an
address like `https://.app.opsgenie.com/settings/api-key-management`, and clicking the "Add new API key" button.
Once you have the key, test it with curl to verify that you are at least on a Standard plan with OpsGenie:
```shell
curl -X GET 'https://api.opsgenie.com/v2/account' \
--header "Authorization: GenieKey $API_KEY"
```
The result should be something similar to below:
```json
{
"data": {
"name": "opsgenie",
"plan": {
"maxUserCount": 1500,
"name": "Enterprise",
...
}
```
If you see `Free` or `Essentials` in the plan, then you won't be able to use this component.
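As a convenience, the plan check above can be scripted. This is a hypothetical helper, not part of the component; it assumes `jq` is installed and reads the `/v2/account` response on stdin:

```shell
# Hypothetical helper: check whether the Opsgenie plan supports this component.
check_opsgenie_plan() {
  plan=$(jq -r '.data.plan.name')
  case "$plan" in
    Free|Essentials) echo "Plan '$plan' cannot use this component"; return 1 ;;
    *) echo "Plan '$plan' is sufficient" ;;
  esac
}

# Usage with a live key:
#   curl -s 'https://api.opsgenie.com/v2/account' \
#     --header "Authorization: GenieKey $API_KEY" | check_opsgenie_plan
echo '{"data":{"name":"opsgenie","plan":{"maxUserCount":1500,"name":"Enterprise"}}}' | check_opsgenie_plan
```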
#### Getting Started
- Stack Level: Global
This component should only be applied once, as the resources it creates are global, but it works with integrations.
This is typically done via the auto or corp stack (e.g. `gbl-auto.yaml`).
```yaml
# 9-5 Mon-Fri
business_hours: &business_hours
type: "weekday-and-time-of-day"
restrictions:
- start_hour: 9
start_min: 00
start_day: "monday"
end_hour: 17
end_min: 00
end_day: "friday"
# 9-5 Every Day
waking_hours: &waking_hours
type: "time-of-day"
restrictions:
- start_hour: 9
start_min: 00
end_hour: 17
end_min: 00
# This is a partial incident mapping, we use this as a base to add P1 & P2 below. This is not a complete mapping as there is no P0
priority_level_to_incident: &priority_level_to_incident
enabled: true
type: incident
priority: P1
order: 1
notify: # if omitted, this will default to the default schedule
type: schedule
name: default
criteria:
type: "match-all-conditions"
conditions:
- field: priority
operation: equals
expected_value: P0
p1: &p1_is_incident
<<: *priority_level_to_incident
priority: P1
criteria:
type: "match-all-conditions"
conditions:
- field: priority
operation: equals
expected_value: P1
p2: &p2_is_incident
<<: *priority_level_to_incident
priority: P2
criteria:
type: "match-all-conditions"
conditions:
- field: priority
operation: equals
expected_value: P2
components:
terraform:
# defaults
opsgenie-team-defaults:
metadata:
type: abstract
component: opsgenie-team
vars:
schedules:
london_schedule:
enabled: false
description: "London Schedule"
timezone: "Europe/London"
# Routing Rules determine how alerts are routed to the team,
# this includes priority changes, incident mappings, and schedules.
routing_rules:
london_schedule:
enabled: false
type: alert
# https://support.atlassian.com/opsgenie/docs/supported-timezone-ids/
timezone: Europe/London
notify:
type: schedule # could be escalation, could be none
name: london_schedule
time_restriction: *waking_hours
criteria:
type: "match-all-conditions"
conditions:
- field: priority
operation: greater-than
expected_value: P2
# Since Incidents require a service, we create a rule for every `routing_rule` type `incident` for every service on the team.
# This is done behind the scenes by the `opsgenie-team` component.
# These rules below map P1 & P2 to incidents, using yaml anchors from above.
p1: *p1_is_incident
p2: *p2_is_incident
# New team
opsgenie-team-sre:
metadata:
type: real
component: opsgenie-team
inherits:
- opsgenie-team-defaults
vars:
enabled: true
name: sre
# These members will be added with an opsgenie_user
# To clickops members, set this key to an empty list `[]`
members:
- user: user@example.com
role: owner
escalations:
otherteam_escalation:
enabled: true
name: otherteam_escalation
description: Other team escalation
rules:
condition: if-not-acked
notify_type: default
delay: 60
recipients:
- type: team
name: otherteam
yaep_escalation:
enabled: true
name: yaep_escalation
description: Yet another escalation policy
rules:
condition: if-not-acked
notify_type: default
delay: 90
recipients:
- type: user
name: user@example.com
schedule_escalation:
enabled: true
name: schedule_escalation
description: Schedule escalation policy
rules:
condition: if-not-acked
notify_type: default
delay: 30
recipients:
- type: schedule
name: secondary_on_call
```
The API keys relating to the Opsgenie Integrations are stored in SSM Parameter Store and can be accessed via chamber.
```
AWS_PROFILE=foo chamber list opsgenie-team/
```
### ClickOps Work
- After deploying the `opsgenie-team` component, the created team will have a schedule named after the team. This is
  purposely left to be ClickOps'd so the UI can be used to set who is on call, as that is the usual way (not through
  code). Additionally, we do not want a re-apply of the Terraform to delete or shuffle who is planned to be on call,
  so the on-call schedule assignments are left out of the component.
### Known Issues
#### Different API Endpoints in Use
The problem is that there are three different API endpoints in use:
- `/webapp` - the most robust - only exposed to the UI (that we've seen)
- `/v2/` - robust with some differences from `webapp`
- `/v1/` - the oldest and furthest from the live UI.
#### Cannot create users
This module does not create users. Users must have already been created to be added to a team.
#### Cannot Add Dependent Services
- The API currently doesn't support multiple service IDs for incident rules
#### Cannot Add Stakeholders
- Track the issue: https://github.com/opsgenie/terraform-provider-opsgenie/issues/278
#### No Resource to create Slack Integration
- Track the issue: https://github.com/DataDog/terraform-provider-datadog/issues/67
#### Out of Date Terraform Docs
Another problem is that the Terraform docs are not always up to date with the provider code.
The OpsGenie provider uses a mix of `/v1` and `/v2`. This means there are many things you can only do from the UI.
Listed below in no particular order:
- Incident Routing cannot add dependent services - in `v1` and `v2` a `service_incident_rule` object has `serviceId` as
type string, in webapp this becomes `serviceIds` of type `list(string)`
- Opsgenie Provider appears to be inconsistent with how it uses `time_restriction`:
- `restrictions` for type `weekday-and-time-of-day`
- `restriction` for type `time-of-day`
Unfortunately, none of this is in the Terraform docs; it was found via errors and digging through source code.
Track the issue: https://github.com/opsgenie/terraform-provider-opsgenie/issues/282
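As a sketch of the `time_restriction` asymmetry (field names inferred from this component's YAML and the issue above; verify against your provider version):

```hcl
time_restriction {
  type = "weekday-and-time-of-day"
  restrictions { # plural block for this type
    start_day  = "monday"
    start_hour = 9
    start_min  = 0
    end_day    = "friday"
    end_hour   = 17
    end_min    = 0
  }
}

time_restriction {
  type = "time-of-day"
  restriction { # singular block for this type
    start_hour = 9
    start_min  = 0
    end_hour   = 17
    end_min    = 0
  }
}
```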
#### GMT-Style Timezones
We recommend using a human-readable timezone such as `Europe/London`.
- Setting a schedule to a GMT-style timezone with offsets can cause inconsistent plans.
Setting the timezone to `Etc/GMT+1` instead of `Europe/London` leads to permadrift, because OpsGenie converts the GMT
offset to a regional timezone at deploy time. In this example, a previous deploy converted the GMT-style timezone to
`Atlantic/Cape_Verde`:
```hcl
# module.routing["london_schedule"].module.team_routing_rule[0].opsgenie_team_routing_rule.this[0] will be updated in-place
~ resource "opsgenie_team_routing_rule" "this" {
id = "4b4c4454-8ccf-41a9-b856-02bec6419ba7"
name = "london_schedule"
~ timezone = "Atlantic/Cape_Verde" -> "Etc/GMT+1"
# (2 unchanged attributes hidden)
```
Some GMT styles, such as `Etc/GMT+8` for `Asia/Taipei`, will not cause a timezone change on subsequent applies.
- If the calendar date has crossed a daylight saving time boundary, the `Etc/GMT+` style timezone will need to be
  updated to reflect the correct offset.
Track the issue: https://github.com/opsgenie/terraform-provider-opsgenie/issues/258
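To avoid the permadrift shown above, pin schedules to an IANA timezone name. A minimal sketch using the upstream schedule submodule (the team reference and exact input shape are assumptions of this example):

```hcl
module "london_schedule" {
  source  = "cloudposse/incident-management/opsgenie//modules/schedule"
  version = "0.16.0"

  schedule = {
    name          = "london_schedule"
    enabled       = true
    owner_team_id = module.team.team_id
    # Use an IANA timezone name, not a GMT offset such as "Etc/GMT+1",
    # so OpsGenie does not rewrite it at deploy time.
    timezone = "Europe/London"
  }
}
```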
### Related How-to Guides
[See OpsGenie in the Reference Architecture](https://docs.cloudposse.com/layers/alerting/opsgenie/)
## Variables
### Required Variables
### Optional Variables
`integrations_only` (`bool`) optional
Whether to reuse all existing resources and only create new integrations
**Default value:** `false`
`datadog_integration_enabled` (`bool`) optional
Whether to enable Datadog integration with opsgenie (datadog side)
**Default value:** `true`
`escalations` (`map(any)`) optional
Escalations to configure and create for the team.
**Default value:** `{ }`
`integrations` (`map(any)`) optional
API Integrations for the team. If not specified, `datadog` is assumed.
**Default value:** `{ }`
`integrations_enabled` (`bool`) optional
Whether to enable the integrations submodule or not
**Default value:** `true`
`kms_key_arn` (`string`) optional
AWS KMS key used for writing to SSM
**Default value:** `"alias/aws/ssm"`
`members` (`set(any)`) optional
Members as objects with their role within the team.
**Default value:** `[ ]`
`routing_rules` (`any`) optional
Routing Rules for the team
**Default value:** `null`
`schedules` (`map(any)`) optional
Schedules to create for the team
**Default value:** `{ }`
`services` (`map(any)`) optional
Services to create and register to the team.
**Default value:** `{ }`
`ssm_parameter_name_format` (`string`) optional
SSM parameter name format
**Default value:** `"/%s/%s"`
`ssm_path` (`string`) optional
SSM path
**Default value:** `"opsgenie"`
`team_naming_format` (`string`) optional
OpsGenie Team Naming Format
**Default value:** `"%s_%s"`
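The naming format is a Terraform format string. For illustration, with the default `"%s_%s"`, applying it to two hypothetical ID elements (the exact arguments used by the component are an assumption of this sketch) yields an underscore-delimited team name:

```hcl
output "example_team_name" {
  # Combines two illustrative ID elements with the default "%s_%s" format
  value = format("%s_%s", "acme", "sre") # => "acme_sre"
}
```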
`team_options` optional
Configure the team options.
See `opsgenie_team` Terraform resource [documentation](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/resources/team#argument-reference) for more details.
**Type:**
```hcl
object({
description = optional(string)
ignore_members = optional(bool, false)
delete_default_resources = optional(bool, false)
})
```
**Default value:** `{ }`
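For example, `team_options` might be set like this in a component configuration (values are illustrative, matching the object type declared above):

```hcl
team_options = {
  description              = "SRE on-call team"
  ignore_members           = false
  delete_default_resources = true
}
```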
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
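For illustration, a `descriptor_formats` value matching the shape described above might be (descriptor name and labels are arbitrary examples):

```hcl
descriptor_formats = {
  # Produces a "stack" entry in the `descriptors` output,
  # e.g. "acme-use2-prod" for the listed labels.
  stack = {
    format = "%v-%v-%v"
    labels = ["namespace", "environment", "stage"]
  }
}
```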
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`escalation`
Escalation rules created
`integration`
Integrations created
`routing`
Routing rules created
`team_id`
Team ID
`team_members`
Team members
`team_name`
Team Name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
- `opsgenie`, version: `>= 0.6.7`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `datadog`, version: `>= 3.3.0`
- `opsgenie`, version: `>= 0.6.7`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`datadog_configuration` | v1.535.13 | `github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys` | n/a
`escalation` | latest | `./modules/escalation` | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`integration` | latest | `./modules/integration` | n/a
`members_merge` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`routing` | latest | `./modules/routing` | n/a
`schedule` | 0.16.0 | [`cloudposse/incident-management/opsgenie//modules/schedule`](https://registry.terraform.io/modules/cloudposse/incident-management/opsgenie/modules/schedule/0.16.0) | n/a
`service` | 0.16.0 | [`cloudposse/incident-management/opsgenie//modules/service`](https://registry.terraform.io/modules/cloudposse/incident-management/opsgenie/modules/service/0.16.0) | n/a
`team` | 0.16.0 | [`cloudposse/incident-management/opsgenie//modules/team`](https://registry.terraform.io/modules/cloudposse/incident-management/opsgenie/modules/team/0.16.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`datadog_integration_opsgenie_service_object.fake_service_name`](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/integration_opsgenie_service_object) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.opsgenie_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.opsgenie_team_api_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`opsgenie_team.existing`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/team) (data source)
- [`opsgenie_user.team_members`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/user) (data source)
---
## Escalation
Terraform module to configure
[Opsgenie Escalation](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/resources/escalation)
## Usage
[Create Opsgenie Escalation example](https://github.com/cloudposse/terraform-opsgenie-incident-management/tree/main/examples/escalation)
```hcl
module "escalation" {
source = "cloudposse/incident-management/opsgenie//modules/escalation"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
escalation = {
name = module.label.id
owner_team_id = module.owner_team.team_id
rule = {
recipients = [{
type = "team"
id = module.escalation_team.team_id
}]
}
}
}
```
## Variables
### Required Variables
### Optional Variables
`team_name` (`string`) optional
Current OpsGenie Team Name
**Default value:** `null`
`team_naming_format` (`string`) optional
OpsGenie Team Naming Format
**Default value:** `"%s_%s"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`escalation_id`
The ID of the Opsgenie Escalation
`escalation_name`
Name of the Opsgenie Escalation
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `opsgenie`, version: `>= 0.6.7`
### Providers
- `opsgenie`, version: `>= 0.6.7`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`opsgenie_escalation.this`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/resources/escalation) (resource)
## Data Sources
The following data sources are used by this module:
- [`opsgenie_schedule.recipient`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/schedule) (data source)
- [`opsgenie_team.recipient`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/team) (data source)
- [`opsgenie_user.recipient`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/user) (data source)
---
## Integration
This module creates OpsGenie integrations for a team. By default, it creates a Datadog integration.
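A minimal invocation might look like the following sketch (the local source path, team name, and context wiring are illustrative assumptions, based on the required variables documented below):

```hcl
module "datadog_integration" {
  source = "./modules/integration" # local submodule path within the component

  team_name = "sre"     # must be an existing OpsGenie team
  type      = "Datadog" # API Integration Type

  context = module.this.context
}
```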
## Variables
### Required Variables
`team_name` (`string`) required
Name of the team to assign this integration to.
`type` (`string`) required
API Integration Type
### Optional Variables
`append_datadog_tags_enabled` (`bool`) optional
Add Datadog Tags to the Tags of alerts from this integration.
**Default value:** `true`
`kms_key_arn` (`string`) optional
AWS KMS key used for writing to SSM
**Default value:** `"alias/aws/ssm"`
`ssm_path_format` (`string`) optional
SSM parameter name format
**Default value:** `"/opsgenie-team/%s"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ssm_path`
Full SSM path of the team integration key
`type`
Type of the team integration
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `opsgenie`, version: `>= 0.6.7`
### Providers
- `opsgenie`, version: `>= 0.6.7`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`api_integration` | 0.16.0 | [`cloudposse/incident-management/opsgenie//modules/api_integration`](https://registry.terraform.io/modules/cloudposse/incident-management/opsgenie/modules/api_integration/0.16.0) | n/a
`integration_name` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | Fully qualified integration name normalized
`ssm_parameter_store` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | Populate SSM Parameter Store with API Keys for OpsGenie API Integrations. These keys can either be used when setting up OpsGenie integrations manually, Or they can be used programmatically, if their respective Terraform provider supports it.
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`opsgenie_integration_action.datadog`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/resources/integration_action) (resource)
## Data Sources
The following data sources are used by this module:
- [`opsgenie_team.default`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/team) (data source)
---
## Routing
This module creates team routing rules: the initial rules applied to an alert to determine who gets notified. It also
creates incident service rules, which determine whether an alert is considered a service incident.
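A sketch of the inputs for a single alert routing rule, based on the variables documented below (the source path, team name, and all values are illustrative assumptions):

```hcl
module "routing" {
  source = "./modules/routing" # local submodule path within the component

  team_name = "sre"
  type      = "alert" # "alert" or "incident"
  order     = 0
  priority  = "P1"

  # Match alerts tagged with the hypothetical "service:checkout" tag
  criteria = {
    type = "match-all-conditions"
    conditions = [{
      field          = "tags"
      operation      = "contains"
      expected_value = "service:checkout"
    }]
  }

  # Route matching alerts to an on-call schedule
  notify = {
    type = "schedule"
    name = "sre_primary"
  }

  incident_properties = {}
}
```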
## Variables
### Required Variables
`criteria` required
Criteria of the Routing Rule, rules to match or not
**Type:**
```hcl
object({
type = string,
conditions = any
})
```
`incident_properties` (`map(any)`) required
Properties to override on the incident routing rule
`notify` (`map(any)`) required
Notification of team alerting rule
`order` (`number`) required
Order of the alerting rule
`priority` (`string`) required
Priority level of custom Incidents
### Optional Variables
`services` (`map(any)`) optional
Team services to associate with incident routing rules
**Default value:** `null`
`team_name` (`string`) optional
Current OpsGenie Team Name
**Default value:** `null`
`team_naming_format` (`string`) optional
OpsGenie Team Naming Format
**Default value:** `"%s_%s"`
`time_restriction` (`any`) optional
Time restriction of alert routing rule
**Default value:** `null`
`timezone` (`string`) optional
Timezone for this alerting route
**Default value:** `null`
`type` (`string`) optional
Type of Routing Rule Alert or Incident
**Default value:** `"alert"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`service_incident_rule`
Service incident rules for incidents
`team_routing_rule`
Team routing rules for alerts
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `opsgenie`, version: `>= 0.6.7`
### Providers
- `opsgenie`, version: `>= 0.6.7`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`service_incident_rule` | 0.16.0 | [`cloudposse/incident-management/opsgenie//modules/service_incident_rule`](https://registry.terraform.io/modules/cloudposse/incident-management/opsgenie/modules/service_incident_rule/0.16.0) | n/a
`serviceless_incident_rule` | 0.16.0 | [`cloudposse/incident-management/opsgenie//modules/service_incident_rule`](https://registry.terraform.io/modules/cloudposse/incident-management/opsgenie/modules/service_incident_rule/0.16.0) | n/a
`team_routing_rule` | 0.16.0 | [`cloudposse/incident-management/opsgenie//modules/team_routing_rule`](https://registry.terraform.io/modules/cloudposse/incident-management/opsgenie/modules/team_routing_rule/0.16.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`opsgenie_schedule.notification_schedule`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/schedule) (data source)
- [`opsgenie_service.incident_service`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/service) (data source)
- [`opsgenie_team.default`](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/data-sources/team) (data source)
---
## organization
This component is responsible for creating or importing a single AWS Organization.
Unlike the monolithic `account` component which manages the entire organization hierarchy,
this component follows the single-resource pattern - it only manages the AWS Organization itself.
:::note
This component should be deployed from the **management/root account** as it creates/manages
the AWS Organization.
:::
## Usage
**Stack Level**: Global (deployed in the management/root account)
### Basic Usage
```yaml
components:
terraform:
aws-organization:
vars:
enabled: true
aws_service_access_principals:
- cloudtrail.amazonaws.com
- guardduty.amazonaws.com
- ram.amazonaws.com
- sso.amazonaws.com
enabled_policy_types:
- SERVICE_CONTROL_POLICY
- TAG_POLICY
```
### Importing an Existing Organization
To import an existing AWS Organization:
1. Get the organization ID:
```bash
aws organizations describe-organization --query 'Organization.Id' --output text
```
2. Set the `import_resource_id` variable:
```yaml
components:
terraform:
aws-organization:
vars:
import_resource_id: "o-xxxxxxxxxx"
aws_service_access_principals:
- cloudtrail.amazonaws.com
- guardduty.amazonaws.com
enabled_policy_types:
- SERVICE_CONTROL_POLICY
```
3. Run `atmos terraform apply`
After successful import, you can remove the `import_resource_id` variable.
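The import wiring can be pictured as a Terraform `import` block keyed off `import_resource_id`. This is an illustrative sketch of what `imports.tf` might contain, not the component's actual code (block shape and conditional are assumptions):

```hcl
# Hypothetical sketch: conditionally import the organization when
# import_resource_id is set. Using for_each on an import block
# requires Terraform >= 1.7, matching this component's requirement.
import {
  for_each = var.import_resource_id != null ? toset([var.import_resource_id]) : toset([])
  to       = aws_organizations_organization.this
  id       = each.value
}
```

With `import_resource_id` unset, the `for_each` collection is empty and no import is attempted, which is why the variable can be removed after a successful apply.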
> **Note:** If you don't need import functionality, you can exclude `imports.tf` when vendoring the component.
## Related Components
This component is part of a suite of single-resource components for AWS Organizations:
| Component | Purpose |
|-----------|---------|
| `aws-organization` | Creates/imports the AWS Organization (this component) |
| `aws-organizational-unit` | Creates/imports a single OU |
| `aws-account` | Creates/imports a single AWS Account |
| `aws-account-settings` | Configures account settings |
| `aws-scp` | Creates/imports Service Control Policies |
## Variables
### Optional Variables
`aws_service_access_principals` (`list(string)`) optional
List of AWS service principal names for which you want to enable integration with your organization
**Default value:** `[ ]`
`enabled_policy_types` (`list(string)`) optional
List of Organizations policy types to enable in the Organization Root (e.g., SERVICE_CONTROL_POLICY, TAG_POLICY, BACKUP_POLICY, AISERVICES_OPT_OUT_POLICY)
**Default value:** `[ ]`
`feature_set` (`string`) optional
Feature set of the organization. One of 'ALL' or 'CONSOLIDATED_BILLING'
**Default value:** `"ALL"`
`import_resource_id` (`string`) optional
The ID of an existing AWS Organization to import. If set, the organization will be imported rather than created.
**Default value:** `null`
List of IAM features to enable. Valid values are 'RootCredentialsManagement' and 'RootSessions'. Set to empty list to disable.
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`master_account_arn`
The ARN of the master account
`master_account_email`
The email of the master account
`master_account_id`
The ID of the master account
`non_master_accounts`
List of non-master accounts in the organization
`organization_arn`
The ARN of the organization
`organization_enabled_features`
List of enabled IAM organization features
`organization_id`
The ID of the organization
`organization_root_id`
The ID of the organization root
`roots`
List of organization roots
## Dependencies
### Requirements
- `terraform`, version: `>= 1.7.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_organizations_features.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_organizations_features) (resource)
- [`aws_organizations_organization.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_organization) (resource)
## Data Sources
The following data sources are used by this module:
---
## organizational-unit
This component is responsible for creating or importing a single AWS Organizations Organizational Unit (OU).
Unlike the monolithic `account` component which manages the entire organization hierarchy,
this component follows the single-resource pattern - it only manages a single OU.
:::note
This component should be deployed from the **management/root account** as it creates OUs
within AWS Organizations.
:::
## Usage
**Stack Level**: Global (deployed in the management/root account)
### Basic Usage
```yaml
components:
terraform:
aws-organizational-unit/core:
metadata:
component: aws-organizational-unit
vars:
name: core
parent_id: !terraform.output aws-organization organization_root_id
```
### Using Remote State for Parent ID
Reference the organization root dynamically:
```yaml
components:
terraform:
aws-organizational-unit/core:
metadata:
component: aws-organizational-unit
vars:
name: core
parent_id: !terraform.output aws-organization organization_root_id
aws-organizational-unit/plat:
metadata:
component: aws-organizational-unit
vars:
name: plat
parent_id: !terraform.output aws-organization organization_root_id
```
### Importing an Existing OU
To import an existing Organizational Unit:
1. Get the OU ID:
```bash
aws organizations list-organizational-units-for-parent --parent-id r-xxxx
```
2. Set the `import_resource_id` variable:
```yaml
components:
terraform:
aws-organizational-unit/core:
metadata:
component: aws-organizational-unit
vars:
name: core
parent_id: "r-xxxx"
import_resource_id: "ou-xxxx-xxxxxxxx"
```
3. Run `atmos terraform apply`
After successful import, you can remove the `import_resource_id` variable.
> **Note:** If you don't need import functionality, you can exclude `imports.tf` when vendoring the component.
## Related Components
This component is part of a suite of single-resource components for AWS Organizations:
| Component | Purpose |
|-----------|---------|
| `aws-organization` | Creates/imports the AWS Organization |
| `aws-organizational-unit` | Creates/imports a single OU (this component) |
| `aws-account` | Creates/imports a single AWS Account |
| `aws-account-settings` | Configures account settings |
| `aws-scp` | Creates/imports Service Control Policies |
## Variables
### Required Variables
`parent_id` (`string`) required
The ID of the parent organizational unit or organization root
`region` (`string`) required
AWS Region
### Optional Variables
`import_resource_id` (`string`) optional
The ID of an existing Organizational Unit to import. If set, the OU will be imported rather than created.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`accounts`
List of accounts in this Organizational Unit
`organizational_unit_arn`
The ARN of the Organizational Unit
`organizational_unit_id`
The ID of the Organizational Unit
`organizational_unit_name`
The name of the Organizational Unit
`parent_id`
The parent ID of the Organizational Unit
## Dependencies
### Requirements
- `terraform`, version: `>= 1.7.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_organizations_organizational_unit.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_organizational_unit) (resource)
## Data Sources
The following data sources are used by this module:
---
## philips-labs-github-runners
This component provisions the surrounding infrastructure for GitHub self-hosted runners.
## Prerequisites
- GitHub App installed on the organization
- For more details see
[Philips Lab's Setting up a GitHub App](https://github.com/philips-labs/terraform-aws-github-runner/tree/main#setup-github-app-part-1)
- Ensure you create a **private key** and store it in SSM; this is **not** the same as a **Client Secret**. Private keys are generated at the bottom of the GitHub App settings page.
- GitHub App ID stored in SSM under `/pl-github-runners/id` (or the value of
  `var.github_app_id_ssm_path`)
- GitHub App Private Key stored in SSM (base64 encoded) under `/pl-github-runners/key` (or the value of
`var.github_app_key_ssm_path`)
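The prerequisites above can be satisfied with the AWS CLI. This is a sketch assuming the default SSM paths and a hypothetical key file name downloaded from the GitHub App settings page:

```shell
pem_file="github-app.private-key.pem"   # hypothetical file name; use the .pem downloaded from GitHub
if [ -f "$pem_file" ]; then
  # SSM expects the key base64-encoded on a single line; -w0 disables
  # line wrapping (GNU coreutils base64).
  key_b64="$(base64 -w0 < "$pem_file")"
  printf '%s\n' "$key_b64"
fi
```

The encoded key can then be stored with `aws ssm put-parameter --name /pl-github-runners/key --type SecureString --value "$key_b64"`, and the App ID likewise under `/pl-github-runners/id`.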
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
philips-labs-github-runners:
vars:
enabled: true
```
The following will be created:
- An API Gateway
- Lambdas
- An SQS queue
- EC2 launch template instances
The API Gateway is registered as a webhook in the GitHub App. The Lambdas then scale the EC2 instances up or down based on the number of messages in the SQS queue.
## Modules
### `webhook-github-app`
This is a fork of https://github.com/philips-labs/terraform-aws-github-runner/tree/main/modules/webhook-github-app.
We maintain a customized version until the following PR is merged, because without it the upstream module does not update the GitHub App webhook:
- https://github.com/philips-labs/terraform-aws-github-runner/pull/3625
This module requires the `GH_TOKEN` environment variable to be set to a valid GitHub token.
This module also requires the `gh` CLI to be installed. Your Dockerfile can be updated to include the following to
install it:
```dockerfile
ARG GH_CLI_VERSION=2.39.1
# ...
ARG GH_CLI_VERSION
RUN apt-get update && apt-get install -y --allow-downgrades \
gh="${GH_CLI_VERSION}-*"
```
By default, we leave this disabled, as it requires a GitHub token to be set. You can enable it by setting
`var.enable_update_github_app_webhook` to `true`. When enabled, it updates the GitHub App webhook to point to the
API Gateway. This is useful when the API Gateway is deleted and recreated, which changes the webhook endpoint.
When disabled, you will need to manually update the GitHub App webhook to point to the API Gateway. This is output by
the component, and available via the `webhook` output under `endpoint`.
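A sketch of extracting that endpoint for pasting into the GitHub App settings. The JSON below is a sample payload; in practice it would come from the component's outputs (e.g. via `atmos terraform output philips-labs-github-runners -s <stack>`, stack name hypothetical). Assumes `jq` is installed:

```shell
# Sample of the `webhook` output map; the real value comes from Terraform outputs.
output_json='{"webhook":{"endpoint":"https://abc123.execute-api.us-east-1.amazonaws.com/webhook"}}'

# Pull out the endpoint field with jq (-r emits the raw string, unquoted).
endpoint="$(printf '%s' "$output_json" | jq -r '.webhook.endpoint')"
printf '%s\n' "$endpoint"
```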
## Variables
### Optional Variables
Default lifecycle used for runner instances, can be either `spot` or `on-demand`.
**Default value:** `"spot"`
`release_version` (`string`) optional
Version of the application
**Default value:** `"v5.4.0"`
`runner_extra_labels` (`list(string)`) optional
Extra (custom) labels for the runners (GitHub). Label checks on the webhook can be enforced by setting `enable_workflow_job_labels_check`. GitHub read-only labels should not be provided.
**Default value:**
```hcl
[
"default"
]
```
Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations.
**Default value:** `-1`
`ssm_paths` optional
The root path used in SSM to store configuration and secrets.
**Type:**
```hcl
object({
root = optional(string, "github-action-runners")
app = optional(string, "app")
runners = optional(string, "runners")
use_prefix = optional(bool, true)
})
```
**Default value:** `{ }`
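For example, to shorten the SSM prefix (values hypothetical):

```yaml
components:
  terraform:
    philips-labs-github-runners:
      vars:
        ssm_paths:
          root: gh-runners
          use_prefix: false
```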
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
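The effect of the default pattern can be sketched outside Terraform (Python is used purely for illustration; Terraform wraps the pattern in `/.../` delimiters, which are omitted here):
```python
import re

# Default regex_replace_chars behavior: strip everything except letters, digits, and hyphens
pattern = "[^a-zA-Z0-9-]"

# A hypothetical label value containing disallowed characters
cleaned = re.sub(pattern, "", "my_app.v2")
print(cleaned)  # myappv2
```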
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`github_runners`
Information about the GitHub runners.
`queues`
Information about the GitHub runner queues, such as `build_queue_arn`, the ARN of the SQS queue used for the build queue.
`ssm_parameters`
Information about the SSM parameters to use to register the runner.
`webhook`
Information about the webhook to use to register the runner.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `local`, version: `>= 2.4.0`
- `random`, version: `>= 3.0`
### Providers
- `random`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`github_runner` | 6.1.0 | [`philips-labs/github-runner/aws`](https://registry.terraform.io/modules/philips-labs/github-runner/aws/6.1.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`module_artifact` | 0.8.0 | [`cloudposse/module-artifact/external`](https://registry.terraform.io/modules/cloudposse/module-artifact/external/0.8.0) | n/a
`store_read` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`webhook_github_app` | 6.1.0 | [`philips-labs/github-runner/aws//modules/webhook-github-app`](https://registry.terraform.io/modules/philips-labs/github-runner/aws/modules/webhook-github-app/6.1.0) | n/a
## Resources
The following resources are used by this module:
- [`random_id.webhook_secret`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id) (resource)
## Data Sources
The following data sources are used by this module:
---
## private-link-service
This component provisions AWS VPC Endpoint Services (**provider side**) to expose **YOUR services** to external consumers via AWS PrivateLink.
## What This Component Does
**You are the PROVIDER** - This component creates the infrastructure to expose your services (EKS pods, RDS databases, APIs) to other AWS accounts or VPCs.
```
Your AWS Account (PROVIDER) Consumer's AWS Account
┌─────────────────────────────┐ ┌──────────────────────────┐
│ Your Services │ │ Their Applications │
│ - EKS pods │ │ - Airflow (Astronomer) │
│ - RDS databases │ │ - External systems │
│ - Internal APIs │ │ - Partner services │
│ ↓ │ │ ↑ │
│ Network Load Balancer ──────┼─────────┼─────────┘ │
│ ↓ │ AWS │ │
│ VPC Endpoint Service │ Private │ VPC Endpoint │
│ (this component) │ Link │ (they create) │
│ com.amazonaws.vpce... │ │ │
└─────────────────────────────┘ └──────────────────────────┘
```
**Key Point**: The consumer (e.g., Astronomer) creates a VPC Endpoint in their account that connects to YOUR VPC Endpoint Service. Traffic flows privately over AWS's network, never touching the internet.
## Astronomer Integration
This example shows the full workflow for exposing your EKS services to Astronomer's Airflow cluster via PrivateLink.
### Architecture
```
Astronomer's AWS Account YOUR AWS Account
┌──────────────────────────┐ ┌─────────────────────────────────┐
│ Airflow Workers │ │ EKS Cluster │
│ (run DAGs) │ │ │
│ ↓ │ │ Pods labeled: │
│ VPC Endpoint ────────────┼───Private─────┼→ astronomer: enabled │
│ (Astronomer creates) │ Link │ ↓ │
│ │ │ NLB (eks/nlb component) │
│ │ │ ↓ │
│ │ │ VPC Endpoint Service │
│ │ │ (this component) │
└──────────────────────────┘ └─────────────────────────────────┘
```
### Step 1: Label Your EKS Pods
First, tag the pods you want to expose to Astronomer:
```yaml
components:
terraform:
eks/echo-server:
vars:
# ...
chart_values:
labels:
astronomer: enabled # ← This label exposes pods to Astronomer
```
### Step 2: Create NLB via AWS Load Balancer Controller
Deploy an NLB that targets your labeled pods:
```yaml
components:
terraform:
eks/nlb/astronomer:
metadata:
component: eks/nlb
vars:
enabled: true
name: "nlb"
attributes: ["astronomer"]
# Target pods with the astronomer label
nlb_selector:
astronomer: enabled
```
### Step 3: Create VPC Endpoint Service
Now expose the NLB via PrivateLink:
```yaml
components:
terraform:
private-link-service/astronomer:
metadata:
component: private-link-service
vars:
enabled: true
name: "private-link-service"
attributes: ["astronomer"]
# Reference the NLB created in Step 2
vpc_endpoint_service_network_load_balancer_arns:
- !terraform.output eks/nlb/astronomer nlb_arn
# Allow Astronomer's AWS account (get from their support)
vpc_endpoint_service_allowed_principals:
- "arn:aws:iam::ASTRONOMER-ACCOUNT-ID:role/astronomer-remote-management"
```
### Step 4: Share Service Name with Astronomer
Get the VPC Endpoint Service name from the component's Terraform outputs:
```bash
atmos terraform output private-link-service/astronomer -s <stack>
# vpc_endpoint_service_name = "com.amazonaws.vpce.us-west-2.vpce-svc-0abc123def456789"
```
Share this service name with Astronomer support so they can create the VPC Endpoint in their account.
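For context, the consumer side of the connection amounts to creating an interface VPC endpoint against this service name in their own account. A hedged sketch using the AWS CLI, where every ID is a placeholder:
```bash
# Run in the CONSUMER account; all IDs below are placeholders
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.vpce.us-west-2.vpce-svc-0abc123def456789 \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --security-group-ids sg-0123456789abcdef0
```
The endpoint then appears as a pending connection on your VPC Endpoint Service until it is accepted (or auto-accepted, depending on your configuration).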
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
private-link-service:
vars:
enabled: true
name: "private-link-service"
vpc_endpoint_service_network_load_balancer_arns:
- !terraform.output eks/nlb nlb_arn
# Get customer AWS account ID or role ARN from their support team
# Example (get from Astronomer support):
vpc_endpoint_service_allowed_principals:
- "arn:aws:iam::ASTRONOMER-ACCOUNT-ID:role/astronomer-remote-management"
```
## Variables
### Required Variables
The supported IP address types. Valid values: ipv4, ipv6
**Default value:**
```hcl
[
"ipv4"
]
```
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`endpoint_events_sns_topic_arn`
The ARN of the SNS topic for endpoint connection events
`vpc_endpoint_service_arn`
The ARN of the VPC endpoint service
`vpc_endpoint_service_id`
The ID of the VPC endpoint service
`vpc_endpoint_service_name`
The service name that consumers use to connect
`vpc_endpoint_service_state`
The state of the VPC endpoint service
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0`
### Providers
- `aws`, version: `>= 4.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_sns_topic.endpoint_events`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic) (resource)
- [`aws_vpc_endpoint_connection_notification.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint_connection_notification) (resource)
- [`aws_vpc_endpoint_service.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint_service) (resource)
- [`aws_vpc_endpoint_service_allowed_principal.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint_service_allowed_principal) (resource)
## Data Sources
The following data sources are used by this module:
---
## rds
This component is responsible for provisioning an RDS instance. It seeds relevant database information (hostnames,
username, password, etc.) into AWS SSM Parameter Store.
Security Groups Guidance:
By default this component creates a client security group and adds that security group ID to the default attached security group. Other AWS resources that require RDS access can then be granted this client security group. Additionally, you can grant access via specific CIDR blocks or security group IDs.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
### PostgreSQL
```yaml
components:
terraform:
rds/defaults:
metadata:
type: abstract
vars:
enabled: true
use_fullname: false
name: my-postgres-db
instance_class: db.t3.micro
database_name: my-postgres-db
# database_user: admin # enable to specify something specific
engine: postgres
engine_version: "15.2"
database_port: 5432
db_parameter_group: "postgres15"
allocated_storage: 10 #GBs
ssm_enabled: true
client_security_group_enabled: true
## The following settings allow the database to be accessed from anywhere
# publicly_accessible: true
# use_private_subnets: false
# allowed_cidr_blocks:
# - 0.0.0.0/0
```
### Microsoft SQL
```yaml
components:
terraform:
rds:
vars:
enabled: true
name: mssql
# SQL Server 2017 Enterprise
engine: sqlserver-ee
engine_version: "14.00.3356.20"
db_parameter_group: "sqlserver-ee-14.0"
license_model: license-included
# Required for MSSQL
database_name: null
database_port: 1433
database_user: mssql
instance_class: db.t3.xlarge
# There are issues with enabling this
multi_az: false
allocated_storage: 20
publicly_accessible: false
ssm_enabled: true
# This does not seem to work correctly
deletion_protection: false
```
### Provisioning from a snapshot
The `snapshot_identifier` variable can be added to provision an instance from a snapshot. However, keep in mind that these instances are provisioned with a unique KMS key per RDS instance. For clean Terraform runs, you must first provision the key for the destination instance, then copy the snapshot using that KMS key.
Example - I want a new instance `rds-example-new` to be provisioned from a snapshot of `rds-example-old`:
1. Use the console to manually make a snapshot of rds instance `rds-example-old`
1. Provision the KMS key for `rds-example-new`
```bash
atmos terraform plan rds-example-new -s ue1-staging '-target=module.kms_key_rds.aws_kms_key.default[0]'
atmos terraform apply rds-example-new -s ue1-staging '-target=module.kms_key_rds.aws_kms_key.default[0]'
```
1. Use the console to copy the snapshot to a new name using the above provisioned kms key
1. Add `snapshot_identifier` variable to `rds-example-new` catalog and specify the newly copied snapshot that used the
above key
1. After provisioning, remove the `snapshot_identifier` variable and verify Terraform runs clean for the copied instance
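The catalog change in the step above that adds `snapshot_identifier` would look roughly like this (the snapshot name is a placeholder for whatever you named the copied snapshot):
```yaml
components:
  terraform:
    rds-example-new:
      vars:
        # Snapshot copied with the KMS key provisioned for rds-example-new (placeholder name)
        snapshot_identifier: "rds-example-old-copied-with-new-key"
```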
## Variables
### Required Variables
`allocated_storage` (`number`) required
The allocated storage in GBs
`database_name` (`string`) required
The name of the database to create when the DB instance is created
`database_port` (`number`) required
Database port (_e.g._ `3306` for `MySQL`). Used in the DB Security Group to allow access to the DB instance from the provided `security_group_ids`
`db_parameter_group` (`string`) required
The DB parameter group family name. The value depends on DB engine used. See [DBParameterGroupFamily](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBParameterGroup.html#API_CreateDBParameterGroup_RequestParameters) for instructions on how to retrieve applicable value.
`engine` (`string`) required
Database engine type
`engine_version` (`string`) required
Database engine version, depends on engine type
`instance_class` (`string`) required
Class of RDS instance
`region` (`string`) required
AWS Region
### Optional Variables
`allow_major_version_upgrade` (`bool`) optional
Allow major version upgrade
**Default value:** `false`
`allowed_cidr_blocks` (`list(string)`) optional
The whitelisted CIDRs which to allow `ingress` traffic to the DB instance
**Default value:** `[ ]`
`apply_immediately` (`bool`) optional
Specifies whether any database modifications are applied immediately, or during the next maintenance window
**Default value:** `false`
The IDs of the existing security groups to associate with the DB instance
**Default value:** `[ ]`
`auto_minor_version_upgrade` (`bool`) optional
Allow automated minor version upgrade (e.g. from Postgres 9.5.3 to Postgres 9.5.4)
**Default value:** `true`
`availability_zone` (`string`) optional
The AZ for the RDS instance. Specify one of `subnet_ids`, `db_subnet_group_name` or `availability_zone`. If `availability_zone` is provided, the instance will be placed into the default VPC or EC2 Classic
**Default value:** `null`
`backup_retention_period` (`number`) optional
Backup retention period in days. Must be > 0 to enable backups
**Default value:** `0`
`backup_window` (`string`) optional
The daily time range during which AWS can perform DB snapshots; must not overlap with the maintenance window
**Default value:** `"22:00-03:00"`
`ca_cert_identifier` (`string`) optional
The identifier of the CA certificate for the DB instance
**Default value:** `null`
`charset_name` (`string`) optional
The character set name to use for DB encoding. [Oracle & Microsoft SQL only](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#character_set_name). For other engines use `db_parameter`
**Default value:** `null`
`client_security_group_enabled` (`bool`) optional
Create a client security group and include it in the attached default security group
**Default value:** `true`
`copy_tags_to_snapshot` (`bool`) optional
Copy tags from DB to a snapshot
**Default value:** `true`
`database_password` (`string`) optional
Database password for the admin user
**Default value:** `""`
`database_user` (`string`) optional
Database admin user name
**Default value:** `""`
`db_options` optional
A list of DB options to apply with an option group. Depends on DB engine
**Type:**
```hcl
list(object({
db_security_group_memberships = list(string)
option_name = string
port = number
version = string
vpc_security_group_memberships = list(string)
option_settings = list(object({
name = string
value = string
}))
}))
```
**Default value:** `[ ]`
`db_parameter` optional
A list of DB parameters to apply. Note that parameters may differ from a DB family to another
**Type:**
```hcl
list(object({
apply_method = string
name = string
value = string
}))
```
**Default value:** `[ ]`
`db_subnet_group_name` (`string`) optional
Name of DB subnet group. DB instance will be created in the VPC associated with the DB subnet group. Specify one of `subnet_ids`, `db_subnet_group_name` or `availability_zone`
**Default value:** `null`
`deletion_protection` (`bool`) optional
Set to true to enable deletion protection on the RDS instance
**Default value:** `false`
List of log types to enable for exporting to CloudWatch logs. If omitted, no logs will be exported. Valid values (depending on engine): alert, audit, error, general, listener, slowquery, trace, postgresql (PostgreSQL), upgrade (PostgreSQL).
**Default value:** `[ ]`
`final_snapshot_identifier` (`string`) optional
Final snapshot identifier e.g.: some-db-final-snapshot-2019-06-26-06-05
**Default value:** `""`
`host_name` (`string`) optional
The DB host name created in Route53
**Default value:** `"db"`
Specifies whether mappings of AWS Identity and Access Management (IAM) accounts to database accounts are enabled
**Default value:** `false`
`iops` (`number`) optional
The amount of provisioned IOPS. Setting this implies a storage_type of 'io1'. Default is 0 if rds storage type is not 'io1'
**Default value:** `0`
`kms_alias_name_ssm` (`string`) optional
KMS alias name for SSM
**Default value:** `"alias/aws/ssm"`
`kms_key_arn` (`string`) optional
The ARN of the existing KMS key to encrypt storage
**Default value:** `""`
`license_model` (`string`) optional
License model for this DB. Optional, but required for some DB Engines. Valid values: license-included | bring-your-own-license | general-public-license
**Default value:** `""`
`maintenance_window` (`string`) optional
The window to perform maintenance in. Syntax: 'ddd:hh24:mi-ddd:hh24:mi' UTC
**Default value:** `"Mon:03:00-Mon:04:00"`
`major_engine_version` (`string`) optional
Database MAJOR engine version, depends on engine type
**Default value:** `""`
`max_allocated_storage` (`number`) optional
The upper limit to which RDS can automatically scale the storage in GBs
**Default value:** `0`
`monitoring_interval` (`string`) optional
The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the DB instance. To disable collecting Enhanced Monitoring metrics, specify 0. Valid Values are 0, 1, 5, 10, 15, 30, 60.
**Default value:** `"0"`
`monitoring_role_arn` (`string`) optional
The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to CloudWatch Logs
**Default value:** `null`
`multi_az` (`bool`) optional
Set to true if multi AZ deployment must be supported
**Default value:** `false`
`option_group_name` (`string`) optional
Name of the DB option group to associate
**Default value:** `""`
`parameter_group_name` (`string`) optional
Name of the DB parameter group to associate
**Default value:** `""`
`performance_insights_enabled` (`bool`) optional
Specifies whether Performance Insights are enabled.
**Default value:** `false`
The amount of time in days to retain Performance Insights data. Either 7 (7 days) or 731 (2 years).
**Default value:** `7`
`publicly_accessible` (`bool`) optional
Determines if database can be publicly available (NOT recommended)
**Default value:** `false`
`replicate_source_db` (`any`) optional
If the rds db instance is a replica, supply the source database identifier here
**Default value:** `null`
`security_group_ids` (`list(string)`) optional
The IDs of the security groups from which to allow `ingress` traffic to the DB instance
**Default value:** `[ ]`
`skip_final_snapshot` (`bool`) optional
If true (default), no snapshot will be made before deleting DB
**Default value:** `true`
`snapshot_identifier` (`string`) optional
Snapshot identifier, e.g. `rds:production-2019-06-26-06-05`. If specified, the module creates the instance from the snapshot
**Default value:** `null`
`ssm_enabled` (`bool`) optional
If `true` create SSM keys for the database user and password.
**Default value:** `false`
`ssm_key_format` (`string`) optional
SSM path format. The values will be used in the following order: `var.ssm_key_prefix`, `var.name`, `var.ssm_key_*`
**Default value:** `"/%v/%v/%v"`
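To make the path composition concrete (the values below are examples; Terraform's `format()` with `%v` behaves like simple string interpolation here, sketched in Python):
```python
# Compose an SSM parameter path the way the component does:
# ssm_key_format is filled with ssm_key_prefix, then name, then the specific key
ssm_key_format = "/%v/%v/%v"
ssm_key_prefix = "rds"               # default prefix
name = "my-postgres-db"              # component name from the stack config
ssm_key_password = "admin/db_password"

path = ssm_key_format.replace("%v", "{}").format(ssm_key_prefix, name, ssm_key_password)
print(path)  # /rds/my-postgres-db/admin/db_password
```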
`ssm_key_hostname` (`string`) optional
The SSM key to save the hostname. See `var.ssm_path_format`.
**Default value:** `"admin/db_hostname"`
`ssm_key_password` (`string`) optional
The SSM key to save the password. See `var.ssm_path_format`.
**Default value:** `"admin/db_password"`
`ssm_key_port` (`string`) optional
The SSM key to save the port. See `var.ssm_path_format`.
**Default value:** `"admin/db_port"`
`ssm_key_prefix` (`string`) optional
SSM path prefix. Omit the leading forward slash `/`.
**Default value:** `"rds"`
`ssm_key_user` (`string`) optional
The SSM key to save the user. See `var.ssm_path_format`.
**Default value:** `"admin/db_user"`
`storage_encrypted` (`bool`) optional
Specifies whether the DB instance storage is encrypted
**Default value:** `true`
`storage_throughput` (`number`) optional
The storage throughput value for the DB instance. Can only be set when `storage_type` is `gp3`. Cannot be specified if the `allocated_storage` value is below a per-engine threshold.
**Default value:** `null`
`storage_type` (`string`) optional
One of 'standard' (magnetic), 'gp2' (general purpose SSD), 'gp3' (general purpose SSD), or 'io1' (provisioned IOPS SSD)
**Default value:** `"standard"`
`timezone` (`string`) optional
Time zone of the DB instance. timezone is currently only supported by Microsoft SQL Server. The timezone can only be set on creation. See [MSSQL User Guide](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.TimeZone) for more information.
**Default value:** `null`
`use_dns_delegated` (`bool`) optional
Use the dns-delegated dns_zone_id
**Default value:** `false`
`use_eks_security_group` (`bool`) optional
Use the eks default security group
**Default value:** `false`
`use_private_subnets` (`bool`) optional
Use private subnets
**Default value:** `true`
`vpc_component_name` (`string`) optional
VPC component name
**Default value:** `"vpc"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`exports`
Map of exports for use in deployment configuration templates
`kms_key_alias`
The KMS key alias
`psql_helper`
A helper output to use with psql for connecting to this RDS instance.
`rds_address`
Address of the instance
`rds_arn`
ARN of the instance
`rds_database_ssm_key_prefix`
SSM prefix
`rds_endpoint`
DNS Endpoint of the instance
`rds_hostname`
DNS host name of the instance
`rds_id`
ID of the instance
`rds_name`
RDS DB name
`rds_option_group_id`
ID of the Option Group
`rds_parameter_group_id`
ID of the Parameter Group
`rds_port`
RDS DB port
`rds_resource_id`
The RDS Resource ID of this instance.
`rds_security_group_id`
ID of the Security Group
`rds_subnet_group_id`
ID of the created Subnet Group
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 2.3`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 2.3`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`kms_key_rds` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`rds_client_sg` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`rds_instance` | 1.2.0 | [`cloudposse/rds/aws`](https://registry.terraform.io/modules/cloudposse/rds/aws/1.2.0) | n/a
`rds_monitoring_role` | 0.23.0 | [`cloudposse/iam-role/aws`](https://registry.terraform.io/modules/cloudposse/iam-role/aws/0.23.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.rds_database_hostname`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.rds_database_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.rds_database_port`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.rds_database_user`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`random_password.database_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
- [`random_pet.database_user`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.kms_key_rds`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
---
## redshift
This component provisions an AWS Redshift cluster and seeds relevant database
information (hostnames, username, password, etc.) into AWS SSM Parameter Store.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
redshift:
vars:
enabled: true
name: redshift
database_name: redshift
publicly_accessible: false
node_type: dc2.large
number_of_nodes: 1
cluster_type: single-node
ssm_enabled: true
log_exports:
- userlog
- connectionlog
- useractivitylog
admin_user: redshift
custom_sg_enabled: true
custom_sg_rules:
- type: ingress
key: postgres
description: Allow inbound traffic to the redshift cluster
from_port: 5439
to_port: 5439
protocol: tcp
cidr_blocks:
- 10.0.0.0/8
```
## Variables
### Required Variables
`region` (`string`) required
AWS region
### Optional Variables
`admin_password` (`string`) optional
Password for the master DB user. Required unless a snapshot_identifier is provided
**Default value:** `null`
`admin_user` (`string`) optional
Username for the master DB user. Required unless a snapshot_identifier is provided
**Default value:** `null`
`allow_version_upgrade` (`bool`) optional
Whether or not to enable major version upgrades which are applied during the maintenance window to the Amazon Redshift engine that is running on the cluster
**Default value:** `false`
`cluster_type` (`string`) optional
The cluster type to use. Either `single-node` or `multi-node`
**Default value:** `"single-node"`
`custom_sg_allow_all_egress` (`bool`) optional
Whether to allow all egress traffic or not
**Default value:** `true`
`custom_sg_enabled` (`bool`) optional
Whether or not to use a custom security group
**Default value:** `false`
`custom_sg_rules` optional
An array of custom security group rules to create and assign to the cluster.
**Type:**
```hcl
list(object({
key = string
type = string
from_port = number
to_port = number
protocol = string
cidr_blocks = list(string)
description = string
}))
```
**Default value:** `[ ]`
`database_name` (`string`) optional
The name of the first database to be created when the cluster is created
**Default value:** `null`
`engine_version` (`string`) optional
The version of the Amazon Redshift engine to use. See https://docs.aws.amazon.com/redshift/latest/mgmt/cluster-versions.html
**Default value:** `"1.0"`
`kms_alias_name_ssm` (`string`) optional
KMS alias name for SSM
**Default value:** `"alias/aws/ssm"`
`node_type` (`string`) optional
The node type to be provisioned for the cluster. See https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#working-with-clusters-overview
**Default value:** `"dc2.large"`
`number_of_nodes` (`number`) optional
The number of compute nodes in the cluster. This parameter is required when the ClusterType parameter is specified as multi-node
**Default value:** `1`
`port` (`number`) optional
The port number on which the cluster accepts incoming connections
**Default value:** `5439`
`publicly_accessible` (`bool`) optional
If true, the cluster can be accessed from a public network
**Default value:** `false`
`security_group_ids` (`list(string)`) optional
An array of security group IDs to associate with the endpoint.
**Default value:** `null`
`ssm_enabled` (`bool`) optional
If `true` create SSM keys for the database user and password.
**Default value:** `false`
`ssm_key_format` (`string`) optional
SSM path format. The values will be used in the following order: `var.ssm_key_prefix`, `var.name`, `var.ssm_key_*`
**Default value:** `"/%v/%v/%v"`
`ssm_key_hostname` (`string`) optional
The SSM key to save the hostname. See `var.ssm_path_format`.
**Default value:** `"admin/db_hostname"`
`ssm_key_password` (`string`) optional
The SSM key to save the password. See `var.ssm_path_format`.
**Default value:** `"admin/db_password"`
`ssm_key_port` (`string`) optional
The SSM key to save the port. See `var.ssm_path_format`.
**Default value:** `"admin/db_port"`
`ssm_key_prefix` (`string`) optional
SSM path prefix. Omit the leading forward slash `/`.
**Default value:** `"redshift"`
`ssm_key_user` (`string`) optional
The SSM key to save the user. See `var.ssm_path_format`.
**Default value:** `"admin/db_user"`
`use_private_subnets` (`bool`) optional
Whether to use private or public subnets for the Redshift cluster
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
Amazon Resource Name (ARN) of cluster
`cluster_identifier`
The Cluster Identifier
`cluster_security_groups`
The security groups associated with the cluster
`database_name`
The name of the default database in the Cluster
`dns_name`
The DNS name of the cluster
`endpoint`
The connection endpoint
`id`
The Redshift Cluster ID
`port`
The Port the cluster responds on
`redshift_database_ssm_key_prefix`
SSM prefix
`vpc_security_group_ids`
The VPC security group IDs associated with the cluster
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `aws`, version: `>= 4.17, < 6.0.0`
- `random`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 4.17, < 6.0.0`
- `random`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`redshift_cluster` | 1.3.1 | [`cloudposse/redshift-cluster/aws`](https://registry.terraform.io/modules/cloudposse/redshift-cluster/aws/1.3.1) | n/a
`redshift_sg` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.redshift_database_hostname`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.redshift_database_name`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.redshift_database_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.redshift_database_port`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.redshift_database_user`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`random_password.admin_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
- [`random_pet.admin_user`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
## Data Sources
The following data sources are used by this module:
---
## redshift-serverless
This component is responsible for provisioning Redshift Serverless clusters.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
```yaml
components:
terraform:
redshift-serverless:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: redshift-serverless
admin_user: admin
database_name: dev
```
## Variables
### Required Variables
`region` (`string`) required
AWS region
### Optional Variables
`admin_password` (`string`) optional
Password for the master DB user. Required unless a snapshot_identifier is provided
**Default value:** `null`
`admin_user` (`string`) optional
Username for the master DB user. Required unless a snapshot_identifier is provided
**Default value:** `null`
`base_capacity` (`number`) optional
The base data warehouse capacity (4 minimum) of the workgroup in Redshift Processing Units (RPUs).
**Default value:** `4`
`config_parameter` optional
A list of Redshift config parameters to apply to the workgroup.
**Type:**
```hcl
list(object({
parameter_key = string
parameter_value = any
}))
```
**Default value:** `[ ]`
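For example, a minimal sketch of setting a workgroup config parameter (the `enable_user_activity_logging` key shown here is an illustrative Redshift Serverless parameter, not a default of this component):

```yaml
components:
  terraform:
    redshift-serverless:
      vars:
        config_parameter:
          - parameter_key: "enable_user_activity_logging"
            parameter_value: "true"
```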
`custom_sg_allow_all_egress` (`bool`) optional
Whether to allow all egress traffic or not
**Default value:** `true`
`custom_sg_enabled` (`bool`) optional
Whether or not to use a custom security group
**Default value:** `false`
`custom_sg_rules` optional
Custom security group rules
**Type:**
```hcl
list(object({
key = string
type = string
from_port = number
to_port = number
protocol = string
cidr_blocks = list(string)
description = string
}))
```
**Default value:** `[ ]`
`database_name` (`string`) optional
The name of the first database to be created when the cluster is created
**Default value:** `null`
`default_iam_role_arn` (`string`) optional
The Amazon Resource Name (ARN) of the IAM role to set as a default in the namespace
**Default value:** `null`
`endpoint_name` (`string`) optional
Endpoint name for the Redshift endpoint. If `null`, it is set to `$stage-$name`.
**Default value:** `null`
`enhanced_vpc_routing` (`bool`) optional
The value that specifies whether to turn on enhanced virtual private cloud (VPC) routing, which forces Amazon Redshift Serverless to route traffic through your VPC instead of over the internet.
**Default value:** `true`
`iam_roles` (`list(string)`) optional
A list of IAM roles to associate with the namespace.
**Default value:** `[ ]`
`import_profile_name` (`string`) optional
AWS Profile name to use when importing a resource
**Default value:** `null`
`import_role_arn` (`string`) optional
IAM Role ARN to use when importing a resource
**Default value:** `null`
`kms_alias_name_ssm` (`string`) optional
KMS alias name for SSM
**Default value:** `"alias/aws/ssm"`
`kms_key_id` (`string`) optional
The ARN of the Amazon Web Services Key Management Service key used to encrypt your data.
**Default value:** `null`
`log_exports` (`set(string)`) optional
The types of logs the namespace can export. Available export types are `userlog`, `connectionlog`, and `useractivitylog`.
**Default value:** `[ ]`
`publicly_accessible` (`bool`) optional
If true, the cluster can be accessed from a public network
**Default value:** `false`
`security_group_ids` (`list(string)`) optional
An array of security group IDs to associate with the endpoint.
**Default value:** `null`
`ssm_path_prefix` (`string`) optional
SSM path prefix (without leading or trailing slash)
**Default value:** `"redshift"`
`use_private_subnets` (`bool`) optional
Whether to use private or public subnets for the Redshift cluster
**Default value:** `true`
An array of security group IDs to associate with the workgroup.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`endpoint_address`
The DNS address of the VPC endpoint.
`endpoint_arn`
Amazon Resource Name (ARN) of the Redshift Serverless Endpoint Access.
`endpoint_id`
The Redshift Endpoint Access Name.
`endpoint_name`
Endpoint Name.
`endpoint_port`
The port that Amazon Redshift Serverless listens on.
`endpoint_subnet_ids`
Subnets used in redshift serverless endpoint.
`endpoint_vpc_endpoint`
The VPC endpoint for the Redshift Serverless workgroup. See VPC Endpoint below.
`namespace_arn`
Amazon Resource Name (ARN) of the Redshift Serverless Namespace.
`namespace_id`
The Redshift Namespace Name.
`namespace_namespace_id`
The Redshift Namespace ID.
`workgroup_arn`
Amazon Resource Name (ARN) of the Redshift Serverless Workgroup.
`workgroup_endpoint`
The Redshift Serverless Endpoint.
`workgroup_id`
The Redshift Workgroup Name.
`workgroup_workgroup_id`
The Redshift Workgroup ID.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`redshift_sg` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_redshiftserverless_endpoint_access.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/redshiftserverless_endpoint_access) (resource)
- [`aws_redshiftserverless_namespace.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/redshiftserverless_namespace) (resource)
- [`aws_redshiftserverless_workgroup.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/redshiftserverless_workgroup) (resource)
- [`aws_ssm_parameter.admin_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.admin_user`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.endpoint`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`random_password.admin_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
- [`random_pet.admin_user`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) (resource)
## Data Sources
The following data sources are used by this module:
---
## route53-resolver-dns-firewall
This component is responsible for provisioning
[Route 53 Resolver DNS Firewall](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-dns-firewall.html)
resources, including Route 53 Resolver DNS Firewall, domain lists, firewall rule groups, firewall rules, and logging
configuration.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
# stacks/catalog/route53-resolver-dns-firewall/defaults.yaml
components:
terraform:
route53-resolver-dns-firewall/defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
firewall_fail_open: "ENABLED"
query_log_enabled: true
logs_bucket_component_name: "route53-resolver-dns-firewall-logs-bucket"
domains_config:
allowed-domains:
# Concat the lists of domains passed in the `domains` field and loaded from the file `domains_file`
# The file is in the `components/terraform/route53-resolver-dns-firewall/config` folder
domains_file: "config/allowed_domains.txt"
domains: []
blocked-domains:
# Concat the lists of domains passed in the `domains` field and loaded from the file `domains_file`
# The file is in the `components/terraform/route53-resolver-dns-firewall/config` folder
domains_file: "config/blocked_domains.txt"
domains: []
rule_groups_config:
blocked-and-allowed-domains:
# 'priority' must be between 100 and 9900 exclusive
priority: 101
rules:
allowed-domains:
firewall_domain_list_name: "allowed-domains"
# 'priority' must be between 100 and 9900 exclusive
priority: 101
action: "ALLOW"
blocked-domains:
firewall_domain_list_name: "blocked-domains"
# 'priority' must be between 100 and 9900 exclusive
priority: 200
action: "BLOCK"
block_response: "NXDOMAIN"
```
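The `domains_file` entries referenced above are plain-text files with one domain per line. As an illustrative sketch (these entries are hypothetical), `config/blocked_domains.txt` could look like:

```text
malware.example.com
*.phishing.example.net
```

A leading `*.` matches all subdomains of the listed domain.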
```yaml
# stacks/mixins/stage/dev.yaml
import:
- catalog/route53-resolver-dns-firewall/defaults
components:
terraform:
route53-resolver-dns-firewall/example:
metadata:
component: route53-resolver-dns-firewall
inherits:
- route53-resolver-dns-firewall/defaults
vars:
name: route53-dns-firewall-example
vpc_component_name: vpc
```
Execute the following command to provision the `route53-resolver-dns-firewall/example` component using Atmos:
```shell
atmos terraform apply route53-resolver-dns-firewall/example -s <stack>
```
## Variables
### Required Variables
`domains_config` required
Map of Route 53 Resolver DNS Firewall domain configurations
**Type:**
```hcl
map(object({
domains = optional(list(string))
domains_file = optional(string)
}))
```
`vpc_component_name` (`string`) required
The name of a VPC component where the Route 53 Resolver DNS Firewall is provisioned
### Optional Variables
`firewall_fail_open` (`string`) optional
Determines how Route 53 Resolver handles queries during failures, for example when all traffic that is sent to DNS Firewall fails to receive a reply.
By default, fail open is disabled, which means the failure mode is closed.
This approach favors security over availability. DNS Firewall blocks queries that it is unable to evaluate properly.
If you enable this option, the failure mode is open. This approach favors availability over security.
In this case, DNS Firewall allows queries to proceed if it is unable to properly evaluate them.
Valid values: ENABLED, DISABLED.
**Default value:** `"ENABLED"`
`logs_bucket_component_name` (`string`) optional
Flow logs bucket component name
**Default value:** `null`
`query_log_config_name` (`string`) optional
Route 53 Resolver query log config name. If omitted, the name will be generated by concatenating the ID from the context with the VPC ID
**Default value:** `null`
`query_log_enabled` (`bool`) optional
Flag to enable/disable Route 53 Resolver query logging
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
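For illustration, a `descriptor_formats` value that emits a `stack` descriptor built from the tenant, environment, and stage labels might look like the following (the descriptor name and label choice here are arbitrary examples):

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}
```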
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`domains`
Route 53 Resolver DNS Firewall domain configurations
`query_log_config`
Route 53 Resolver query logging configuration
`rule_group_associations`
Route 53 Resolver DNS Firewall rule group associations
`rule_groups`
Route 53 Resolver DNS Firewall rule groups
`rules`
Route 53 Resolver DNS Firewall rules
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`logs_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`route53_resolver_dns_firewall` | 0.3.0 | [`cloudposse/route53-resolver-dns-firewall/aws`](https://registry.terraform.io/modules/cloudposse/route53-resolver-dns-firewall/aws/0.3.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## runs-on
Component: `runs-on`
This component provisions RunsOn for GitHub Actions self-hosted runners. After deploying this
component, install the RunsOn GitHub App in your organization to enable runner registration.
See the RunsOn documentation for GitHub App installation and configuration details.
Compatibility: Requires RunsOn CloudFormation template version 2.8.2 or newer due to output changes.
## Usage
Stack Level: Regional
(`runs-on/defaults.yaml`)
```yaml
components:
terraform:
runs-on/defaults:
metadata:
component: runs-on
type: abstract
vars:
name: runs-on
enabled: true
capabilities: ["CAPABILITY_IAM"]
on_failure: "ROLLBACK"
timeout_in_minutes: 30
# template_url: https://runs-on.s3.eu-west-1.amazonaws.com/cloudformation/template.yaml
# See latest version and changelog at https://runs-on.com/changelog/
template_url: https://runs-on.s3.eu-west-1.amazonaws.com/cloudformation/template-v2.8.3.yaml
parameters:
AppCPU: 256
AppMemory: 512
EmailAddress: developer@cloudposse.com
# Environments let you run multiple Stacks in one organization and segregate resources.
# If you specify an environment, then all the jobs must also specify which environment they are running in.
# To keep things simple, we use the default environment ("production") and leave the `env` label unset in the workflow.
EncryptEbs: true
# With the default value of SSHAllowed: true, the runners that are placed in a public subnet
# will allow ingress on port 22. This is highly abused (scanners running constantly looking for vulnerable SSH servers)
# and should not be allowed. If you need access to the runners, use Session Manager (SSM).
SSHAllowed: false
LicenseKey:
Private: false # always | true | false. "always" always places runners in a private subnet; "true" places them in a private subnet only when the tag `private=true` is present on the workflow; "false" places them in a public subnet
RunnerLargeDiskSize: 120 # Disk size in GB for disk=large runners
Ec2LogRetentionInDays: 30
VpcFlowLogRetentionInDays: 14
```
### Embedded networking (Runs On managed VPC)
When no VPC details are set, the component will create a new VPC and subnets via the CloudFormation template.
Set the `VpcCidrBlock` parameter to the CIDR block of the VPC that will be created.
(`runs-on.yaml`)
```yaml
import:
- orgs/acme/core/auto/_defaults
- mixins/region/us-east-1
- catalog/runs-on/defaults
components:
terraform:
runs-on:
metadata:
inherits:
- runs-on/defaults
component: runs-on
vars:
networking_stack: embedded
parameters:
VpcCidrBlock: 10.100.0.0/16
```
### External networking (Use existing VPC)
Use an existing VPC by setting `vpc_id`, `subnet_ids`, and `security_group_id`.
(`_defaults.yaml`)
```yaml
terraform:
hooks:
store-outputs:
name: auto/ssm
```
(`runs-on.yaml`)
```yaml
import:
- orgs/acme/core/auto/_defaults
- mixins/region/us-east-1
- catalog/vpc/defaults
- catalog/runs-on/defaults
components:
terraform:
runs-on:
metadata:
inherits:
- runs-on/defaults
component: runs-on
vars:
networking_stack: external
# There are other ways to get the vpc_id, subnet_ids, and security_group_id.
# Hardcode
# Use Atmos KV Store
# Use atmos !terraform.output yaml function
vpc_id: !store auto/ssm vpc vpc_id
subnet_ids: !store auto/ssm vpc private_subnet_ids
security_group_id: !store auto/ssm vpc default_security_group_id
```
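As an alternative to the `!store` lookups above, the same values can be read directly with the Atmos `!terraform.output` YAML function. A sketch, assuming the standard `vpc` component and its usual output names:

```yaml
components:
  terraform:
    runs-on:
      vars:
        networking_stack: external
        vpc_id: !terraform.output vpc vpc_id
        subnet_ids: !terraform.output vpc private_subnet_ids
        security_group_id: !terraform.output vpc default_security_group_id
```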
### (DEPRECATED) Configuring with Transit Gateway
It's important to note that the embedded networking will require some customization to work with Transit Gateway.
The following configuration assumes you are using the Cloud Posse Components for Transit Gateway
([tgw/hub](https://docs.cloudposse.com/components/library/aws/tgw/hub/) &
[tgw/spoke](https://docs.cloudposse.com/components/library/aws/tgw/spoke/)).
This component exposes the same outputs as the `vpc` component, because the RunsOn CloudFormation
stack creates its own VPC and subnets.
First, update the TGW/Hub, which stores information about the VPCs that TGW Spokes are allowed to use.
The following assumes your TGW/Hub lives in the `core-network` account and RunsOn is deployed to `core-auto` (`tgw-hub.yaml`):
```yaml
vars:
connections:
- account:
tenant: core
stage: auto
vpc_component_names:
- vpc
- runs-on
```
```yaml
components:
terraform:
tgw/hub/defaults:
metadata:
type: abstract
component: tgw/hub
vars:
enabled: true
name: tgw-hub
tags:
Team: sre
Service: tgw-hub
tgw/hub:
metadata:
inherits:
- tgw/hub/defaults
component: tgw/hub
vars:
connections:
- account:
tenant: core
stage: network
- account:
tenant: core
stage: auto
vpc_component_names:
- vpc
- runs-on
- account:
tenant: plat
stage: sandbox
- account:
tenant: plat
stage: dev
- account:
tenant: plat
stage: staging
- account:
tenant: plat
stage: prod
```
We then need to create a spoke that refers to the VPC created by Runs-On.
(`tgw-spoke.yaml`)
```yaml
tgw/spoke/runs-on:
metadata:
component: tgw/spoke
inherits:
- tgw/spoke-defaults
vars:
own_vpc_component_name: runs-on
attributes:
- "runs-on"
connections:
- account:
tenant: core
stage: network
- account:
tenant: core
stage: auto
vpc_component_names:
- vpc
- runs-on
- account:
tenant: plat
stage: sandbox
- account:
tenant: plat
stage: dev
- account:
tenant: plat
stage: staging
- account:
tenant: plat
stage: prod
```
Finally, update the TGW/Spokes to allow RunsOn traffic to reach the other accounts.
Typically this includes `core-auto`, `core-network`, and your platform accounts.
(`tgw-spoke.yaml`)
```yaml
tgw/spoke:
metadata:
inherits:
- tgw/spoke-defaults
vars:
connections:
# ...
vpc_component_names:
- vpc
- runs-on
# ...
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`template_url` (`string`) required
Amazon S3 bucket URL location of a file containing the CloudFormation template body. Maximum file size: 460,800 bytes
### Optional Variables
`capabilities` (`list(string)`) optional
A list of capabilities. Valid values: CAPABILITY_IAM, CAPABILITY_NAMED_IAM, CAPABILITY_AUTO_EXPAND
**Default value:** `[ ]`
`networking_stack` (`string`) optional
Let RunsOn manage your networking stack (`embedded`), or use a VPC under your control (`external`). `null` defaults to whatever the template uses as its default. If you select `external`, you will need to provide the VPC ID, the subnet IDs, and optionally the security group ID, and make sure your whole networking setup is compatible with RunsOn (see https://runs-on.com/networking/embedded-vs-external/ for more details). To get started quickly, we recommend using the `embedded` option.
**Default value:** `"embedded"`
`on_failure` (`string`) optional
Action to be taken if stack creation fails. This must be one of: `DO_NOTHING`, `ROLLBACK`, or `DELETE`
**Default value:** `"ROLLBACK"`
`parameters` (`map(string)`) optional
Key-value map of input parameters for the Stack Set template (_e.g._ `map("BusinessUnit","ABC")`)
**Default value:** `{ }`
`policy_body` (`string`) optional
Structure containing the stack policy body
**Default value:** `""`
`private_subnet_ids` (`list(string)`) optional
Private subnet IDs for runners (maps to ExternalVpcPrivateSubnetIds).
This variable only applies when using `networking_stack = "external"` (bring your own VPC).
When using `networking_stack = "embedded"`, RunsOn creates its own VPC with public and private
subnets via CloudFormation, so this variable should not be set.
Required when using external networking with `Private: "true"` or `Private: "always"` to place
runners in private subnets. These subnets should have NAT gateway access for outbound connectivity.
**Default value:** `null`
`security_group_id` (`string`) optional
Security group ID. If not set, a new security group will be created.
**Default value:** `null`
`security_group_rules` optional
Security group rules. These are either added to the security group passed in via `security_group_id`, or to the security group created when `var.security_group_id` is not set. Types include `ingress` and `egress`.
**Type:**
```hcl
list(object({
type = string
from_port = number
to_port = number
protocol = string
cidr_blocks = list(string)
}))
```
**Default value:** `null`
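For example, a `security_group_rules` value allowing all outbound TCP traffic from the runners could look like this (an illustrative sketch, not a recommended policy):

```yaml
security_group_rules:
  - type: egress
    from_port: 0
    to_port: 65535
    protocol: tcp
    cidr_blocks: ["0.0.0.0/0"]
```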
`subnet_ids` (`list(string)`) optional
Public subnet IDs for runners (maps to ExternalVpcPublicSubnetIds).
This variable only applies when using `networking_stack = "external"` (bring your own VPC).
When using `networking_stack = "embedded"`, RunsOn creates its own VPC with public and private
subnets via CloudFormation, so this variable should not be set.
Used for runners without the `private=true` label, or when `Private` parameter is set to `"false"`.
**Default value:** `null`
`timeout_in_minutes` (`number`) optional
The amount of time that can pass before the stack status becomes `CREATE_FAILED`
**Default value:** `30`
`vpc_id` (`string`) optional
VPC ID for external networking (maps to ExternalVpcId).
This variable only applies when using `networking_stack = "external"` (bring your own VPC).
When using `networking_stack = "embedded"`, RunsOn creates its own VPC via CloudFormation,
so this variable should not be set.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`id`
ID of the CloudFormation Stack
`name`
Name of the CloudFormation Stack
`nat_gateway_ids`
NAT Gateway IDs
`nat_instance_ids`
NAT Instance IDs
`outputs`
Outputs of the CloudFormation Stack
`private_route_table_ids`
Private subnet route table IDs
`private_subnet_ids`
Private subnet IDs
`public_subnet_ids`
Public subnet IDs
`security_group_id`
Security group ID
`vpc_cidr`
CIDR of the VPC created by RunsOn CloudFormation Stack
`vpc_id`
ID of the VPC created by RunsOn CloudFormation Stack
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cloudformation_stack` | 0.7.1 | [`cloudposse/cloudformation-stack/aws`](https://registry.terraform.io/modules/cloudposse/cloudformation-stack/aws/0.7.1) | n/a
`iam_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`security_group` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | Typically when RunsOn is installed with the embedded networking stack, a security group is needed. This is a batteries-included optional feature.
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_security_group_rule.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_nat_gateways.ngws`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/nat_gateways) (data source)
- [`aws_subnets.private`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnets) (data source)
- [`aws_subnets.public`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnets) (data source)
- [`aws_vpc.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc) (data source)
---
## s3-bucket
This component is responsible for provisioning S3 buckets.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
`stacks/s3/defaults.yaml` file (base component for all S3 buckets with default settings):
```yaml
components:
terraform:
s3-bucket-defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
account_map_tenant_name: core
# Suggested configuration for all buckets
user_enabled: false
acl: "private"
grants: null
force_destroy: false
versioning_enabled: false
allow_encrypted_uploads_only: true
block_public_acls: true
block_public_policy: true
ignore_public_acls: true
restrict_public_buckets: true
allow_ssl_requests_only: true
lifecycle_configuration_rules:
- id: default
enabled: true
abort_incomplete_multipart_upload_days: 90
filter_and:
prefix: ""
tags: {}
transition:
- storage_class: GLACIER
days: 60
noncurrent_version_transition:
- storage_class: GLACIER
noncurrent_days: 60
noncurrent_version_expiration:
noncurrent_days: 90
expiration:
days: 120
```
```yaml
import:
- catalog/s3/defaults
components:
terraform:
template-bucket:
metadata:
component: s3-bucket
inherits:
- s3-bucket-defaults
vars:
enabled: true
name: template
logging_bucket_name_rendering_enabled: true
logging:
bucket_name: s3-access-logs
prefix: logs/
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_map` optional
Static account map to use when `account_map_enabled` is `false`. Map of account names (tenant-stage format) to account IDs.
Optional attributes support component-specific functionality (e.g., audit_account_account_name for cloudtrail).
**Type:**
```hcl
object({
full_account_map = map(string)
audit_account_account_name = optional(string, "")
root_account_account_name = optional(string, "")
identity_account_account_name = optional(string, "")
aws_partition = optional(string, "aws")
iam_role_arn_templates = optional(map(string), {})
})
```
**Default value:**
```hcl
{
"audit_account_account_name": "",
"aws_partition": "aws",
"full_account_map": {},
"iam_role_arn_templates": {},
"identity_account_account_name": "",
"root_account_account_name": ""
}
```
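When `account_map_enabled` is `false`, a minimal static map can be supplied instead. A sketch with placeholder account names and IDs:

```yaml
account_map_enabled: false
account_map:
  full_account_map:
    core-auto: "111111111111"
    core-network: "222222222222"
```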
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_enabled` (`bool`) optional
Enable the account map component lookup. When disabled, use the `account_map` variable to provide static account mapping.
**Default value:** `true`
`account_map_environment_name` (`string`) optional
The name of the environment where `account_map` is provisioned
**Default value:** `"gbl"`
`account_map_stage_name` (`string`) optional
The name of the stage where `account_map` is provisioned
**Default value:** `"root"`
`account_map_tenant_name` (`string`) optional
The name of the tenant where `account_map` is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
`acl` (`string`) optional
The [canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply.
We recommend `private` to avoid exposing sensitive information. Conflicts with `grants`.
**Default value:** `"private"`
`allow_encrypted_uploads_only` (`bool`) optional
Set to `true` to prevent uploads of unencrypted objects to S3 bucket
**Default value:** `false`
`allow_ssl_requests_only` (`bool`) optional
Set to `true` to require requests to use Secure Socket Layer (HTTPS/SSL). This will explicitly deny plain HTTP requests
**Default value:** `false`
`allowed_bucket_actions` (`list(string)`) optional
List of actions the user is permitted to perform on the S3 bucket
**Default value:**
```hcl
[
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:GetBucketLocation",
"s3:AbortMultipartUpload"
]
```
`block_public_acls` (`bool`) optional
Set to `false` to disable the blocking of new public access lists on the bucket
**Default value:** `true`
`block_public_policy` (`bool`) optional
Set to `false` to disable the blocking of new public policies on the bucket
**Default value:** `true`
`bucket_key_enabled` (`bool`) optional
Set this to true to use Amazon S3 Bucket Keys for SSE-KMS, which reduce the cost of AWS KMS requests.
For more information, see: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html
**Default value:** `false`
`bucket_name` (`string`) optional
Bucket name. If provided, the bucket will be created with this name instead of generating the name from the context
**Default value:** `""`
`cors_configuration` optional
Specifies the allowed headers, methods, origins and exposed headers when using CORS on this bucket
**Type:**
```hcl
list(object({
allowed_headers = list(string)
allowed_methods = list(string)
allowed_origins = list(string)
expose_headers = list(string)
max_age_seconds = number
}))
```
**Default value:** `null`
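As a sketch, a single CORS rule permitting browser GETs from a hypothetical origin might look like this in stack config:

```yaml
vars:
  cors_configuration:
    - allowed_headers: ["*"]
      allowed_methods: ["GET", "HEAD"]
      allowed_origins: ["https://app.example.com"]
      expose_headers: ["ETag"]
      max_age_seconds: 3600
```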
`force_destroy` (`bool`) optional
When `true`, permits a non-empty S3 bucket to be deleted by first deleting all objects in the bucket.
THESE OBJECTS ARE NOT RECOVERABLE even if they were versioned and stored in Glacier.
**Default value:** `false`
`grants` optional
A list of policy grants for the bucket, taking a list of permissions.
Conflicts with `acl`. Set `acl` to `null` to use this.
**Type:**
```hcl
list(object({
id = string
type = string
permissions = list(string)
uri = string
}))
```
**Default value:** `[ ]`
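Because `grants` conflicts with `acl`, set `acl: null` when using it. A hedged sketch granting read access to the S3 log delivery group:

```yaml
vars:
  acl: null
  grants:
    - id: null
      type: Group
      permissions: ["READ"]
      uri: "http://acs.amazonaws.com/groups/s3/LogDelivery"
```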
`iam_policy_statements` (`any`) optional
Map of IAM policy statements to use in the bucket policy.
**Default value:** `{ }`
`ignore_public_acls` (`bool`) optional
Set to `false` to disable the ignoring of public access lists on the bucket
**Default value:** `true`
`intelligent_tiering_configuration` optional
A list of S3 Intelligent-Tiering configurations for the bucket.
Each configuration controls archive access tiers within the INTELLIGENT_TIERING storage class.
`access_tier` must be `ARCHIVE_ACCESS` or `DEEP_ARCHIVE_ACCESS`.
**Type:**
```hcl
list(object({
name = string
status = optional(string, "Enabled")
filter = optional(object({
prefix = optional(string)
tags = optional(map(string))
}))
tiering = list(object({
access_tier = string
days = number
}))
}))
```
**Default value:** `[ ]`
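For illustration, a configuration that moves objects under a hypothetical `archive/` prefix into the archive access tiers might look like:

```yaml
vars:
  intelligent_tiering_configuration:
    - name: archive-old-objects
      status: Enabled
      filter:
        prefix: "archive/"
      tiering:
        - access_tier: ARCHIVE_ACCESS
          days: 90
        - access_tier: DEEP_ARCHIVE_ACCESS
          days: 180
```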
`kms_master_key_arn` (`string`) optional
The AWS KMS master key ARN used for the `SSE-KMS` encryption. This can only be used when you set the value of `sse_algorithm` as `aws:kms`. The default aws/s3 AWS KMS master key is used if this element is absent while the `sse_algorithm` is `aws:kms`
**Default value:** `""`
`lifecycle_configuration_rules` optional
A list of lifecycle V2 rules
**Type:**
```hcl
list(object({
enabled = bool
id = string
abort_incomplete_multipart_upload_days = number
# `filter_and` is the `and` configuration block inside the `filter` configuration.
# This is the only place you should specify a prefix.
filter_and = any
expiration = any
transition = list(any)
noncurrent_version_expiration = any
noncurrent_version_transition = list(any)
}))
```
**Default value:** `[ ]`
`logging_bucket_name_rendering_template` (`string`) optional
The `format()` template used to render the bucket name for the logging bucket.
The default is appropriate when using `tenant` and the default label order with `null-label`.
Use `"%s-%s-%s-%%s"` when not using `tenant`.
**Default value:** `"%s-%s-%s-%s-%s"`
`logging_bucket_prefix_rendering_template` (`string`) optional
The `format()` template used to render the bucket prefix for the logging bucket, using the format `var.logging.prefix`/`var.name`
**Default value:** `"%s/%s/"`
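For illustration, with the default template `"%s-%s-%s-%s-%s"` and hypothetical label values (namespace `eg`, tenant `core`, environment `ue1`, stage `audit`, logging bucket name `s3-access-logs`), the rendered logging bucket name would be roughly:

```text
eg-core-ue1-audit-s3-access-logs
```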
`object_lock_configuration` optional
A configuration for S3 object locking. With S3 Object Lock, you can store objects using a `write once, read many` (WORM) model. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
**Type:**
```hcl
object({
mode = string # Valid values are GOVERNANCE and COMPLIANCE.
days = number
years = number
})
```
**Default value:** `null`
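A sketch enabling GOVERNANCE-mode retention for 30 days (set `years` to `null` when using `days`; note that S3 Object Lock also requires bucket versioning):

```yaml
vars:
  versioning_enabled: true
  object_lock_configuration:
    mode: GOVERNANCE
    days: 30
    years: null
```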
`privileged_principal_actions` (`list(string)`) optional
List of actions to permit `privileged_principal_arns` to perform on bucket and bucket prefixes (see `privileged_principal_arns`)
**Default value:** `[ ]`
`privileged_principal_arns` (`list(map(list(string)))`) optional
List of maps. Each map has one key, an IAM Principal ARN, whose associated value is
a list of S3 path prefixes to grant `privileged_principal_actions` permissions for that principal,
in addition to the bucket itself, which is automatically included. Prefixes should not begin with '/'.
**Default value:** `[ ]`
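A hedged sketch of the list-of-maps shape, pairing a hypothetical role ARN with the prefixes it may access:

```yaml
vars:
  privileged_principal_actions:
    - "s3:GetObject"
    - "s3:ListBucket"
  privileged_principal_arns:
    - "arn:aws:iam::111111111111:role/example-reader":
        - "exports/"
        - "reports/"
```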
`restrict_public_buckets` (`bool`) optional
Set to `false` to disable the restricting of making the bucket public
**Default value:** `true`
`s3_object_ownership` (`string`) optional
Specifies the S3 object ownership control. Valid values are `ObjectWriter`, `BucketOwnerPreferred`, and `BucketOwnerEnforced`.
**Default value:** `"ObjectWriter"`
`s3_replica_bucket_arn` (`string`) optional
A single S3 bucket ARN to use for all replication rules.
Note: The destination bucket can be specified in the replication rule itself
(which allows for multiple destinations), in which case it will take precedence over this variable.
**Default value:** `""`
`s3_replication_enabled` (`bool`) optional
Set this to true and specify `s3_replication_rules` to enable replication. `versioning_enabled` must also be `true`.
**Default value:** `false`
`s3_replication_rules` (`list(any)`) optional
Specifies the replication rules for S3 bucket replication if enabled. You must also set s3_replication_enabled to true.
**Default value:** `null`
`s3_replication_source_roles` (`list(string)`) optional
Cross-account IAM Role ARNs that will be allowed to perform S3 replication to this bucket (for replication within the same AWS account, it's not necessary to adjust the bucket policy).
**Default value:** `[ ]`
`source_policy_documents` (`list(string)`) optional
List of IAM policy documents that are merged together into the exported document.
Statements defined in source_policy_documents or source_json must have unique SIDs.
Statement having SIDs that match policy SIDs generated by this module will override them.
**Default value:** `[ ]`
`source_policy_enabled` (`bool`) optional
Whether to pass source policy documents to the S3 module.
Set to `false` to prevent the module from creating an `aws_s3_bucket_policy` resource.
This suppresses both the `iam_policy_statements` policy and the custom policy
(controlled by `custom_policy_enabled`). Useful when importing existing buckets that have
their own policies which should not be managed by Terraform.
**Default value:** `true`
`sse_algorithm` (`string`) optional
The server-side encryption algorithm to use. Valid values are `AES256` and `aws:kms`
**Default value:** `"AES256"`
`transfer_acceleration_enabled` (`bool`) optional
Set this to true to enable S3 Transfer Acceleration for the bucket.
**Default value:** `false`
`user_enabled` (`bool`) optional
Set to `true` to create an IAM user with permission to access the bucket
**Default value:** `false`
`versioning_enabled` (`bool`) optional
Whether to enable versioning. Versioning is a means of keeping multiple variants of an object in the same bucket
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`bucket_arn`
Bucket ARN
`bucket_domain_name`
Bucket domain name
`bucket_id`
Bucket ID
`bucket_region`
Bucket region
`bucket_regional_domain_name`
Bucket region-specific domain name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `template`, version: `>= 2.2.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `template`, version: `>= 2.2.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`bucket_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`s3_bucket` | 4.11.0 | [`cloudposse/s3-bucket/aws`](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/4.11.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.custom_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`template_file.bucket_policy`](https://registry.terraform.io/providers/cloudposse/template/latest/docs/data-sources/file) (data source)
---
## scp
This component is responsible for creating a single Service Control Policy (SCP) and optionally
attaching it to a target (organization root, OU, or account).
Unlike the monolithic `account` component which manages SCPs as part of the organization hierarchy,
this component follows the single-resource pattern - it only manages a single SCP.
:::note
This component should be deployed from the **management/root account** as it creates SCPs
within AWS Organizations.
:::
## Usage
**Stack Level**: Global (deployed in the management/root account)
### Using the SCP Policy Catalog (recommended)
This component ships with a `catalog/` directory of pre-built SCP policies based on
[CloudPosse terraform-aws-service-control-policies v0.15.1](https://github.com/cloudposse/terraform-aws-service-control-policies).
Each file contains a single policy statement that maps to one SCP. Use Atmos `!include` to
reference catalog files directly (local path) or from a remote source:
**Local include** (when the catalog is vendored with the component):
```yaml
components:
terraform:
aws-scp/iam-restrictions:
metadata:
component: aws-scp
vars:
enabled: true
policy_name: IAMRestrictions
policy_description: "Restrict IAM user creation and deny root account access"
policy_statements:
- !include catalog/DenyIAMCreatingUsers.yaml
- !include catalog/DenyIAMRootAccount.yaml
- !include catalog/DenyLeavingOrganization.yaml
target_id: !terraform.output aws-organizational-unit/core organizational_unit_id
```
**Remote include** (reference the catalog directly from GitHub):
```yaml
components:
terraform:
aws-scp/iam-restrictions:
metadata:
component: aws-scp
vars:
enabled: true
policy_name: IAMRestrictions
policy_statements:
- !include "https://github.com/cloudposse-terraform-components/aws-scp/blob/main/catalog/DenyIAMCreatingUsers.yaml"
- !include "https://github.com/cloudposse-terraform-components/aws-scp/blob/main/catalog/DenyIAMRootAccount.yaml"
- !include "https://github.com/cloudposse-terraform-components/aws-scp/blob/main/catalog/DenyLeavingOrganization.yaml"
target_id: !terraform.output aws-organizational-unit/core organizational_unit_id
```
Atmos supports multiple remote protocols including `https://`, `github://`, `s3://`, and `gcs://`
via [go-getter](https://atmos.tools/functions/yaml/include#remote-sources). Standard GitHub URLs
(with `/blob/` or `/tree/`) are automatically converted to raw content URLs.
Available catalog files:
**Account & Organization:**
- `DenyAccountRegionDisableEnable.yaml` - Deny enabling/disabling AWS regions
- `DenyLeavingOrganization.yaml` - Prevent leaving the organization
- `DenyRootAccountAccess.yaml` - Deny all actions by root account
- `DenyAllAccess.yaml` - Deny all access (quarantine)
**IAM:**
- `DenyIAMCreatingUsers.yaml` - Deny IAM user and access key creation
- `DenyIAMRolesChanges.yaml` - Deny IAM role modifications
- `DenyIAMNoMFA.yaml` - Require MFA for most actions (uses `not_actions`)
- `DenyIAMRootAccount.yaml` - Deny all actions by IAM root account
**EC2 & Compute:**
- `DenyEC2NonNitroInstances.yaml` - Require Nitro-based instance types
- `DenyEC2InstancesWithoutEncryptionInTransit.yaml` - Require encryption-in-transit capable instances
- `DenyEC2PublicAMI.yaml` - Deny launching from public AMIs
- `DenyEC2AssociatePublicIp.yaml` - Deny public IP assignment
- `DenyEC2WithNoIMDSv2.yaml` - Require IMDSv2
- `DenyEC2ApiWithNoMFA.yaml` - Require MFA for stop/terminate
- `RequireEBSEncryption.yaml` - Require EBS volume encryption
- `DenyLambdaWithoutVpc.yaml` - Require VPC for Lambda functions
**Storage & Database:**
- `DenyS3DeleteBucketsAndObjects.yaml` - Prevent S3 bucket/object deletion
- `DenyS3BucketsPublicAccess.yaml` - Prevent modifying S3 public access blocks
- `DenyS3IncorrectEncryptionHeader.yaml` - Require S3 server-side encryption
- `DenyS3UnEncryptedObjectUploads.yaml` - Deny unencrypted S3 uploads
- `DenyRDSUnencrypted.yaml` - Require RDS encryption
- `DenyDeletingKMSKeys.yaml` - Prevent KMS key deletion
**Networking:**
- `DenyVpcDeletingFlowLogs.yaml` - Protect VPC flow logs
- `DenyVpcInternetAccess.yaml` - Deny internet gateway and VPC peering creation
- `DenyRoute53DeletingZones.yaml` - Prevent hosted zone deletion
**Security & Monitoring:**
- `DenyCloudTrailActions.yaml` - Protect CloudTrail configuration
- `DenyCloudWatchDeletingLogs.yaml` - Protect CloudWatch log groups
- `DenyDisablingCloudWatch.yaml` - Protect CloudWatch alarms and dashboards
- `DenyConfigRulesDelete.yaml` - Protect AWS Config rules and recorders
- `DenyGuardDutyDisassociation.yaml` - Prevent GuardDuty disassociation
- `DenyDisablingGuardDuty.yaml` - Protect GuardDuty configuration
- `DenyShieldlRemoval.yaml` - Prevent Shield protection removal
**SageMaker:**
- `DenySagemakerDirectInternetNotebook.yaml` - Deny direct internet for notebooks
- `DenySagemakerWithoutRootAccess.yaml` - Require root access for notebooks
- `DenySagemakerWithoutInterContainerEncrypt.yaml` - Require inter-container encryption
- `DenyeSagemakerWithoutVpcDomain.yaml` - Deny public internet domains
### Using inline policy_statements
```yaml
components:
terraform:
aws-scp/deny-leaving-organization:
metadata:
component: aws-scp
vars:
enabled: true
policy_name: DenyLeavingOrganization
policy_description: "Prevents accounts from leaving the organization"
policy_statements:
- sid: "DenyLeaveOrganization"
effect: "Deny"
actions:
- "organizations:LeaveOrganization"
resources:
- "*"
target_ids:
- !terraform.output aws-organizational-unit/core organizational_unit_id
```
### Using policy_content (raw JSON)
```yaml
components:
terraform:
aws-scp/custom-policy:
metadata:
component: aws-scp
vars:
enabled: true
policy_name: CustomPolicy
policy_content: |
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": ["ec2:RunInstances"],
"Resource": "*"
}
]
}
target_ids:
- !terraform.output aws-organizational-unit/plat organizational_unit_id
```
### Policy Without Attachment
Create a policy without attaching it to any target:
```yaml
components:
terraform:
aws-scp/deny-root-user:
metadata:
component: aws-scp
vars:
enabled: true
policy_name: DenyRootUser
attach_to_target: false
policy_statements:
- sid: "DenyRootUser"
effect: "Deny"
actions:
- "*"
resources:
- "*"
conditions:
- test: "StringLike"
variable: "aws:PrincipalArn"
values:
- "arn:aws:iam::*:root"
```
### Importing an Existing SCP
To import an existing SCP:
1. Get the policy ID from AWS Console or CLI
2. Set the `import_policy_id` variable:
```yaml
vars:
import_policy_id: "p-xxxxxxxxxx"
```
3. Run `atmos terraform apply`
After successful import, you can remove the `import_policy_id` variable.
> **Note:** If you don't need import functionality, you can exclude `imports.tf` when vendoring the component.
## Policy Statements Format
Each statement must specify either `actions` or `not_actions`, but not both.
`not_actions` maps to the IAM `NotAction` element, which matches all actions except the listed ones.
```yaml
policy_statements:
- sid: "OptionalStatementId"
effect: "Deny" # or "Allow"
actions: # use actions OR not_actions
- "service:Action"
resources:
- "*"
conditions: # optional
- test: "StringEquals"
variable: "aws:RequestedRegion"
values:
- "us-east-1"
- sid: "ExampleWithNotActions"
effect: "Deny"
not_actions: # matches everything EXCEPT these actions
- "iam:CreateVirtualMFADevice"
- "iam:EnableMFADevice"
resources:
- "*"
```
## Related Components
This component is part of a suite of single-resource components for AWS Organizations:
| Component | Purpose |
|-----------|---------|
| `aws-organization` | Creates/imports the AWS Organization |
| `aws-organizational-unit` | Creates/imports a single OU |
| `aws-account` | Creates/imports a single AWS Account |
| `aws-account-settings` | Configures account settings |
| `aws-scp` | Creates/imports Service Control Policies (this component) |
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`attach_to_target` (`bool`) optional
Whether to attach the SCP to a target. Set to false to create the policy without attaching it.
**Default value:** `true`
`import_policy_id` (`string`) optional
The ID of an existing SCP to import
**Default value:** `null`
`import_target_ids` (`list(string)`) optional
The IDs of the targets (organization roots, OUs, or accounts) that already have the SCP attached, to import existing attachments. Must be a subset of target_ids.
**Default value:** `[ ]`
`policy_content` (`string`) optional
The JSON policy document for the SCP. If not provided, policy_statements will be used to generate the policy.
**Default value:** `null`
`policy_description` (`string`) optional
Description of the SCP
**Default value:** `"Service Control Policy managed by Terraform"`
`policy_name` (`string`) optional
The name of the Service Control Policy. Defaults to module.this.id
**Default value:** `null`
`policy_statements` optional
List of policy statements to generate the SCP. Alternative to policy_content. Each statement must specify either 'actions' or 'not_actions', but not both.
**Type:**
```hcl
list(object({
sid = optional(string)
effect = string
actions = optional(list(string), [])
not_actions = optional(list(string), [])
resources = list(string)
conditions = optional(list(object({
test = string
variable = string
values = list(string)
})), [])
}))
```
**Default value:** `[ ]`
`skip_destroy` (`bool`) optional
If true, the policy will be detached from the target but not destroyed when removed from Terraform
**Default value:** `false`
`target_id` (`string`) optional
DEPRECATED: Use `target_ids` instead. The ID of the organization root, OU, or account to attach the SCP to.
**Default value:** `null`
`target_ids` (`list(string)`) optional
The IDs of the organization roots, OUs, or accounts to attach the SCP to
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`attached`
Whether the SCP was attached to any targets
`attachment_ids`
Map of target IDs to policy attachment IDs
`policy_arn`
The ARN of the Service Control Policy
`policy_id`
The ID of the Service Control Policy
`policy_name`
The name of the Service Control Policy
`target_ids`
The target IDs the SCP is attached to
## Dependencies
### Requirements
- `terraform`, version: `>= 1.7.0`
- `aws`, version: `>= 5.66`
### Providers
- `aws`, version: `>= 5.66`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_organizations_policy.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_policy) (resource)
- [`aws_organizations_policy_attachment.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_policy_attachment) (resource)
## Data Sources
The following data sources are used by this module:
---
## security-group
This component provisions AWS Security Groups that can be shared across multiple components.
It is a thin wrapper around the `cloudposse/security-group/aws` module that integrates with the
Atmos stack configuration and remote state patterns.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
# catalog/security-group/defaults
components:
terraform:
security-group/defaults:
metadata:
type: abstract
component: security-group
vars:
enabled: true
allow_all_egress: true
# Configure where account-map is deployed (required when account_map_enabled: true)
account_map_tenant_name: core
```
```yaml
# catalog/security-group/lambda
import:
- catalog/security-group/defaults
components:
terraform:
security-group/lambda:
metadata:
inherits:
- security-group/defaults
vars:
name: "lambda"
security_group_description: "Security group for Lambda functions"
rules:
- type: "egress"
from_port: 0
to_port: 0
protocol: "-1"
cidr_blocks: ["0.0.0.0/0"]
ipv6_cidr_blocks: ["::/0"]
```
```yaml
# stacks/orgs/acme/plat/dev/us-east-2/network.yaml
import:
- catalog/security-group/lambda
components:
terraform:
security-group/lambda:
vars:
enabled: true
```
### Account Map Bypass Pattern
By default, this component uses the `account-map` component via remote state for IAM role lookups
(`account_map_enabled: true`). For environments migrating to Atmos Auth or using static account
mappings, you can disable this and provide a static `account_map` variable instead:
```yaml
# stacks/orgs/acme/_defaults.yaml
vars:
# Disable account-map remote state lookup
account_map_enabled: false
# Provide static account mapping
account_map:
full_account_map:
acme-core-root: "111111111111"
acme-core-audit: "222222222222"
acme-plat-dev: "333333333333"
acme-plat-prod: "444444444444"
audit_account_account_name: "acme-core-audit"
root_account_account_name: "acme-core-root"
identity_account_account_name: "acme-core-identity"
aws_partition: "aws"
iam_role_arn_templates:
terraform: "arn:aws:iam::%s:role/acme-core-gbl-auto-terraform"
```
When `account_map_enabled: false`, the component bypasses the remote state lookup and uses
the static `account_map` variable directly.
### Providing VPC ID Directly
You can provide the VPC ID directly via the `vpc_id` variable, which overrides the remote state lookup:
```yaml
components:
terraform:
security-group/lambda:
vars:
# Provide VPC ID directly (overrides remote state lookup)
vpc_id: "vpc-12345678"
```
This is useful when:
- The VPC was created outside of Atmos/Terraform
- You want to reference a VPC from a different state backend
- You're using Atmos functions like `!terraform.output` to fetch the VPC ID
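As a sketch of the last case, the VPC ID can be fetched with the `!terraform.output` Atmos function (the `vpc` component name and its `vpc_id` output are illustrative and must match your stack):

```yaml
components:
  terraform:
    security-group/lambda:
      vars:
        # Fetch the VPC ID from the vpc component's Terraform output
        vpc_id: !terraform.output vpc vpc_id
```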
### Referencing the Security Group from Other Components
Once deployed, other components can reference this security group using the `!terraform.state` Atmos function:
```yaml
components:
terraform:
lambda/my-function:
vars:
vpc_config:
security_group_ids:
- !terraform.state security-group/lambda security_group_id
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`allow_all_egress` (`bool`) optional
A convenience that adds to the rules specified elsewhere a rule that allows all egress.
If this is false and no egress rules are specified via `rules` or `rule_matrix`, then no egress will be allowed.
**Default value:** `true`
`create_before_destroy` (`bool`) optional
Set `true` to enable terraform `create_before_destroy` behavior on the created security group.
We only recommend setting this `false` if you are importing an existing security group
that you do not want replaced and therefore need full control over its name.
Note that changing this value will always cause the security group to be replaced.
**Default value:** `true`
`inline_rules_enabled` (`bool`) optional
NOT RECOMMENDED. Create rules "inline" instead of as separate `aws_security_group_rule` resources.
See the `cloudposse/security-group/aws` module documentation for caveats.
**Default value:** `false`
`preserve_security_group_id` (`bool`) optional
When `false` and `create_before_destroy` is `true`, changes to security group rules
cause a new security group to be created with the new rules, and the existing security group is then
replaced with the new one, eliminating any service interruption.
When `true` or when changing the value (from `false` to `true` or from `true` to `false`),
existing security group rules will be deleted before new ones are created, resulting in a service interruption,
but preserving the security group itself.
**NOTE:** Setting this to `true` does not guarantee the security group will never be replaced,
it only keeps changes to the security group rules from triggering a replacement.
**Default value:** `false`
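Taken together, a component instance that imports an existing security group and must keep its ID stable might combine these two settings as follows (the instance name is illustrative, and note that changing `create_before_destroy` on an existing instance forces replacement):

```yaml
components:
  terraform:
    security-group/imported:
      metadata:
        component: security-group
      vars:
        # Keep full control over the existing group's name
        create_before_destroy: false
        # Rule changes update the existing group instead of replacing it
        preserve_security_group_id: true
```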
`revoke_rules_on_delete` (`bool`) optional
Instruct Terraform to revoke all of the Security Group's attached ingress and egress rules before deleting
the Security Group itself. This is normally not needed, however certain AWS services such as
Elastic Map Reduce may automatically add required rules to security groups used with the service,
and those rules may contain a cyclic dependency that prevents the security groups from being destroyed.
**Default value:** `false`
`rule_matrix` (`any`) optional
A convenient way to apply the same set of rules to a set of subjects. See the `cloudposse/security-group/aws` module documentation for details.
**Default value:** `[ ]`
`rules` (`list(any)`) optional
A list of Security Group rule objects. All elements of the list must be exactly the same type;
use `rules_map` if you want to supply multiple lists of rules.
See the `cloudposse/security-group/aws` module documentation for details.
**Default value:** `[ ]`
`rules_map` (`any`) optional
A map-like object of lists of Security Group rule objects. See the `cloudposse/security-group/aws` module documentation for details.
**Default value:** `{ }`
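When rule objects are not all the same type, `rules_map` can carry them as separate lists keyed by arbitrary names. A minimal sketch (the instance name, keys, ports, and CIDR blocks are illustrative):

```yaml
components:
  terraform:
    security-group/web:
      vars:
        rules_map:
          https-in:
            - type: "ingress"
              from_port: 443
              to_port: 443
              protocol: "tcp"
              cidr_blocks: ["10.0.0.0/8"]
          all-out:
            - type: "egress"
              from_port: 0
              to_port: 0
              protocol: "-1"
              cidr_blocks: ["0.0.0.0/0"]
```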
`security_group_description` (`string`) optional
The description to assign to the created Security Group.
Warning: Changing the description causes the security group to be replaced.
**Default value:** `"Managed by Terraform"`
`security_group_name` (`list(string)`) optional
The name to assign to the created security group. Must be unique within the VPC.
If not provided, will be derived from the `null-label` context.
**Default value:** `[ ]`
`target_security_group_id` (`list(string)`) optional
The ID of an existing Security Group to which Security Group rules will be assigned.
The Security Group's name and description will not be changed.
Not compatible with `inline_rules_enabled` or `revoke_rules_on_delete`.
If not provided, a new security group will be created.
**Default value:** `[ ]`
`vpc_component_name` (`string`) optional
The name of the VPC component to fetch remote state from
**Default value:** `"vpc"`
`vpc_environment_name` (`string`) optional
The name of the environment where the VPC component is provisioned. Defaults to the current environment.
**Default value:** `null`
`vpc_id` (`string`) optional
The ID of the VPC where the Security Group will be created.
If provided, this overrides the VPC ID from remote state lookup.
**Default value:** `null`
`vpc_stage_name` (`string`) optional
The name of the stage where the VPC component is provisioned. Defaults to the current stage.
**Default value:** `null`
`vpc_tenant_name` (`string`) optional
The name of the tenant where the VPC component is provisioned.
Defaults to the current tenant.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
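For instance, combining these two inputs yields Pascal Case IDs (label values shown are illustrative):

```yaml
vars:
  label_value_case: title
  delimiter: ""
  # e.g. namespace "eg", stage "prod", name "app" would yield an id like "EgProdApp"
```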
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
The ARN of the created Security Group
`id`
The ID of the created Security Group
`name`
The name of the created Security Group
`rules_terraform_ids`
List of Terraform IDs of created `security_group_rule` resources, primarily provided to enable `depends_on`
`security_group_arn`
The ARN of the created Security Group (alias for `arn` output)
`security_group_id`
The ID of the created Security Group (alias for `id` output)
`security_group_name`
The name of the created Security Group (alias for `name` output)
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`security_group` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## security-hub
This component is responsible for configuring Security Hub within an AWS Organization.
Amazon Security Hub enables users to centrally manage and monitor the security and compliance of their AWS accounts and
resources. It aggregates, organizes, and prioritizes security findings from various AWS services, third-party tools, and
integrated partner solutions.
## Key Features
- **Centralized Security Management**: Provides a centralized dashboard where users can view and manage security
findings from multiple AWS accounts and regions, allowing for a unified view of the security posture across the
entire AWS environment.
- **Automated Security Checks**: Automatically performs continuous security checks on AWS resources, configurations,
and security best practices using industry standards and compliance frameworks such as AWS CIS Foundations Benchmark.
- **Product Subscriptions**: Integrates with AWS security services (GuardDuty, Inspector, Macie, Config, Access
Analyzer, Firewall Manager) to automatically receive and aggregate findings in a single dashboard.
- **Security Standards and Compliance**: Provides compliance checks against industry standards and regulatory
frameworks such as PCI DSS, HIPAA, NIST 800-53, and GDPR, with guidance on remediation actions.
- **Prioritized Security Findings**: Analyzes and prioritizes security findings based on severity, enabling users to
focus on the most critical issues with efficient threat response and remediation.
- **Custom Insights and Event Aggregation**: Supports custom insights and rules to focus on specific security criteria,
with event aggregation and correlation capabilities to identify related findings and attack patterns.
- **Alert Notifications and Automation**: Supports alert notifications through Amazon SNS and facilitates automation
through integration with AWS Lambda for automated remediation actions.
- **GovCloud Support**: All product subscription ARNs use partition-aware format, automatically supporting both
Commercial AWS and GovCloud partitions.
## Component Features
- **Delegated Administrator Model**: Uses AWS Organizations delegated administrator pattern for centralized management
- **Multi-Region Deployment**: Supports deployment across all AWS regions with finding aggregation
- **Product Subscriptions**: Automatically creates subscriptions for AWS security service integrations
- **SNS Notifications**: Optional SNS topic creation for security finding alerts
- **Compliance Standards**: Configurable security standards (CIS, PCI DSS, AWS Foundational Security Best Practices)
## Usage
**Stack Level**: Regional
## Deployment Overview
This component is complex in that it must be deployed multiple times with different variables set to configure the AWS
Organization successfully.
It is further complicated by the fact that you must deploy each of the component instances described below to every
region that existed before March 2019 and to any regions that have been opted-in as described in the
[AWS Documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions).
In the examples below, we assume that the AWS Organization Management account is `root` and the AWS Organization
Delegated Administrator account is `security`, both in the `core` tenant.
### Deploy to Delegated Administrator Account
First, the component is deployed to the
[Delegated Administrator](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_organizations.html) account in each
region to configure the Security Hub instance to which each account will send its findings.
```yaml
# core-ue1-security
components:
terraform:
security-hub/delegated-administrator/ue1:
metadata:
component: security-hub
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
# Product subscriptions for AWS security service integrations
product_subscriptions:
guardduty: true # Enable GuardDuty findings
inspector: true # Enable Inspector findings
macie: true # Enable Macie findings
config: true # Enable Config findings
access_analyzer: true # Enable Access Analyzer findings
firewall_manager: false # Disabled by default
```
```bash
atmos terraform apply security-hub/delegated-administrator/ue1 -s core-ue1-security
atmos terraform apply security-hub/delegated-administrator/ue2 -s core-ue2-security
atmos terraform apply security-hub/delegated-administrator/uw1 -s core-uw1-security
# ... other regions
```
### Deploy to Organization Management (root) Account
Next, the component is deployed to the AWS Organization Management (a.k.a. `root`) account in order to set the AWS
Organization Delegated Administrator account.
Note that `SuperAdmin` permissions must be used as we are deploying to the AWS Organization Management account. Since we
are using the `SuperAdmin` user, it will already have access to the state bucket, so we set the `role_arn` of the
backend config to null and set `var.privileged` to `true`.
```yaml
# core-ue1-root
components:
terraform:
security-hub/root/ue1:
metadata:
component: security-hub
backend:
s3:
role_arn: null
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
privileged: true
```
```bash
atmos terraform apply security-hub/root/ue1 -s core-ue1-root
atmos terraform apply security-hub/root/ue2 -s core-ue2-root
atmos terraform apply security-hub/root/uw1 -s core-uw1-root
# ... other regions
```
### Deploy Organization Settings in Delegated Administrator Account
Finally, the component is deployed to the Delegated Administrator Account again in order to create the organization-wide
Security Hub configuration for the AWS Organization, but with `var.admin_delegated` set to `true` this time to indicate
that the delegation from the Organization Management account has already been performed.
```yaml
# core-ue1-security
components:
terraform:
security-hub/org-settings/ue1:
metadata:
component: security-hub
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
admin_delegated: true
```
```bash
atmos terraform apply security-hub/org-settings/ue1 -s core-ue1-security
atmos terraform apply security-hub/org-settings/ue2 -s core-ue2-security
atmos terraform apply security-hub/org-settings/uw1 -s core-uw1-security
# ... other regions
```
## Product Subscriptions
Product subscriptions enable Security Hub to receive and aggregate findings from AWS security services. The component
supports automatic integration with:
| Product | Default | Description |
|------------------|---------|--------------------------------------|
| GuardDuty | `true` | Threat detection findings |
| Inspector | `true` | Vulnerability scanning findings |
| Macie | `true` | Sensitive data discovery findings |
| Config | `true` | Configuration compliance findings |
| Access Analyzer | `true` | External access findings |
| Firewall Manager | `false` | Firewall policy compliance findings |
Product subscriptions are only created during Step 1 (delegated administrator deployment) and use partition-aware ARN
format for GovCloud compatibility.
### Verification
After deployment, verify product subscriptions:
```bash
# Via Terraform output
atmos terraform output security-hub/delegated-administrator/ue1 -s core-ue1-security
# Via AWS CLI
aws securityhub list-enabled-products-for-import --region us-east-1
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_tenant` (`string`) optional
The tenant where the `account_map` component required by remote-state is deployed
**Default value:** `"core"`
`admin_delegated` (`bool`) optional
A flag to indicate if the AWS Organization-wide settings should be created. This can only be done after the Security
Hub Administrator account has already been delegated from the AWS Org Management account (usually 'root'). See the
Deployment section of the README for more information.
**Default value:** `false`
`auto_enable_organization_members` (`bool`) optional
Flag to toggle auto-enablement of Security Hub for new member accounts in the organization.
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/securityhub_organization_configuration#auto_enable
**Default value:** `true`
`cloudwatch_event_rule_pattern_detail_type` (`string`) optional
The detail-type pattern used to match events that will be sent to SNS.
For more information, see:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html
https://docs.aws.amazon.com/eventbridge/latest/userguide/event-types.html
**Default value:** `"Security Hub Findings - Imported"`
`create_sns_topic` (`bool`) optional
Flag to indicate whether an SNS topic should be created for notifications. If you want to send findings to a new SNS
topic, set this to true and provide a valid configuration for subscribers.
**Default value:** `false`
`default_standards_enabled` (`bool`) optional
Flag to indicate whether default standards should be enabled
**Default value:** `true`
`delegated_administrator_account_name` (`string`) optional
The name of the account that is the AWS Organization Delegated Administrator account
**Default value:** `"core-security"`
`enabled_standards` (`set(string)`) optional
A list of standards to enable in the account.
For example:
- standards/aws-foundational-security-best-practices/v/1.0.0
- ruleset/cis-aws-foundations-benchmark/v/1.2.0
- standards/pci-dss/v/3.2.1
- standards/cis-aws-foundations-benchmark/v/1.4.0
**Default value:** `[ ]`
`finding_aggregation_region` (`string`) optional
If finding aggregation is enabled, the region that collects findings
**Default value:** `null`
`finding_aggregator_enabled` (`bool`) optional
Flag to indicate whether a finding aggregator should be created
If you want to aggregate findings from more than one region, set this to `true`.
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/securityhub_finding_aggregator
**Default value:** `false`
`finding_aggregator_linking_mode` (`string`) optional
Linking mode to use for the finding aggregator.
The possible values are:
- `ALL_REGIONS` - Aggregate from all regions
- `ALL_REGIONS_EXCEPT_SPECIFIED` - Aggregate from all regions except those specified in `var.finding_aggregator_regions`
- `SPECIFIED_REGIONS` - Aggregate from regions specified in `var.finding_aggregator_regions`
**Default value:** `"ALL_REGIONS"`
`finding_aggregator_regions` (`any`) optional
A list of regions to aggregate findings from.
This is only used if `finding_aggregator_enabled` is `true`.
**Default value:** `null`
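For example, aggregating findings from two specific regions into the deployment region might look like the following (the component instance name and region list are illustrative):

```yaml
components:
  terraform:
    security-hub/delegated-administrator/ue1:
      vars:
        finding_aggregator_enabled: true
        finding_aggregator_linking_mode: "SPECIFIED_REGIONS"
        finding_aggregator_regions:
          - us-west-2
          - eu-west-1
```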
`findings_notification_arn` (`string`) optional
The ARN for an SNS topic to send findings notifications to. This is only used if create_sns_topic is false.
If you want to send findings to an existing SNS topic, set this to the ARN of the existing topic and set
create_sns_topic to false.
**Default value:** `null`
`global_environment` (`string`) optional
Global environment name
**Default value:** `"gbl"`
`import_profile_name` (`string`) optional
AWS Profile name to use when importing a resource
**Default value:** `null`
`import_role_arn` (`string`) optional
IAM Role ARN to use when importing a resource
**Default value:** `null`
`organization_management_account_name` (`string`) optional
The name of the AWS Organization management account
**Default value:** `null`
`privileged` (`bool`) optional
true if the default provider already has access to the backend
**Default value:** `false`
`product_subscriptions` optional
Map of AWS service product subscriptions to enable in Security Hub.
Product subscriptions allow Security Hub to receive findings from AWS security services.
Default values:
- guardduty: true (enable GuardDuty findings integration)
- inspector: true (enable Inspector findings integration)
- macie: true (enable Macie findings integration)
- config: true (enable Config findings integration)
- access_analyzer: true (enable Access Analyzer findings integration)
- firewall_manager: false (disabled by default - enable if using Firewall Manager)
Note: Product subscriptions can be enabled even if the source service is not yet deployed.
The subscription will simply wait for findings once the service is enabled.
For more information, see:
https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-providers.html
**Type:**
```hcl
object({
guardduty = optional(bool, true)
inspector = optional(bool, true)
macie = optional(bool, true)
config = optional(bool, true)
access_analyzer = optional(bool, true)
firewall_manager = optional(bool, false)
})
```
**Default value:** `{ }`
`root_account_stage` (`string`) optional
The stage name for the Organization root (management) account. This is used to lookup account IDs from account names
using the `account-map` component.
**Default value:** `"root"`
`subscribers` optional
A map of subscription configurations for SNS topics
For more information, see:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic_subscription#argument-reference
protocol:
The protocol to use. The possible values for this are: sqs, sms, lambda, application. (http or https are partially
supported, see link) (email is an option but is unsupported in terraform, see link).
endpoint:
The endpoint to send data to, the contents will vary with the protocol. (see link for more information)
endpoint_auto_confirms:
Boolean indicating whether the endpoint is capable of auto confirming subscription, e.g., PagerDuty. Default is
false.
raw_message_delivery:
Boolean indicating whether or not to enable raw message delivery (the original message is directly passed, not
wrapped in JSON with the original message in the message property). Default is false.
**Type:**
```hcl
map(object({
protocol = string
endpoint = string
endpoint_auto_confirms = bool
raw_message_delivery = bool
}))
```
**Default value:** `{ }`
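Putting `create_sns_topic` and `subscribers` together, a sketch that creates a new SNS topic with an SQS subscriber (the subscriber key and queue ARN are illustrative):

```yaml
components:
  terraform:
    security-hub/delegated-administrator/ue1:
      vars:
        create_sns_topic: true
        subscribers:
          findings-queue:
            protocol: "sqs"
            endpoint: "arn:aws:sqs:us-east-1:111111111111:security-hub-findings"
            endpoint_auto_confirms: false
            raw_message_delivery: true
```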
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
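To make the `format`/`labels` mechanics concrete, here is a hypothetical Python sketch of how a descriptor is rendered: the label values (already normalized, as they appear in `id`) are looked up in the given order and passed to the format string. The function name and context values are illustrative, not part of the module.

```python
# Illustrative sketch: render one descriptor from a format string and
# an ordered list of labels, mimicking Terraform's format() call.
def render_descriptor(fmt, labels, context):
    # Look up each label's (normalized) value in the given order.
    values = tuple(context[label] for label in labels)
    return fmt % values

ctx = {"namespace": "eg", "environment": "use1", "stage": "dev", "name": "app"}
print(render_descriptor("%s-%s", ["stage", "name"], ctx))  # dev-app
```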
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
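For context, `label_order` drives how `terraform-null-label` assembles the `id`: the non-empty labels are taken in this order and joined by the delimiter. A simplified Python sketch of the idea (label values are illustrative):

```python
# Simplified sketch of null-label id construction: keep non-empty labels
# in label_order, then join them with the delimiter.
def build_id(labels, order, delimiter="-"):
    parts = [labels[k] for k in order if labels.get(k)]
    return delimiter.join(parts)

labels = {"namespace": "eg", "environment": "use1", "stage": "dev",
          "name": "app", "attributes": ""}
print(build_id(labels, ["namespace", "environment", "stage", "name", "attributes"]))
# eg-use1-dev-app
```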
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
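A quick Python illustration of what the default pattern does to an ID element (the input value is hypothetical):

```python
import re

# Illustrative sketch: strip characters matching regex_replace_chars.
# The default pattern keeps only letters, digits, and hyphens.
def sanitize(element, pattern=r"[^a-zA-Z0-9-]"):
    return re.sub(pattern, "", element)

print(sanitize("my_app.v2"))  # myappv2
```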
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`delegated_administrator_account_id`
The AWS Account ID of the AWS Organization delegated administrator account
`product_subscriptions`
ARNs of Security Hub product subscriptions for AWS service integrations
`sns_topic_name`
The name of the SNS topic created by the component
`sns_topic_subscriptions`
The SNS topic subscriptions created by the component
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 5.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Providers
- `aws`, version: `>= 5.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`security_hub` | 0.12.2 | [`cloudposse/security-hub/aws`](https://registry.terraform.io/modules/cloudposse/security-hub/aws/0.12.2) | If we are running in the AWS Org designated administrator account, enable Security Hub and optionally enable standards and finding aggregation
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_securityhub_account.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/securityhub_account) (resource)
- [`aws_securityhub_organization_admin_account.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/securityhub_organization_admin_account) (resource)
- [`aws_securityhub_organization_configuration.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/securityhub_organization_configuration) (resource)
- [`aws_securityhub_product_subscription.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/securityhub_product_subscription) (resource)
- [`awsutils_security_hub_organization_settings.this`](https://registry.terraform.io/providers/cloudposse/awsutils/latest/docs/resources/security_hub_organization_settings) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_partition.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_region.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) (data source)
---
## ses
This component provisions Amazon Simple Email Service (SES) to act as an SMTP gateway.
By default, the component sets up SES domain identity with DKIM and domain verification via Route53, suitable for use with IAM roles (e.g., ECS task roles).
Optionally, an IAM user and group can be created for SMTP authentication by setting `ses_user_enabled` and `ses_group_enabled` to `true`. When enabled, credentials are stored in SSM Parameter Store and encrypted with a dedicated KMS key.
## Usage
**Stack Level**: Regional
:::important
This release changes the default of `ses_user_enabled` from `true` to `false`.
Existing stacks that still need SMTP credentials must set `ses_user_enabled: true`
(and `ses_group_enabled: true` if they need the IAM group) before applying this version,
or Terraform will destroy the IAM/KMS/SSM resources created by earlier releases.
:::
Here's an example snippet for how to use this component with IAM roles (the default, recommended for ECS/Lambda workloads):
```yaml
components:
terraform:
ses:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: ses
# format(domain_template, tenant, environment, stage)
# produces: dev.use1.platform.acme.org
domain_template: "%[2]s.%[3]s.%[1]s.acme.org"
tags:
Team: sre
Service: ses
```
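To sanity-check a `domain_template` before applying, you can emulate Terraform's positional `%[N]s` format verbs. A hypothetical Python translation, using the Cloud Posse convention that `environment` is the region code (e.g. `use1`) and `stage` is the account name (e.g. `dev`):

```python
import re

# Emulate Terraform's format() positional verbs: "%[N]s" is 1-indexed,
# while Python's "{N}" is 0-indexed, so shift each index down by one.
def render_domain(template, tenant, environment, stage):
    py = re.sub(r"%\[(\d+)\]s",
                lambda m: "{" + str(int(m.group(1)) - 1) + "}",
                template)
    return py.format(tenant, environment, stage)

print(render_domain("%[2]s.%[3]s.%[1]s.acme.org", "platform", "use1", "dev"))
# use1.dev.platform.acme.org
```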
To create an IAM user with SMTP credentials stored in SSM Parameter Store (for legacy or third-party integrations):
```yaml
components:
terraform:
ses:
vars:
enabled: true
name: ses
domain_template: "%[2]s.%[3]s.%[1]s.acme.org"
ses_user_enabled: true
ses_group_enabled: true
ssm_prefix: "/ses"
tags:
Team: sre
Service: ses
```
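Note that the stored SMTP password is not the IAM secret access key itself: AWS derives it from the secret key with a documented SigV4-based conversion (the Terraform AWS provider exposes the result as the `ses_smtp_password_v4` attribute of `aws_iam_access_key`). A sketch of that published algorithm, for illustration only:

```python
import base64
import hashlib
import hmac

# AWS's documented conversion of an IAM secret access key into an
# SES SMTP password (SigV4 derivation with a fixed date of "11111111").
def ses_smtp_password(secret_access_key, region):
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
    key = sign(("AWS4" + secret_access_key).encode("utf-8"), "11111111")
    for step in (region, "ses", "aws4_request", "SendRawEmail"):
        key = sign(key, step)
    # Prepend the version byte (0x04) and base64-encode the result.
    return base64.b64encode(b"\x04" + key).decode("utf-8")
```

Because the derivation is deterministic per (secret key, region) pair, rotating the IAM access key rotates the SMTP password as well.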
If you want to provide the Route53 zone ID directly instead of looking it up via the `dns-delegated` remote state:
```yaml
components:
terraform:
ses:
vars:
enabled: true
name: ses
domain_template: "%[2]s.%[3]s.%[1]s.acme.org"
zone_id: "Z1234567890"
tags:
Team: sre
Service: ses
```
## Variables
### Required Variables
`domain_template` (`string`) required
The `format()` string used to generate the base domain name for sending and receiving email with Amazon SES: `format(var.domain_template, var.tenant, var.environment, var.stage)`.
### Optional Variables
`dns-delegated` component environment name
**Default value:** `null`
`ses_group_enabled` (`bool`) optional
Creates a group with permission to send emails from SES domain
**Default value:** `false`
`ses_user_enabled` (`bool`) optional
Creates user with permission to send emails from SES domain
**Default value:** `false`
`ses_verify_dkim` (`bool`) optional
If provided the module will create Route53 DNS records used for DKIM verification.
**Default value:** `true`
`ses_verify_domain` (`bool`) optional
If provided the module will create Route53 DNS records used for domain verification.
**Default value:** `true`
`ssm_prefix` (`string`) optional
The prefix to use for the SSM parameters
**Default value:** `"/ses"`
`zone_id` (`string`) optional
Route53 hosted zone ID. If provided, bypasses the `dns-delegated` remote state lookup.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`domain`
The SES domain name
`ses_domain_identity_arn`
The ARN of the SES domain identity
`smtp_password`
The SMTP password. Only available when `ses_user_enabled` is `true`. This value is stored in Terraform state, so protect the state backend with encryption and access controls.
`smtp_user`
Access key ID of the IAM user. Only available when `ses_user_enabled` is `true`
`user_arn`
The ARN of the IAM user. Only available when `ses_user_enabled` is `true`
`user_name`
Normalized name of the IAM user. Only available when `ses_user_enabled` is `true`
`user_unique_id`
The unique ID of the IAM user. Only available when `ses_user_enabled` is `true`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `awsutils`, version: `>= 0.11.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`dns_gbl_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`kms_key_ses` | 0.12.2 | [`cloudposse/kms-key/aws`](https://registry.terraform.io/modules/cloudposse/kms-key/aws/0.12.2) | n/a
`ses` | 0.25.1 | [`cloudposse/ses/aws`](https://registry.terraform.io/modules/cloudposse/ses/aws/0.25.1) | n/a
`ssm_parameter_store` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_iam_policy_document.kms_key_ses`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
---
## sftp
This component is responsible for provisioning SFTP Endpoints.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
sftp:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
```
## Variables
### Required Variables
`hosted_zone_suffix` (`string`) required
The hosted zone name suffix. The stage name will be prefixed to this suffix.
`region` (`string`) required
AWS Region. SFTP endpoints are region-specific; this identifies the region.
`s3_bucket_context` (`any`) required
The s3 bucket context map of inputs. The same null label inputs can be provided. Provide the `name` to find the s3 bucket using a data source.
### Optional Variables
A list of address allocation IDs that are required to attach an Elastic IP address to your SFTP server's endpoint. This property can only be used when endpoint_type is set to VPC.
**Default value:** `[ ]`
`domain` (`string`) optional
Where your files are stored. S3 or EFS
**Default value:** `"S3"`
`domain_name` (`string`) optional
Domain to use when connecting to the SFTP endpoint
**Default value:** `""`
`eip_enabled` (`bool`) optional
Whether to provision and attach an Elastic IP to be used as the SFTP endpoint. An EIP will be provisioned per subnet.
**Default value:** `false`
`force_destroy` (`bool`) optional
Forces the AWS Transfer Server to be destroyed
**Default value:** `false`
`restricted_home` (`bool`) optional
Restricts SFTP users so they only have access to their home directories.
**Default value:** `true`
`security_group_rules` (`list(any)`) optional
A list of Security Group rule objects to add to the created security group.
**Default value:**
```hcl
[
{
"cidr_blocks": [
"0.0.0.0/0"
],
"from_port": 22,
"protocol": "tcp",
"to_port": 22,
"type": "ingress"
}
]
```
`security_policy_name` (`string`) optional
Specifies the name of the security policy that is attached to the server. Possible values are TransferSecurityPolicy-2018-11, TransferSecurityPolicy-2020-06, and TransferSecurityPolicy-FIPS-2020-06.
**Default value:** `"TransferSecurityPolicy-2018-11"`
`sftp_users` (`any`) optional
List of SFTP usernames and public keys
**Default value:** `{ }`
`vpc_endpoint_id` (`string`) optional
The ID of the VPC endpoint. This property can only be used when endpoint_type is set to VPC_ENDPOINT
**Default value:** `null`
A list of security group IDs that are available to attach to your server's endpoint. If no security groups are specified, the VPC's default security groups are automatically assigned to your endpoint. This property can only be used when endpoint_type is set to VPC.
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`sftp`
The SFTP module outputs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `awsutils`, version: `>= 0.11.0, < 6.0.0`
- `local`, version: `>= 2.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`s3_context` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`security_group` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`sftp` | 2.3.1 | [`cloudposse/transfer-sftp/aws`](https://registry.terraform.io/modules/cloudposse/transfer-sftp/aws/2.3.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_route53_zone.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/route53_zone) (data source)
- [`aws_s3_bucket.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket) (data source)
---
## site-to-site-vpn
This component provisions a [Site-To-Site VPN](https://aws.amazon.com/vpn/site-to-site-vpn/) with a target AWS VPC on
one side of the tunnel. The other (customer) side can be any VPN gateway endpoint, e.g. a hardware device, other cloud
VPN, etc.
AWS Site-to-Site VPN is a fully-managed service that creates a secure connection between your data center or branch
office and your AWS resources using IP Security (IPSec) tunnels. When using Site-to-Site VPN, you can connect to both
your Amazon Virtual Private Clouds (VPC) and AWS Transit Gateway, and two tunnels per connection are used for increased
redundancy.
The component provisions the following resources:
- AWS Virtual Private Gateway (a representation of the AWS side of the tunnel)
- AWS Customer Gateway (a representation of the other (remote) side of the tunnel). It requires:
- The gateway's Border Gateway Protocol (BGP) Autonomous System Number (ASN)
- `/32` IP of the VPN endpoint
- AWS Site-To-Site VPN connection. It creates two VPN tunnels for redundancy and requires:
- The IP CIDR ranges on each side of the tunnel
- Pre-shared Keys for each tunnel (can be auto-generated if not provided and saved into SSM Parameter Store)
- (Optional) IP CIDR ranges to be used inside each VPN tunnel
- Route table entries to direct the appropriate traffic from the local VPC to the other side of the tunnel
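The optional tunnel inside CIDRs (e.g. `vpn_connection_tunnel1_inside_cidr` in the example below) must each be a `/30` from the `169.254.0.0/16` link-local range, and AWS reserves a handful of `/30` blocks that cannot be used. A hypothetical Python check of those constraints (reserved list per AWS's documented exclusions):

```python
import ipaddress

# AWS-reserved /30 blocks that may not be used as tunnel inside CIDRs.
RESERVED = {
    "169.254.0.0/30", "169.254.1.0/30", "169.254.2.0/30",
    "169.254.3.0/30", "169.254.4.0/30", "169.254.5.0/30",
    "169.254.169.252/30",
}

def valid_tunnel_inside_cidr(cidr):
    net = ipaddress.ip_network(cidr)
    return (net.prefixlen == 30
            and net.subnet_of(ipaddress.ip_network("169.254.0.0/16"))
            and str(net) not in RESERVED)

print(valid_tunnel_inside_cidr("169.254.20.0/30"))  # True
print(valid_tunnel_inside_cidr("169.254.0.0/30"))   # False (reserved)
```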
## Usage
Stack Level: Regional
Example configuration:
```yaml
components:
terraform:
site-to-site-vpn:
metadata:
component: site-to-site-vpn
vars:
enabled: true
name: "site-to-site-vpn"
vpc_component_name: vpc
customer_gateway_bgp_asn: 65000
customer_gateway_ip_address: 20.200.30.0
vpn_gateway_amazon_side_asn: 64512
vpn_connection_static_routes_only: true
vpn_connection_tunnel1_inside_cidr: 169.254.20.0/30
vpn_connection_tunnel2_inside_cidr: 169.254.21.0/30
vpn_connection_local_ipv4_network_cidr: 10.100.128.0/24
vpn_connection_remote_ipv4_network_cidr: 10.10.80.0/24
vpn_connection_static_routes_destinations:
- 10.100.128.0/24
vpn_connection_tunnel1_startup_action: add
vpn_connection_tunnel2_startup_action: add
transit_gateway_enabled: false
vpn_connection_tunnel1_cloudwatch_log_enabled: false
vpn_connection_tunnel2_cloudwatch_log_enabled: false
preshared_key_enabled: true
ssm_enabled: true
ssm_path_prefix: "/site-to-site-vpn"
```
Provisioning:
```sh
atmos terraform plan site-to-site-vpn -s <stack>
atmos terraform apply site-to-site-vpn -s <stack>
```
Post-tunnel creation requirements:
Once the site-to-site VPN resources are deployed, send the VPN configuration from the AWS side to the
administrator of the remote side of the VPN connection. To do this:
1. Determine the infrastructure that will be used for the remote side, specifically vendor, platform, software version, and IKE version.
2. Log into the target AWS account and open the VPC console.
3. Navigate to `Virtual Private Network` > `Site-to-Site VPN Connections`.
4. Select the VPN connection that was created via this component.
5. Click `Download Configuration` (top right).
6. Enter the information you obtained and click `Download`.
7. Send the configuration file to the administrator of the remote side of the tunnel.
Amazon side Autonomous System Number (ASN):
The variable `vpn_gateway_amazon_side_asn` (Amazon side ASN) is not strictly required when creating an AWS VPN Gateway.
If you do not specify it during creation, AWS automatically assigns the default Amazon-side ASN (64512; virtual private gateways created before mid-2018 used 7224).
Specifying the Amazon side ASN can be important if you integrate the VPN with an on-premises network that uses BGP and
you want to avoid ASN conflicts or require a specific ASN for routing policies. If your use case involves BGP peering
and you need a specific ASN for the Amazon side, explicitly set `vpn_gateway_amazon_side_asn`. Otherwise, it can be
omitted (set to `null`) and AWS will handle it automatically.
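As an illustration, a BGP-based (dynamic routing) variant of the configuration above might pin both ASNs explicitly. This is a sketch; the ASN values are illustrative private ASNs, not requirements:

```yaml
vpn_connection_static_routes_only: false   # use BGP instead of static routes
customer_gateway_bgp_asn: 65000            # remote (on-premises) side, private ASN
vpn_gateway_amazon_side_asn: 64512         # Amazon side; omit (null) to let AWS assign it
```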
## Variables
### Required Variables
`customer_gateway_bgp_asn` (`number`) required
The Customer Gateway's Border Gateway Protocol (BGP) Autonomous System Number (ASN)
`region` (`string`) required
AWS Region
### Optional Variables
`customer_gateway_ip_address` (`string`) optional
The IPv4 address for the Customer Gateway device's outside interface. Set to `null` to not create the Customer Gateway
**Default value:** `null`
`existing_transit_gateway_id` (`string`) optional
Existing Transit Gateway ID. If provided, the module will not create a Virtual Private Gateway and will instead use the Transit Gateway. To set up a Transit Gateway, use the `cloudposse/transit-gateway/aws` module and pass its `transit_gateway_id` output to this variable
**Default value:** `""`
`preshared_key_enabled` (`bool`) optional
Flag to enable adding the preshared keys to the VPN connection
**Default value:** `true`
`ssm_enabled` (`bool`) optional
Flag to enable saving the `tunnel1_preshared_key` and `tunnel2_preshared_key` in the SSM Parameter Store
**Default value:** `false`
`ssm_path_prefix` (`string`) optional
SSM Key path prefix for the associated SSM parameters
**Default value:** `""`
`transit_gateway_enabled` (`bool`) optional
Set to `true` to connect the VPN to a Transit Gateway instead of a Virtual Private Gateway, and then pass in `existing_transit_gateway_id`
**Default value:** `false`
`transit_gateway_route_table_id` (`string`) optional
The ID of the Transit Gateway route table with which to associate, and to which to propagate, the VPN connection's TGW attachment
**Default value:** `null`
`transit_gateway_routes` optional
A map of transit gateway routes to create on the given TGW route table (via `transit_gateway_route_table_id`) for the created VPN Attachment. Use the key in the map to describe the route
**Type:**
```hcl
map(object({
blackhole = optional(bool, false)
destination_cidr_block = string
}))
```
**Default value:** `{ }`
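A hypothetical `transit_gateway_routes` map might look like the following (the map keys and CIDR blocks are illustrative):

```yaml
transit_gateway_routes:
  on-prem-primary:
    destination_cidr_block: 10.10.80.0/24
  unused-range:
    blackhole: true
    destination_cidr_block: 10.10.99.0/24
```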
`vpc_component_name` (`string`) optional
Atmos VPC component name
**Default value:** `"vpc"`
`vpn_connection_static_routes_destinations` (`list(string)`) optional
List of CIDR blocks to be used as destinations for static routes. Routes to these destinations will be propagated to the VPC route tables
**Default value:** `[ ]`
`vpn_connection_static_routes_only` (`bool`) optional
If set to `true`, the VPN connection will use static routes exclusively. Static routes must be used for devices that don't support BGP
**Default value:** `false`
`vpn_connection_tunnel1_dpd_timeout_action` (`string`) optional
The action to take after DPD timeout occurs for the first VPN tunnel. Specify `restart` to restart the IKE initiation. Specify `clear` to end the IKE session. Valid values are `clear` | `none` | `restart`
**Default value:** `"clear"`
`vpn_connection_tunnel1_phase1_dh_group_numbers` (`list(number)`) optional
List of one or more Diffie-Hellman group numbers that are permitted for the first VPN tunnel for phase 1 IKE negotiations. Valid values are 2 | 5 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24
**Default value:** `[ ]`
`vpn_connection_tunnel1_phase1_encryption_algorithms` (`list(string)`) optional
List of one or more encryption algorithms that are permitted for the first VPN tunnel for phase 1 IKE negotiations. Valid values are AES128 | AES256 | AES128-GCM-16 | AES256-GCM-16
**Default value:** `[ ]`
`vpn_connection_tunnel1_phase1_integrity_algorithms` (`list(string)`) optional
One or more integrity algorithms that are permitted for the first VPN tunnel for phase 1 IKE negotiations. Valid values are SHA1 | SHA2-256 | SHA2-384 | SHA2-512
**Default value:** `[ ]`
`vpn_connection_tunnel1_phase2_dh_group_numbers` (`list(number)`) optional
List of one or more Diffie-Hellman group numbers that are permitted for the first VPN tunnel for phase 2 IKE negotiations. Valid values are 2 | 5 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24
**Default value:** `[ ]`
`vpn_connection_tunnel1_phase2_encryption_algorithms` (`list(string)`) optional
List of one or more encryption algorithms that are permitted for the first VPN tunnel for phase 2 IKE negotiations. Valid values are AES128 | AES256 | AES128-GCM-16 | AES256-GCM-16
**Default value:** `[ ]`
`vpn_connection_tunnel1_phase2_integrity_algorithms` (`list(string)`) optional
One or more integrity algorithms that are permitted for the first VPN tunnel for phase 2 IKE negotiations. Valid values are SHA1 | SHA2-256 | SHA2-384 | SHA2-512
**Default value:** `[ ]`
`vpn_connection_tunnel1_preshared_key` (`string`) optional
The preshared key of the first VPN tunnel. The preshared key must be between 8 and 64 characters in length and cannot start with zero. Allowed characters are alphanumeric characters, periods (.) and underscores (_)
**Default value:** `""`
`vpn_connection_tunnel1_startup_action` (`string`) optional
The action to take when establishing the tunnel for the first VPN connection. By default, your customer gateway device must initiate the IKE negotiation and bring up the tunnel. Specify `start` for AWS to initiate the IKE negotiation. Valid values are `add` | `start`
**Default value:** `"add"`
`vpn_connection_tunnel2_dpd_timeout_action` (`string`) optional
The action to take after DPD timeout occurs for the second VPN tunnel. Specify `restart` to restart the IKE initiation. Specify `clear` to end the IKE session. Valid values are `clear` | `none` | `restart`
**Default value:** `"clear"`
`vpn_connection_tunnel2_phase1_dh_group_numbers` (`list(number)`) optional
List of one or more Diffie-Hellman group numbers that are permitted for the second VPN tunnel for phase 1 IKE negotiations. Valid values are 2 | 5 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24
**Default value:** `[ ]`
`vpn_connection_tunnel2_phase1_encryption_algorithms` (`list(string)`) optional
List of one or more encryption algorithms that are permitted for the second VPN tunnel for phase 1 IKE negotiations. Valid values are AES128 | AES256 | AES128-GCM-16 | AES256-GCM-16
**Default value:** `[ ]`
`vpn_connection_tunnel2_phase1_integrity_algorithms` (`list(string)`) optional
One or more integrity algorithms that are permitted for the second VPN tunnel for phase 1 IKE negotiations. Valid values are SHA1 | SHA2-256 | SHA2-384 | SHA2-512
**Default value:** `[ ]`
`vpn_connection_tunnel2_phase2_dh_group_numbers` (`list(number)`) optional
List of one or more Diffie-Hellman group numbers that are permitted for the second VPN tunnel for phase 2 IKE negotiations. Valid values are 2 | 5 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24
**Default value:** `[ ]`
`vpn_connection_tunnel2_phase2_encryption_algorithms` (`list(string)`) optional
List of one or more encryption algorithms that are permitted for the second VPN tunnel for phase 2 IKE negotiations. Valid values are AES128 | AES256 | AES128-GCM-16 | AES256-GCM-16
**Default value:** `[ ]`
`vpn_connection_tunnel2_phase2_integrity_algorithms` (`list(string)`) optional
One or more integrity algorithms that are permitted for the second VPN tunnel for phase 2 IKE negotiations. Valid values are SHA1 | SHA2-256 | SHA2-384 | SHA2-512
**Default value:** `[ ]`
`vpn_connection_tunnel2_preshared_key` (`string`) optional
The preshared key of the second VPN tunnel. The preshared key must be between 8 and 64 characters in length and cannot start with zero. Allowed characters are alphanumeric characters, periods (.) and underscores (_)
**Default value:** `""`
`vpn_connection_tunnel2_startup_action` (`string`) optional
The action to take when establishing the tunnel for the second VPN connection. By default, your customer gateway device must initiate the IKE negotiation and bring up the tunnel. Specify `start` for AWS to initiate the IKE negotiation. Valid values are `add` | `start`
**Default value:** `"add"`
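To restrict a tunnel to stronger IKE proposals, the phase 1/phase 2 lists can be set explicitly. The following is a sketch for the first tunnel; the variable names assume the `vpn_connection_tunnel1_*` naming used by this component's inputs:

```yaml
vpn_connection_tunnel1_phase1_dh_group_numbers: [14]
vpn_connection_tunnel1_phase1_encryption_algorithms: ["AES256"]
vpn_connection_tunnel1_phase1_integrity_algorithms: ["SHA2-256"]
vpn_connection_tunnel1_phase2_dh_group_numbers: [14]
vpn_connection_tunnel1_phase2_encryption_algorithms: ["AES256"]
vpn_connection_tunnel1_phase2_integrity_algorithms: ["SHA2-256"]
```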
`vpn_gateway_amazon_side_asn` (`number`) optional
The Autonomous System Number (ASN) for the Amazon side of the VPN Gateway. If you don't specify an ASN, the Virtual Private Gateway is created with the default ASN
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`customer_gateway_id`
Customer Gateway ID
`vpn_connection_customer_gateway_configuration`
The configuration information for the VPN connection's Customer Gateway (in the native XML format)
`vpn_connection_id`
VPN Connection ID
`vpn_connection_tunnel1_address`
The public IP address of the first VPN tunnel
`vpn_connection_tunnel1_cgw_inside_address`
The RFC 6890 link-local address of the first VPN tunnel (Customer Gateway side)
`vpn_connection_tunnel1_vgw_inside_address`
The RFC 6890 link-local address of the first VPN tunnel (Virtual Private Gateway side)
`vpn_connection_tunnel2_address`
The public IP address of the second VPN tunnel
`vpn_connection_tunnel2_cgw_inside_address`
The RFC 6890 link-local address of the second VPN tunnel (Customer Gateway side)
`vpn_connection_tunnel2_vgw_inside_address`
The RFC 6890 link-local address of the second VPN tunnel (Virtual Private Gateway side)
`vpn_gateway_id`
Virtual Private Gateway ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 2.2`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `random`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`vpn_connection` | 1.9.0 | [`cloudposse/vpn-connection/aws`](https://registry.terraform.io/modules/cloudposse/vpn-connection/aws/1.9.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.tunnel1_preshared_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.tunnel2_preshared_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`random_password.tunnel1_preshared_key`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
- [`random_password.tunnel2_preshared_key`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
## Data Sources
The following data sources are used by this module:
---
## snowflake-account
This component sets up the requirements for all other Snowflake components, including creating the Terraform service
user. Before running this component, follow the manual ClickOps steps below to create a Snowflake subscription.
## Deployment Steps
1. Open the AWS Console for the given stack.
2. Go to AWS Marketplace Subscriptions.
3. Click "Manage Subscriptions", click "Discover products", type "Snowflake" in the search bar.
4. Select "Snowflake Data Cloud"
5. Click "Continue to Subscribe"
6. Fill out the information steps using the following as an example. Note that the provided email cannot use plus-addressing labels such as
   `mdev+sbx01@example.com`.
```
First Name: John
Last Name: Smith
Email: aws@example.com
Company: Example
Country: United States
```
7. Select "Standard" and the current region. In this example, we chose "US East (Ohio)" which is the same as
`us-east-1`.
8. Continue and wait for Sign Up to complete. Note the Snowflake account ID; you can find this in the newly accessible
Snowflake console in the top right of the window.
9. Check for the Account Activation email. Note, this may be collected in a Slack notifications channel for easy access.
10. Follow the given link to create the Admin user with username `admin` and a strong password. Be sure to save that
password somewhere secure.
11. Upload that password to AWS Parameter Store under `/snowflake/$ACCOUNT/users/admin/password`, where `ACCOUNT` is the
    value given during the subscription process. This password will only be used to create a private key; all other
    authentication will be done with that key. Below is an example using a
    [chamber](https://github.com/segmentio/chamber) command:
```sh
AWS_PROFILE=$NAMESPACE-$TENANT-gbl-sbx01-admin chamber write snowflake/$ACCOUNT/users/admin password "$PASSWORD"
```
12. Finally, use Atmos to deploy this component:
```sh
atmos terraform deploy snowflake/account --stack $TENANT-use2-sbx01
```
## Migrate `chanzuckerberg/snowflake` to `snowflakedb/snowflake` provider
On May 25, 2022, the provider was transferred from the Chan Zuckerberg Initiative (CZI) GitHub organization to the `snowflakedb` organization.
To upgrade from CZI, run the following command:
```shell
terraform state replace-provider chanzuckerberg/snowflake snowflakedb/snowflake
```
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component:
```yaml
components:
terraform:
snowflake-account:
settings:
spacelift:
workspace_enabled: false
vars:
enabled: true
snowflake_account: "AB12345"
snowflake_account_region: "us-east-2"
snowflake_user_email_format: "aws.dev+%s@example.com"
tags:
Team: data
Service: snowflake
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`snowflake_account` (`string`) required
The Snowflake account given with the AWS Marketplace Subscription.
`snowflake_account_region` (`string`) required
AWS Region with the Snowflake subscription
### Optional Variables
`default_warehouse_size` (`string`) optional
The size for the default Snowflake Warehouse
**Default value:** `"xsmall"`
`global_environment_name` (`string`) optional
Global environment name
**Default value:** `"gbl"`
`privileged` (`bool`) optional
True if the default provider already has access to the backend
**Default value:** `false`
`required_tags` (`list(string)`) optional
List of required tag names
**Default value:** `[ ]`
`root_account_stage_name` (`string`) optional
The stage name for the AWS Organization root (master) account
**Default value:** `"root"`
`service_user_id` (`string`) optional
The identifier for the service user created to manage infrastructure.
**Default value:** `"terraform"`
`snowflake_admin_username` (`string`) optional
Snowflake admin username created with the initial account subscription.
**Default value:** `"admin"`
`snowflake_role_description` (`string`) optional
Comment to attach to the Snowflake Role.
**Default value:** `"Terraform service user role."`
`snowflake_username_format` (`string`) optional
Snowflake username format
**Default value:** `"%s-%s"`
SSM parameter path format for a Snowflake user. For example, `/snowflake/{{ account }}/users/{{ username }}/`
**Default value:** `"/%s/%s/%s/%s/%s"`
`terraform_user_first_name` (`string`) optional
Snowflake Terraform first name given with User creation
**Default value:** `"Terrafrom"`
`terraform_user_last_name` (`string`) optional
Snowflake Terraform last name given with User creation
**Default value:** `"User"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`snowflake_account`
The Snowflake account ID.
`snowflake_region`
The AWS Region with the Snowflake account.
`snowflake_terraform_role`
The name of the role given to the Terraform service user.
`ssm_path_terraform_user_name`
The path to the SSM parameter for the Terraform user name.
`ssm_path_terraform_user_private_key`
The path to the SSM parameter for the Terraform user private key.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 3.0, < 6.0.0`
- `random`, version: `>= 2.3`
- `snowflake`, version: `>= 0.25`
- `tls`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 3.0, < 6.0.0`
- `random`, version: `>= 2.3`
- `snowflake`, version: `>= 0.25`
- `tls`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`introspection` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | introspection module will contain the additional tags
`snowflake_account` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`snowflake_role` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | The identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (e.g. "My object"). Identifiers enclosed in double quotes are also case-sensitive.
`snowflake_warehouse` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | Identifier for the virtual warehouse; must be unique for your account. In addition, the identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (e.g. "My object" ).
`ssm_parameters` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
## Resources
The following resources are used by this module:
- [`random_password.terraform_user_password`](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password) (resource)
- [`snowflake_role.terraform`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/role) (resource)
- [`snowflake_role_grants.grant_custom_roles`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/role_grants) (resource)
- [`snowflake_role_grants.grant_system_roles`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/role_grants) (resource)
- [`snowflake_user.terraform`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/user) (resource)
- [`snowflake_warehouse.default`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/warehouse) (resource)
- [`tls_private_key.terraform_user_key`](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.snowflake_password`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## snowflake-database
All data in Snowflake is stored in database tables, logically structured as collections of columns and rows. This
component will create and control a Snowflake database, schema, and set of tables.
## Migrate `chanzuckerberg/snowflake` to `snowflakedb/snowflake` provider
On May 25, 2022, the provider was transferred from the Chan Zuckerberg Initiative (CZI) GitHub organization to the `snowflakedb` organization.
To upgrade from CZI, run the following command:
```shell
terraform state replace-provider chanzuckerberg/snowflake snowflakedb/snowflake
```
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component:
```yaml
components:
terraform:
snowflake-database:
vars:
enabled: true
tags:
Team: data
Service: snowflake
tables:
example:
comment: "An example table"
columns:
- name: "data"
type: "text"
- name: "DATE"
type: "TIMESTAMP_NTZ(9)"
- name: "extra"
type: "VARIANT"
comment: "extra data"
primary_key:
name: "pk"
keys:
- "data"
views:
select-example:
comment: "An example view"
statement: |
select * from "example";
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`data_retention_time_in_days` (`string`) optional
Time in days to retain data in Snowflake databases, schemas, and tables by default.
**Default value:** `1`
`database_comment` (`string`) optional
The comment to give to the provisioned database.
**Default value:** `"A database created for managing programmatically created Snowflake schemas and tables."`
`database_grants` (`list(string)`) optional
A list of grants to give to the database created with this component.
**Default value:**
```hcl
[
"MODIFY",
"MONITOR",
"USAGE"
]
```
`required_tags` (`list(string)`) optional
List of required tag names
**Default value:** `[ ]`
`schema_grants` (`list(string)`) optional
A list of grants to give to the schema created with this component.
**Default value:**
```hcl
[
"MODIFY",
"MONITOR",
"USAGE",
"CREATE TABLE",
"CREATE VIEW"
]
```
`table_grants` (`list(string)`) optional
A list of grants to give to the tables created with this component.
**Default value:**
```hcl
[
"SELECT",
"INSERT",
"UPDATE",
"DELETE",
"TRUNCATE",
"REFERENCES"
]
```
`tables` (`map(any)`) optional
A map of tables to create for Snowflake. A schema and database will be assigned for this group of tables.
**Default value:** `{ }`
`view_grants` (`list(string)`) optional
A list of grants to give on the views created by this component.
**Default value:**
```hcl
[
"SELECT",
"REFERENCES"
]
```
`views` (`map(any)`) optional
A map of views to create for Snowflake. The same schema and database will be assigned as for tables.
**Default value:** `{ }`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 3.0, < 6.0.0`
- `snowflake`, version: `>= 0.25`
### Providers
- `aws`, version: `>= 3.0, < 6.0.0`
- `snowflake`, version: `>= 0.25`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`introspection` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | introspection module will contain the additional tags
`snowflake_account` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`snowflake_database` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`snowflake_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | Create a standard label to define resource name for Snowflake best practice.
`snowflake_schema` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`snowflake_sequence` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
## Resources
The following resources are used by this module:
- [`snowflake_database.this`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/database) (resource)
- [`snowflake_database_grant.grant`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/database_grant) (resource)
- [`snowflake_schema.this`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/schema) (resource)
- [`snowflake_schema_grant.grant`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/schema_grant) (resource)
- [`snowflake_sequence.this`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/sequence) (resource)
- [`snowflake_table.tables`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/table) (resource)
- [`snowflake_table_grant.grant`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/table_grant) (resource)
- [`snowflake_view.view`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/view) (resource)
- [`snowflake_view_grant.grant`](https://registry.terraform.io/providers/snowflakedb/snowflake/latest/docs/resources/view_grant) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.snowflake_private_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.snowflake_username`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## sns-topic
This component is responsible for provisioning an SNS topic.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
`stacks/catalog/sns-topic/defaults.yaml` file (base component for all SNS topics with default settings):
```yaml
components:
terraform:
sns-topic/defaults:
metadata:
type: abstract
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
tags:
Team: sre
Service: sns-topic
subscribers: {}
allowed_aws_services_for_sns_published: []
kms_master_key_id: alias/aws/sns
encryption_enabled: true
sqs_queue_kms_master_key_id: alias/aws/sqs
sqs_queue_kms_data_key_reuse_period_seconds: 300
allowed_iam_arns_for_sns_publish: []
sns_topic_policy_json: ""
sqs_dlq_enabled: false
sqs_dlq_max_message_size: 262144
sqs_dlq_message_retention_seconds: 1209600
delivery_policy: null
fifo_topic: false
fifo_queue_enabled: false
content_based_deduplication: false
redrive_policy_max_receiver_count: 5
redrive_policy: null
```
```yaml
import:
- catalog/sns-topic/defaults
components:
terraform:
sns-topic-example:
metadata:
component: sns-topic
inherits:
- sns-topic/defaults
vars:
enabled: true
name: sns-topic-example
sqs_dlq_enabled: false
subscribers:
opsgenie:
protocol: "https"
endpoint: "https://api.example.com/v1/"
endpoint_auto_confirms: true
```
## Variables
### Required Variables
### Optional Variables
`allowed_iam_arns_for_sns_publish` (`list(string)`) optional
IAM role/user ARNs that will have permission to publish to the SNS topic. Used when no external JSON policy is supplied.
**Default value:** `[ ]`
`content_based_deduplication` (`bool`) optional
Enable content-based deduplication for FIFO topics
**Default value:** `false`
`delivery_policy` (`string`) optional
The SNS delivery policy as JSON.
**Default value:** `null`
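When set, the value must be a JSON-encoded SNS delivery policy. A minimal sketch following the documented AWS shape (the retry values here are illustrative, not recommendations):

```yaml
delivery_policy: |
  {
    "http": {
      "defaultHealthyRetryPolicy": {
        "minDelayTarget": 20,
        "maxDelayTarget": 20,
        "numRetries": 3,
        "backoffFunction": "linear"
      },
      "disableSubscriptionOverrides": false
    }
  }
```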
`encryption_enabled` (`bool`) optional
Whether or not to use encryption for SNS Topic. If set to `true` and no custom value for KMS key (kms_master_key_id) is provided, it uses the default `alias/aws/sns` KMS key.
**Default value:** `true`
`fifo_queue_enabled` (`bool`) optional
Whether or not to create a FIFO (first-in-first-out) queue
**Default value:** `false`
`fifo_topic` (`bool`) optional
Whether or not to create a FIFO (first-in-first-out) topic
**Default value:** `false`
`kms_master_key_id` (`string`) optional
The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK.
**Default value:** `"alias/aws/sns"`
`redrive_policy` (`string`) optional
The SNS redrive policy as JSON. This overrides `var.redrive_policy_max_receiver_count` and the `deadLetterTargetArn` (supplied by `var.fifo_queue = true`) passed in by the module.
**Default value:** `null`
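When supplied, this should be the standard SQS redrive policy JSON. A sketch (the queue ARN is hypothetical):

```yaml
redrive_policy: |
  {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-2:111122223333:example-dlq",
    "maxReceiveCount": 5
  }
```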
`redrive_policy_max_receiver_count` (`number`) optional
The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the `ReceiveCount` for a message exceeds the `maxReceiveCount` for a queue, Amazon SQS moves the message to the dead-letter queue.
**Default value:** `5`
`sns_topic_policy_json` (`string`) optional
The fully-formed AWS policy as JSON
**Default value:** `""`
`sqs_dlq_enabled` (`bool`) optional
Enable delivery of failed notifications to SQS and monitor messages in queue.
**Default value:** `false`
`sqs_dlq_max_message_size` (`number`) optional
The limit of how many bytes a message can contain before Amazon SQS rejects it. An integer from 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this attribute is 262144 (256 KiB).
**Default value:** `262144`
`sqs_dlq_message_retention_seconds` (`number`) optional
The number of seconds Amazon SQS retains a message. An integer representing seconds, from 60 (1 minute) to 1209600 (14 days).
**Default value:** `1209600`
`sqs_queue_kms_data_key_reuse_period_seconds` (`number`) optional
The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again.
**Default value:** `300`
`sqs_queue_kms_master_key_id` (`string`) optional
The ID of an AWS-managed customer master key (CMK) for Amazon SQS Queue or a custom CMK
**Default value:** `"alias/aws/sqs"`
`subscribers` optional
Required configuration for subscribers to the SNS topic.
**Type:**
```hcl
map(object({
protocol = string
# The protocol to use. The possible values for this are: sqs, sms, lambda, application. (http or https are partially supported, see below) (email is an option but is unsupported, see below).
endpoint = string
# The endpoint to send data to, the contents will vary with the protocol. (see below for more information)
endpoint_auto_confirms = optional(bool)
# Boolean indicating whether the end point is capable of auto confirming subscription e.g., PagerDuty (default is false)
raw_message_delivery = optional(bool)
# Boolean indicating whether or not to enable raw message delivery (the original message is directly passed, not wrapped in JSON with the original message in the message property) (default is false)
}))
```
**Default value:** `{ }`
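For example, to fan out to an SQS queue in addition to an HTTPS endpoint (the queue ARN below is hypothetical):

```yaml
subscribers:
  alerts-queue:
    protocol: "sqs"
    endpoint: "arn:aws:sqs:us-east-2:111122223333:example-alerts"
    raw_message_delivery: true
```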
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`dead_letter_queue_arn`
The ARN of the dead letter queue.
`dead_letter_queue_id`
The ID for the created dead letter queue. Same as the URL.
`dead_letter_queue_name`
The name for the created dead letter queue.
`dead_letter_queue_url`
The URL for the created dead letter SQS queue.
`sns_topic_arn`
SNS topic ARN.
`sns_topic_id`
SNS topic ID.
`sns_topic_name`
SNS topic name.
`sns_topic_owner`
SNS topic owner.
`sns_topic_subscriptions`
SNS topic subscription.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`sns_topic` | 0.21.0 | [`cloudposse/sns-topic/aws`](https://registry.terraform.io/modules/cloudposse/sns-topic/aws/0.21.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## spa-s3-cloudfront
This component provisions infrastructure to serve a Single Page Application (SPA) via Amazon S3 and Amazon CloudFront.
- S3 bucket to host SPA assets
- CloudFront distribution for global CDN delivery
- ACM certificate issued in `us-east-1` (required by CloudFront)
NOTE: The component does not use the ACM certificate created by `dns-delegated`, because CloudFront requires the certificate to be issued in `us-east-1`.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
An import for all instantiations of the `spa-s3-cloudfront` component can be created at `stacks/spa/spa-defaults.yaml`:
```yaml
components:
terraform:
spa-s3-cloudfront:
vars:
# lookup GitHub Runner IAM role via remote state
github_runners_deployment_principal_arn_enabled: true
github_runners_component_name: github-runners
github_runners_tenant_name: core
github_runners_environment_name: ue2
github_runners_stage_name: auto
origin_force_destroy: false
origin_versioning_enabled: true
origin_block_public_acls: true
origin_block_public_policy: true
origin_ignore_public_acls: true
origin_restrict_public_buckets: true
origin_encryption_enabled: true
cloudfront_index_document: index.html
cloudfront_ipv6_enabled: false
cloudfront_compress: true
cloudfront_default_root_object: index.html
cloudfront_viewer_protocol_policy: redirect-to-https
```
An import for all instantiations for a specific SPA can be created at `stacks/spa/example-spa.yaml`:
```yaml
components:
terraform:
example-spa:
component: spa-s3-cloudfront
vars:
name: example-spa
site_subdomain: example-spa
cloudfront_allowed_methods:
- GET
- HEAD
cloudfront_cached_methods:
- GET
- HEAD
cloudfront_custom_error_response:
- error_caching_min_ttl: 1
error_code: 403
response_code: 200
response_page_path: /index.html
cloudfront_default_ttl: 60
cloudfront_min_ttl: 60
cloudfront_max_ttl: 60
```
Finally, the `spa-s3-cloudfront` component can be instantiated in a stack config:
```yaml
import:
- spa/example-spa
components:
terraform:
example-spa:
component: spa-s3-cloudfront
settings:
spacelift:
workspace_enabled: true
vars: {}
```
### Failover Origins
Failover origins are supported via `var.failover_s3_origin_name` and `var.failover_s3_origin_region`.
### Preview Environments
SPA preview environments (i.e. `subdomain.example.com` mapping to a `/subdomain` path in the S3 bucket) powered by
Lambda@Edge are supported via `var.preview_environment_enabled`. See both the variable description and the inline
documentation for an extensive explanation of how these preview environments work.
### Customizing Lambda@Edge
This component supports customizing Lambda@Edge functions for the CloudFront distribution. All Lambda@Edge function
configuration is deep merged before being passed to the `cloudposse/cloudfront-s3-cdn/aws//modules/lambda@edge` module.
You can add additional functions and overwrite existing functions as such:
```yaml
import:
- catalog/spa-s3-cloudfront/defaults
components:
terraform:
refarch-docs-site-spa:
metadata:
component: spa-s3-cloudfront
inherits:
- spa-s3-cloudfront-defaults
vars:
enabled: true
lambda_edge_functions:
viewer_request: # overwrite existing function
source: null # this overwrites the 404 viewer request source with deep merging
source_zip: "./dist/lambda_edge_paywall_viewer_request.zip"
runtime: "nodejs16.x"
handler: "index.handler"
event_type: "viewer-request"
include_body: false
viewer_response: # new function
source_zip: "./dist/lambda_edge_paywall_viewer_response.zip"
runtime: "nodejs16.x"
handler: "index.handler"
event_type: "viewer-response"
include_body: false
```
## Variables
### Required Variables
### Optional Variables
`cloudfront_access_log_bucket_name` (`string`) optional
When `cloudfront_access_log_create_bucket` is `false`, this is the name of the existing S3 bucket where
CloudFront access logs are to be delivered, and it is required in that case. IGNORED when `cloudfront_access_log_create_bucket` is `true`.
**Default value:** `""`
If set to `true`, then the CloudFront origin access logs bucket name will be rendered by calling `format("%v-%v-%v-%v", var.namespace, var.environment, var.stage, var.cloudfront_access_log_bucket_name)`.
Otherwise, the value for `cloudfront_access_log_bucket_name` will need to be the globally unique name of the access logs bucket.
For example, if this component produces an origin bucket named `eg-ue1-devplatform-example` and `cloudfront_access_log_bucket_name` is set to
`example-cloudfront-access-logs`, then the bucket name will be rendered to be `eg-ue1-devplatform-example-cloudfront-access-logs`.
**Default value:** `false`
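The rendering described above is equivalent to the following `format()` call (the label values are the hypothetical ones from the example):

```hcl
locals {
  cloudfront_access_log_bucket_name = format(
    "%v-%v-%v-%v",
    "eg",                            # var.namespace
    "ue1",                           # var.environment
    "devplatform",                   # var.stage
    "example-cloudfront-access-logs" # var.cloudfront_access_log_bucket_name
  )
  # => "eg-ue1-devplatform-example-cloudfront-access-logs"
}
```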
`cloudfront_access_log_create_bucket` (`bool`) optional
When `true` and `cloudfront_access_logging_enabled` is also true, this module will create a new,
separate S3 bucket to receive CloudFront Access Logs.
**Default value:** `true`
Enable or disable AWS Shield Advanced protection for the CloudFront distribution. If set to 'true', a subscription to AWS Shield Advanced must exist in this account.
**Default value:** `false`
Enable or disable AWS WAF for the CloudFront distribution.
This assumes that the `aws-waf-acl-default-cloudfront` component has been deployed to the regional stack corresponding
to `var.waf_acl_environment`.
**Default value:** `true`
`cloudfront_default_root_object` (`string`) optional
Object that CloudFront returns when the root URL is requested.
**Default value:** `"index.html"`
`cloudfront_default_ttl` (`number`) optional
Default amount of time (in seconds) that an object is in a CloudFront cache.
**Default value:** `60`
`cloudfront_index_document` (`string`) optional
Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders.
**Default value:** `"index.html"`
`cloudfront_ipv6_enabled` (`bool`) optional
Set to true to enable an AAAA DNS record to be set as well as the A record.
**Default value:** `true`
`cloudfront_lambda_function_association` optional
A config block that configures the CloudFront distribution with lambda@edge functions for specific events.
**Type:**
```hcl
list(object({
event_type = string
include_body = bool
lambda_arn = string
}))
```
**Default value:** `[ ]`
`cloudfront_max_ttl` (`number`) optional
Maximum amount of time (in seconds) that an object is in a CloudFront cache.
**Default value:** `31536000`
`cloudfront_min_ttl` (`number`) optional
Minimum amount of time that you want objects to stay in CloudFront caches.
**Default value:** `0`
The environment where the `dns-delegated` component is deployed
**Default value:** `"gbl"`
`external_aliases` (`list(string)`) optional
List of FQDN's - Used to set the Alternate Domain Names (CNAMEs) setting on CloudFront. No new Route53 records will be created for these.
Setting `process_domain_validation_options` to true may cause the component to fail if an external_alias DNS zone is not controlled by Terraform.
Setting `preview_environment_enabled` to `true` will cause this variable to be ignored.
**Default value:** `[ ]`
`failover_s3_origin_environment` (`string`) optional
The [fixed name](https://github.com/cloudposse/terraform-aws-utils/blob/399951e552483a4f4c1dc7fbe2675c443f3dbd83/main.tf#L10) of the AWS Region where the
failover S3 origin exists. Setting this variable will enable use of a failover S3 origin, but it is required for the
failover S3 origin to exist beforehand. This variable is used in conjunction with `var.failover_s3_origin_format` to
build out the name of the Failover S3 origin in the specified region.
For example, if this component creates an origin of name `eg-ue1-devplatform-example` and this variable is set to `uw1`,
then it is expected that a bucket with the name `eg-uw1-devplatform-example-failover` exists in `us-west-1`.
**Default value:** `null`
`failover_s3_origin_format` (`string`) optional
If `var.failover_s3_origin_environment` is supplied, this is the format to use for the failover S3 origin bucket name when
building the name via `format([format], var.namespace, var.failover_s3_origin_environment, var.stage, var.name)`
and then looking it up via the `aws_s3_bucket` Data Source.
For example, if this component creates an origin of name `eg-ue1-devplatform-example` and `var.failover_s3_origin_environment`
is set to `uw1`, then it is expected that a bucket with the name `eg-uw1-devplatform-example-failover` exists in `us-west-1`.
**Default value:** `"%v-%v-%v-%v-failover"`
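For example, with the default format string and the hypothetical values from the description above:

```hcl
locals {
  failover_origin_bucket = format(
    "%v-%v-%v-%v-failover",
    "eg",          # var.namespace
    "uw1",         # var.failover_s3_origin_environment
    "devplatform", # var.stage
    "example"      # var.name
  )
  # => "eg-uw1-devplatform-example-failover"
}
```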
`forward_cookies` (`string`) optional
Specifies whether you want CloudFront to forward all or no cookies to the origin. Can be 'all' or 'none'
**Default value:** `"none"`
`forward_header_values` (`list(string)`) optional
A list of whitelisted header values to forward to the origin (incompatible with `cache_policy_id`)
**Default value:**
```hcl
[
"Access-Control-Request-Headers",
"Access-Control-Request-Method",
"Origin"
]
```
A list of the GitHub repositories that are allowed to assume this role from GitHub Actions. For example,
["cloudposse/infra-live"]. Can contain "*" as wildcard.
If the org part of the repo name is omitted, "cloudposse" will be assumed.
**Default value:** `[ ]`
`github_runners_deployment_principal_arn_enabled` (`bool`) optional
A flag used to decide whether or not to include the GitHub Runner's IAM role in the `origin_deployment_principal_arns` list
**Default value:** `true`
The delay, in [Golang ParseDuration](https://pkg.go.dev/time#ParseDuration) format, to wait before destroying the Lambda@Edge
functions.
This delay is meant to circumvent Lambda@Edge functions not being immediately deletable following their dissociation from
a CloudFront distribution, since they are replicated to CloudFront Edge servers around the world.
If set to `null`, no delay will be introduced.
By default, the delay is 20 minutes. This is because it takes about 3 minutes to destroy a CloudFront distribution, and
around 15 minutes until the Lambda@Edge function is available for deletion, in most cases.
For more information, see: https://github.com/hashicorp/terraform-provider-aws/issues/1721.
**Default value:** `"20m"`
`lambda_edge_functions` optional
Lambda@Edge functions to create.
The key of this map is the name of the Lambda@Edge function.
This map will be deep merged with each enabled default function. Use deep merge to change or overwrite specific values passed by those function objects.
**Type:**
```hcl
map(object({
  source = optional(list(object({
    filename = string
    content  = string
  })))
  source_dir   = optional(string)
  source_zip   = optional(string)
  runtime      = string
  handler      = string
  event_type   = string
  include_body = bool
}))
```
**Default value:** `{ }`
`lambda_edge_handler` (`string`) optional
The default Lambda@Edge handler for all functions.
This value is deep merged in `module.lambda_edge_functions` with `var.lambda_edge_functions` and can be overwritten for any individual function.
**Default value:** `"index.handler"`
`lambda_edge_runtime` (`string`) optional
The default Lambda@Edge runtime for all functions.
This value is deep merged in `module.lambda_edge_functions` with `var.lambda_edge_functions` and can be overwritten for any individual function.
**Default value:** `"nodejs16.x"`
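For example, the runtime and handler defaults can be overridden per function via the deep merge described above. The following is a minimal sketch only; the component name `spa-s3-cloudfront`, the map key `viewer_request`, and the inline source are illustrative assumptions, not values from this reference:

```yaml
components:
  terraform:
    spa-s3-cloudfront: # assumed component name for illustration
      vars:
        lambda_edge_runtime: nodejs16.x # default for all functions
        lambda_edge_handler: index.handler
        lambda_edge_functions:
          viewer_request: # hypothetical function name (map key)
            source:
              - filename: index.js
                content: |
                  'use strict';
                  exports.handler = (event, context, callback) => {
                    // Pass the request through unchanged
                    callback(null, event.Records[0].cf.request);
                  };
            runtime: nodejs16.x
            handler: index.handler
            event_type: viewer-request
            include_body: false
```

Because the map is deep merged, you only need to specify the keys you want to change for any of the enabled default functions.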
`ordered_cache` optional
An ordered list of [cache behaviors](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudfront_distribution#cache-behavior-arguments) resource for this distribution.
List in order of precedence (first match wins). This is in addition to the default cache policy.
Set `target_origin_id` to `""` to specify the S3 bucket origin created by this module.
Set `cache_policy_id` to `""` to use `cache_policy_name` for creating a new policy. At least one of the two must be set.
Set `origin_request_policy_id` to `""` to use `origin_request_policy_name` for creating a new policy. At least one of the two must be set.
**Type:**
```hcl
list(object({
  target_origin_id   = string
  path_pattern       = string
  allowed_methods    = list(string)
  cached_methods     = list(string)
  compress           = bool
  trusted_signers    = list(string)
  trusted_key_groups = list(string)
  cache_policy_name          = optional(string)
  cache_policy_id            = optional(string)
  origin_request_policy_name = optional(string)
  origin_request_policy_id   = optional(string)
  viewer_protocol_policy     = string
  min_ttl                    = number
  default_ttl                = number
  max_ttl                    = number
  response_headers_policy_id = string
  forward_query_string              = bool
  forward_header_values             = list(string)
  forward_cookies                   = string
  forward_cookies_whitelisted_names = list(string)
  lambda_function_association = list(object({
    event_type   = string
    include_body = bool
    lambda_arn   = string
  }))
  function_association = list(object({
    event_type   = string
    function_arn = string
  }))
  origin_request_policy = optional(object({
    cookie_behavior       = optional(string, "none")
    header_behavior       = optional(string, "none")
    query_string_behavior = optional(string, "none")
    cookies               = optional(list(string), [])
    headers               = optional(list(string), [])
    query_strings         = optional(list(string), [])
  }), {})
}))
```
**Default value:** `[ ]`
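Putting the `""` conventions above together, a single cache behavior might be configured as follows. This is an illustrative sketch; the path pattern, policy names, and TTLs are assumptions, not recommended values:

```yaml
vars:
  ordered_cache:
    - target_origin_id: "" # "" selects the S3 origin created by this component
      path_pattern: "/api/*"
      allowed_methods: ["GET", "HEAD", "OPTIONS"]
      cached_methods: ["GET", "HEAD"]
      compress: true
      trusted_signers: []
      trusted_key_groups: []
      cache_policy_name: "api-cache" # used because cache_policy_id is ""
      cache_policy_id: ""
      origin_request_policy_name: "api-origin-request"
      origin_request_policy_id: ""
      viewer_protocol_policy: redirect-to-https
      min_ttl: 0
      default_ttl: 60
      max_ttl: 300
      response_headers_policy_id: ""
      forward_query_string: true
      forward_header_values: []
      forward_cookies: none
      forward_cookies_whitelisted_names: []
      lambda_function_association: []
      function_association: []
```

Behaviors are evaluated in list order, so place the most specific `path_pattern` entries first.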
Set to `true` in order to have the origin bucket require requests to use Secure Socket Layer (HTTPS/SSL). This will explicitly deny access to HTTP requests
**Default value:** `true`
`origin_bucket` (`string`) optional
Name of an existing S3 bucket to use as the origin. If this is not provided, this component will create a new S3 bucket using `var.name` and other context-related inputs
**Default value:** `null`
List of actions to permit `origin_deployment_principal_arns` to perform on bucket and bucket prefixes (see `origin_deployment_principal_arns`)
**Default value:**
```hcl
[
  "s3:PutObject",
  "s3:PutObjectAcl",
  "s3:GetObject",
  "s3:DeleteObject",
  "s3:ListBucket",
  "s3:ListBucketMultipartUploads",
  "s3:GetBucketLocation",
  "s3:AbortMultipartUpload"
]
```
List of role ARNs to grant deployment permissions to the origin Bucket.
**Default value:** `[ ]`
`origin_encryption_enabled` (`bool`) optional
When set to `true`, the origin bucket will have AES256 encryption enabled by default.
**Default value:** `true`
`origin_force_destroy` (`bool`) optional
A boolean that, when `true`, permits all objects to be deleted from the origin bucket so that the bucket can be destroyed without error. These objects are not recoverable.
**Default value:** `false`
Name of the existing S3 bucket where S3 Access Logs for the origin Bucket will be delivered. Default is not to enable S3 Access Logging for the origin Bucket.
**Default value:** `""`
If set to `true`, then the S3 origin access logs bucket name will be rendered by calling `format("%v-%v-%v-%v", var.namespace, var.environment, var.stage, var.origin_s3_access_log_bucket_name)`.
Otherwise, the value for `origin_s3_access_log_bucket_name` will need to be the globally unique name of the access logs bucket.
For example, if this component produces an origin bucket named `eg-ue1-devplatform-example` and `origin_s3_access_log_bucket_name` is set to
`example-s3-access-logs`, then the bucket name will be rendered to be `eg-ue1-devplatform-example-s3-access-logs`.
**Default value:** `false`
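Putting the rendering behavior together, a stack sketch might look like the following. Note that the name of the rendering flag is not shown in this reference, so the variable name below is a hypothetical placeholder; verify it against the component's `variables.tf`:

```yaml
vars:
  origin_s3_access_log_bucket_name: example-s3-access-logs
  # Hypothetical flag name for illustration only
  origin_s3_access_log_bucket_name_rendering_enabled: true
  # With rendering enabled, the bucket name is produced by
  # format("%v-%v-%v-%v", namespace, environment, stage, "example-s3-access-logs")
```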
`origin_s3_access_log_prefix` (`string`) optional
Prefix to use for S3 Access Log object keys. Defaults to `logs/${module.this.id}`
**Default value:** `""`
Set `true` to deliver S3 Access Logs to the `origin_s3_access_log_bucket_name` bucket.
Defaults to `false` if `origin_s3_access_log_bucket_name` is empty (the default), `true` otherwise.
Must be set explicitly if the access log bucket is being created at the same time as this module is being invoked.
**Default value:** `null`
`origin_versioning_enabled` (`bool`) optional
Enable or disable versioning for the origin Bucket. Versioning is a means of keeping multiple variants of an object in the same bucket.
**Default value:** `false`
`parent_zone_name` (`string`) optional
Parent domain name of site to publish. Defaults to format(parent_zone_name_pattern, stage, environment).
**Default value:** `""`
`preview_environment_enabled` (`bool`) optional
Enable or disable SPA Preview Environments via Lambda@Edge, i.e. mapping `subdomain.example.com` to the `/subdomain`
path in the origin S3 bucket.
This variable implicitly affects the following variables:
* `s3_website_enabled`
* `s3_website_password_enabled`
* `block_origin_public_access_enabled`
* `origin_allow_ssl_requests_only`
* `forward_header_values`
* `cloudfront_default_ttl`
* `cloudfront_min_ttl`
* `cloudfront_max_ttl`
* `cloudfront_lambda_function_association`
**Default value:** `false`
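As a sketch, enabling preview environments in a stack configuration might look like the following; the component name `spa-s3-cloudfront` is an assumption for illustration:

```yaml
components:
  terraform:
    spa-s3-cloudfront: # assumed component name
      vars:
        preview_environment_enabled: true
        # Implicitly adjusts s3_website_enabled, s3_website_password_enabled,
        # block_origin_public_access_enabled, origin_allow_ssl_requests_only,
        # forward_header_values, the CloudFront TTLs, and the Lambda@Edge
        # function associations listed above.
```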
Flag to enable/disable processing of the record to add to the DNS zone to complete certificate validation
**Default value:** `true`
`s3_object_ownership` (`string`) optional
Specifies the S3 object ownership control on the origin bucket. Valid values are `ObjectWriter`, `BucketOwnerPreferred`, and `BucketOwnerEnforced`.
**Default value:** `"ObjectWriter"`
`s3_origins` optional
A list of S3 [origins](https://www.terraform.io/docs/providers/aws/r/cloudfront_distribution.html#origin-arguments) (in addition to the one created by this component) for this distribution.
S3 buckets configured as websites are `custom_origins`, not `s3_origins`.
Specifying `s3_origin_config.origin_access_identity` as `null` or `""` will have it translated to the `origin_access_identity` used by the origin created by this component.
**Type:**
```hcl
list(object({
  domain_name = string
  origin_id   = string
  origin_path = string
  s3_origin_config = object({
    origin_access_identity = string
  })
}))
```
**Default value:** `[ ]`
`s3_website_enabled` (`bool`) optional
Set to true to enable the created S3 bucket to serve as a website independently of CloudFront,
and to use that website as the origin.
Setting `preview_environment_enabled` will implicitly set this to `true`.
**Default value:** `false`
`s3_website_password_enabled` (`bool`) optional
If set to true, and `s3_website_enabled` is also true, a password will be required in the `Referer` field of the
HTTP request in order to access the website, and CloudFront will be configured to pass this password in its requests.
This will make it much harder for people to bypass CloudFront and access the S3 website directly via its website endpoint.
**Default value:** `false`
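A minimal sketch of enabling the website origin together with the Referer-based password (how the password value itself is generated and shared with CloudFront depends on the component version, so it is not shown here):

```yaml
vars:
  s3_website_enabled: true
  s3_website_password_enabled: true # CloudFront passes the password in its Referer header
```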
`site_fqdn` (`string`) optional
Fully qualified domain name of site to publish. Overrides site_subdomain and parent_zone_name.
**Default value:** `""`
`site_subdomain` (`string`) optional
Subdomain to plug into site_name_pattern to make site FQDN.
**Default value:** `""`
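The two ways to specify the site's domain can be sketched as follows; `app` and `example.com` are placeholder values:

```yaml
vars:
  # Option 1: provide the full FQDN (overrides the other two inputs)
  site_fqdn: "app.example.com"
  # Option 2: compose the FQDN from a subdomain and parent zone
  # site_subdomain: "app"
  # parent_zone_name: "example.com"
```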
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
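As an illustration of the map shape described above (the descriptor name `stack` and its format are hypothetical examples, not defaults):

```yaml
vars:
  descriptor_formats:
    stack: # hypothetical descriptor name
      format: "%v-%v-%v"
      labels: ["tenant", "environment", "stage"]
```

This would add a `stack` key to the `descriptors` output, formatted from the normalized `tenant`, `environment`, and `stage` label values.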
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`cloudfront_distribution_alias`
Cloudfront Distribution Alias Record.
`cloudfront_distribution_domain_name`
Cloudfront Distribution Domain Name.
`cloudfront_distribution_identity_arn`
CloudFront Distribution Origin Access Identity IAM ARN.
`failover_s3_bucket_name`
Failover Origin bucket name, if enabled.
`github_actions_iam_role_arn`
ARN of IAM role for GitHub Actions
`github_actions_iam_role_name`
Name of IAM role for GitHub Actions
`origin_s3_bucket_arn`
Origin bucket ARN.
`origin_s3_bucket_name`
Origin bucket name.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`acm_request_certificate` | 0.18.0 | [`cloudposse/acm-request-certificate/aws`](https://registry.terraform.io/modules/cloudposse/acm-request-certificate/aws/0.18.0) | Create an ACM certificate and explicitly provision it in us-east-1 (a CloudFront requirement)
`dns_delegated` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`gha_assume_role` | latest | `../account-map/modules/team-assume-role-policy` | n/a
`gha_role_name` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`github_runners` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`lambda_edge` | 1.1.1 | [`cloudposse/cloudfront-s3-cdn/aws//modules/lambda@edge`](https://registry.terraform.io/modules/cloudposse/cloudfront-s3-cdn/aws/modules/lambda@edge/1.1.1) | n/a
`lambda_edge_functions` | 1.0.2 | [`cloudposse/config/yaml//modules/deepmerge`](https://registry.terraform.io/modules/cloudposse/config/yaml/modules/deepmerge/1.0.2) | n/a
`spa_web` | 1.1.1 | [`cloudposse/cloudfront-s3-cdn/aws`](https://registry.terraform.io/modules/cloudposse/cloudfront-s3-cdn/aws/1.1.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
`waf` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_cloudfront_cache_policy.created_cache_policies`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudfront_cache_policy) (resource)
- [`aws_cloudfront_origin_request_policy.created_origin_request_policies`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudfront_origin_request_policy) (resource)
- [`aws_iam_policy.additional_lambda_edge_permission`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role.github_actions`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role_policy_attachment.additional_lambda_edge_permission`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
- [`aws_shield_protection.shield_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/shield_protection) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_iam_policy_document.additional_lambda_edge_permission`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.github_actions_iam_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_s3_bucket.failover_bucket`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket) (data source)
---
## spacelift
These components are responsible for setting up Spacelift and include three components: `spacelift/admin-stack`,
`spacelift/spaces`, and `spacelift/worker-pool`.
Spacelift is a specialized, Terraform-compatible continuous integration and deployment (CI/CD) platform for
infrastructure-as-code. It's designed and implemented by long-time DevOps practitioners based on previous experience
with large-scale installations - dozens of teams, hundreds of engineers and tens of thousands of cloud resources.
## Stack Configuration
Spacelift exists outside of the AWS ecosystem, so we define these components as unique to our standard stack
organization. Spacelift Spaces are required before tenant-specific stacks are created in Spacelift, and the root
administrator stack, referred to as `root-gbl-spacelift-admin-stack`, also does not belong to a specific tenant.
Therefore, we define both outside of the standard `core` or `plat` stacks directories. That root administrator stack is
responsible for creating the tenant-specific administrator stacks, `core-gbl-spacelift-admin-stack` and
`plat-gbl-spacelift-admin-stack`.
Our solution is to define a Spacelift-specific configuration file per Spacelift Space. Typically our Spaces would be
`root`, `core`, and `plat`, so we add three files:
```diff
+ stacks/orgs/NAMESPACE/spacelift.yaml
+ stacks/orgs/NAMESPACE/core/spacelift.yaml
+ stacks/orgs/NAMESPACE/plat/spacelift.yaml
```
### Global Configuration
In order to apply common Spacelift configuration to all stacks, we need to set a few global Spacelift settings. The
`pr-comment-triggered` label will be required to trigger stacks with GitHub comments but is not required otherwise. More
on triggering Spacelift stacks to follow.
Add the following to `stacks/orgs/NAMESPACE/_defaults.yaml`:
```yaml
settings:
  spacelift:
    workspace_enabled: true # enable spacelift by default
    before_apply:
      - spacelift-configure-paths
    before_init:
      - spacelift-configure-paths
      - spacelift-write-vars
      - spacelift-tf-workspace
    before_plan:
      - spacelift-configure-paths
    labels:
      - pr-comment-triggered
Furthermore, specify additional tenant-specific Space configuration for both `core` and `plat` tenants.
For example, for `core` add the following to `stacks/orgs/NAMESPACE/core/_defaults.yaml`:
```yaml
terraform:
  settings:
    spacelift:
      space_name: core
```
And for `plat` add the following to `stacks/orgs/NAMESPACE/plat/_defaults.yaml`:
```yaml
terraform:
  settings:
    spacelift:
      space_name: plat
```
### Spacelift `root` Space
The `root` Space in Spacelift is responsible for deploying the root administrator stack, `admin-stack`, and the Spaces
component, `spaces`. This Spaces component also includes Spacelift policies. Since the root administrator stack is unique
to tenants, we modify the stack context to create a unique stack slug, `root-gbl-spacelift`.
`stacks/orgs/NAMESPACE/spacelift.yaml`:
```yaml
import:
  - mixins/region/global-region
  - orgs/NAMESPACE/_defaults
  - catalog/terraform/spacelift/admin-stack
  - catalog/terraform/spacelift/spaces

# These intentionally overwrite the default values
vars:
  tenant: root
  environment: gbl
  stage: spacelift

components:
  terraform:
    # This admin stack creates other "admin" stacks
    admin-stack:
      metadata:
        component: spacelift/admin-stack
        inherits:
          - admin-stack/default
      settings:
        spacelift:
          root_administrative: true
          labels:
            - root-admin
            - admin
      vars:
        enabled: true
        root_admin_stack: true # This stack will be created in the root space and will create all the other admin stacks as children.
        context_filters: # context_filters determine which child stacks to manage with this admin stack
          administrative: true # This stack is managing all the other admin stacks
          root_administrative: false # We don't want this stack to also find itself in the config and add itself a second time
        labels:
          - admin
        # attachments only on the root stack
        root_stack_policy_attachments:
          - TRIGGER Global administrator
        # this creates policies for the children (admin) stacks
        child_policy_attachments:
          - TRIGGER Global administrator
```
#### Deployment
> [!TIP]
>
> The following steps assume that you've already authenticated with Spacelift locally.
First deploy Spaces and policies with the `spaces` component:
```bash
atmos terraform apply spaces -s root-gbl-spacelift
```
In the Spacelift UI, you should see each Space and each policy.
Next, deploy the `root` `admin-stack` with the following:
```bash
atmos terraform apply admin-stack -s root-gbl-spacelift
```
Now in the Spacelift UI, you should see the administrator stacks created. Typically these should look similar to the
following:
```diff
+ root-gbl-spacelift-admin-stack
+ root-gbl-spacelift-spaces
+ core-gbl-spacelift-admin-stack
+ plat-gbl-spacelift-admin-stack
+ core-ue1-auto-spacelift-worker-pool
```
> [!TIP]
>
> The `spacelift/worker-pool` component is deployed to a specific tenant, stage, and region but is still deployed by the
> root administrator stack. Verify the administrator stack by checking the `managed-by:` label.
Finally, deploy the Spacelift Worker Pool (change the stack-slug to match your configuration):
```bash
atmos terraform apply spacelift/worker-pool -s core-ue1-auto
```
### Spacelift Tenant-Specific Spaces
A tenant-specific Space in Spacelift, such as `core` or `plat`, includes the administrator stack for that specific Space
and _all_ components in the given tenant. This administrator stack uses `var.context_filters` to select all components
in the given tenant and create Spacelift stacks for each. Similar to the root administrator stack, we again create a
unique stack slug for each tenant. For example `core-gbl-spacelift` or `plat-gbl-spacelift`.
For example, configure a `core` administrator stack with `stacks/orgs/NAMESPACE/core/spacelift.yaml`.
```yaml
import:
  - mixins/region/global-region
  - orgs/NAMESPACE/core/_defaults
  - catalog/terraform/spacelift/admin-stack

vars:
  tenant: core
  environment: gbl
  stage: spacelift

components:
  terraform:
    admin-stack:
      metadata:
        component: spacelift/admin-stack
        inherits:
          - admin-stack/default
      settings:
        spacelift:
          labels: # Additional labels for this stack
            - admin-stack-name:core
      vars:
        enabled: true
        context_filters:
          tenants: ["core"]
        labels: # Additional labels added to all children
          - admin-stack-name:core # will be used to automatically create the `managed-by:stack-name` label
        child_policy_attachments:
          - TRIGGER Dependencies
```
Deploy the `core` `admin-stack` with the following:
```bash
atmos terraform apply admin-stack -s core-gbl-spacelift
```
Create the same for the `plat` tenant in `stacks/orgs/NAMESPACE/plat/spacelift.yaml`, update the tenant and
configuration as necessary, and deploy with the following:
```bash
atmos terraform apply admin-stack -s plat-gbl-spacelift
```
Now all stacks for all components should be created in the Spacelift UI.
## Triggering Spacelift Runs
Cloud Posse recommends two options to trigger Spacelift stacks.
### Triggering with Policy Attachments
Historically, all stacks were triggered with three `GIT_PUSH` policies:
1. [GIT_PUSH Global Administrator](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/blob/main/catalog/policies/git_push.administrative.rego)
triggers admin stacks
2. [GIT_PUSH Proposed Run](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/blob/main/catalog/policies/git_push.proposed-run.rego)
triggers Proposed runs (typically Terraform Plan) for all non-admin stacks on Pull Requests
3. [GIT_PUSH Tracked Run](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/blob/main/catalog/policies/git_push.tracked-run.rego)
triggers Tracked runs (typically Terraform Apply) for all non-admin stacks on merges into `main`
Attach these policies to stacks and Spacelift will trigger them on the respective git push.
### Triggering with GitHub Comments (Preferred)
Atmos support for `atmos describe affected` made it possible to greatly improve Spacelift's triggering workflow. Now we
can add a GitHub Action to collect all affected components for a given Pull Request and add a GitHub comment to the
given PR with a formatted list of the affected stacks. Then Spacelift can watch for a GitHub comment event and then
trigger stacks based on that comment.
In order to set up GitHub Comment triggers, first add the following `GIT_PUSH Plan Affected` policy to the `spaces`
component.
For example, `stacks/catalog/spacelift/spaces.yaml`
```yaml
components:
  terraform:
    spaces:
      metadata:
        component: spacelift/spaces
      settings:
        spacelift:
          administrative: true
          space_name: root
      vars:
        spaces:
          root:
            policies:
              # This policy will automatically assign itself to stacks and is used to trigger stacks directly from the `cloudposse/github-action-atmos-affected-trigger-spacelift` GitHub action
              # This is only used if said GitHub action is set to trigger on "comments"
              "GIT_PUSH Plan Affected":
                type: GIT_PUSH
                labels:
                  - autoattach:pr-comment-triggered
                body: |
                  package spacelift

                  # This policy runs whenever a comment is added to a pull request. It looks for the comment body to contain either:
                  # /spacelift preview input.stack.id
                  # /spacelift deploy input.stack.id
                  #
                  # If the comment matches those patterns it will queue a tracked run (deploy) or a proposed run (preview). In the case of
                  # a proposed run, it will also cancel all of the other pending runs for the same branch.
                  #
                  # This is used in conjunction with the GitHub Actions `atmos-trigger-spacelift-feature-branch.yaml` and
                  # `atmos-trigger-spacelift-main-branch.yaml` in .github/workflows to automatically trigger a preview or deploy run based
                  # on the `atmos describe affected` output.

                  track {
                    commented
                    contains(input.pull_request.comment, concat(" ", ["/spacelift", "deploy", input.stack.id]))
                  }

                  propose {
                    commented
                    contains(input.pull_request.comment, concat(" ", ["/spacelift", "preview", input.stack.id]))
                  }

                  # Ignore if the event is not a comment
                  ignore {
                    not commented
                  }

                  # Ignore if the PR has a `spacelift-no-trigger` label
                  ignore {
                    input.pull_request.labels[_] = "spacelift-no-trigger"
                  }

                  # Ignore if the PR is a draft and doesn't have a `spacelift-trigger` label
                  ignore {
                    input.pull_request.draft
                    not has_spacelift_trigger_label
                  }

                  has_spacelift_trigger_label {
                    input.pull_request.labels[_] == "spacelift-trigger"
                  }

                  commented {
                    input.pull_request.action == "commented"
                  }

                  cancel[run.id] {
                    run := input.in_progress[_]
                    run.type == "PROPOSED"
                    run.state == "QUEUED"
                    run.branch == input.pull_request.head.branch
                  }

                  # This is a random sample of 10% of the runs
                  sample {
                    millis := round(input.request.timestamp_ns / 1e6)
                    millis % 100 <= 10
                  }
```
This policy will automatically attach itself to _all_ components that have the `pr-comment-triggered` label, already
defined in `stacks/orgs/NAMESPACE/_defaults.yaml` under `settings.spacelift.labels`.
Next, create two new GitHub Action workflows:
```diff
+ .github/workflows/atmos-trigger-spacelift-feature-branch.yaml
+ .github/workflows/atmos-trigger-spacelift-main-branch.yaml
```
The feature branch workflow will create a comment event in Spacelift to run a Proposed run for a given stack, whereas
the main branch workflow will create a comment event in Spacelift to run a Deploy run for those same stacks.
#### Feature Branch
```yaml
name: "Plan Affected Spacelift Stacks"

on:
  pull_request:
    types:
      - opened
      - synchronize
      - reopened
    branches:
      - main

jobs:
  context:
    runs-on: ["self-hosted"]
    steps:
      - name: Atmos Affected Stacks Trigger Spacelift
        uses: cloudposse/github-action-atmos-affected-trigger-spacelift@v1
        with:
          atmos-config-path: ./rootfs/usr/local/etc/atmos
          github-token: ${{ secrets.GITHUB_TOKEN }}
```
This will add a GitHub comment such as:
```
/spacelift preview plat-ue1-sandbox-foobar
```
#### Main Branch
```yaml
name: "Deploy Affected Spacelift Stacks"

on:
  pull_request:
    types: [closed]
    branches:
      - main

jobs:
  run:
    if: github.event.pull_request.merged == true
    runs-on: ["self-hosted"]
    steps:
      - name: Atmos Affected Stacks Trigger Spacelift
        uses: cloudposse/github-action-atmos-affected-trigger-spacelift@v1
        with:
          atmos-config-path: ./rootfs/usr/local/etc/atmos
          deploy: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
          head-ref: ${{ github.sha }}~1
```
This will add a GitHub comment such as:
```
/spacelift deploy plat-ue1-sandbox-foobar
```
---
## admin-stack
This component is responsible for creating an administrative [stack](https://docs.spacelift.io/concepts/stack/) and its
corresponding child stacks in the Spacelift organization.
The component uses a series of `context_filters` to select atmos component instances to manage as child stacks.
## Usage
**Stack Level**: Global
The following are example snippets of how to use this component. For more on Spacelift admin stack usage, see the
[Spacelift README](https://docs.cloudposse.com/components/library/aws/spacelift/)
First define the default configuration for any admin stack:
```yaml
# stacks/catalog/spacelift/admin-stack.yaml
components:
  terraform:
    admin-stack/default:
      metadata:
        type: abstract
        component: spacelift/admin-stack
      settings:
        spacelift:
          administrative: true
          autodeploy: true
          before_apply:
            - spacelift-configure-paths
          before_init:
            - spacelift-configure-paths
            - spacelift-write-vars
            - spacelift-tf-workspace
          before_plan:
            - spacelift-configure-paths
          drift_detection_enabled: true
          drift_detection_reconcile: true
          drift_detection_schedule:
            - 0 4 * * *
          manage_state: false
          policies: {}
      vars:
        # Organization specific configuration
        branch: main
        repository: infrastructure
        worker_pool_name: "acme-core-ue1-auto-spacelift-worker-pool"
        runner_image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/infrastructure:latest
        spacelift_spaces_stage_name: "root"
        # These values need to be manually updated as external configuration changes
        # This should match the version set in the Dockerfile and be updated when the version changes.
        terraform_version: "1.3.6"
        # Common configuration
        administrative: true # Whether this stack can manage other stacks
        component_root: components/terraform
```
Then define the root-admin stack:
```yaml
# stacks/orgs/acme/spacelift.yaml
import:
  - mixins/region/global-region
  - orgs/acme/_defaults
  - catalog/terraform/spacelift/admin-stack
  - catalog/terraform/spacelift/spaces
# These intentionally overwrite the default values
vars:
  tenant: root
  environment: gbl
  stage: spacelift
components:
  terraform:
    # This admin stack creates other "admin" stacks
    admin-stack:
      metadata:
        component: spacelift/admin-stack
        inherits:
          - admin-stack/default
      settings:
        spacelift:
          root_administrative: true
          labels:
            - root-admin
            - admin
      vars:
        enabled: true
        root_admin_stack: true # This stack will be created in the root space and will create all the other admin stacks as children.
        context_filters: # context_filters determine which child stacks to manage with this admin stack
          administrative: true # This stack is managing all the other admin stacks
          root_administrative: false # We don't want this stack to also find itself in the config and add itself a second time
        labels:
          - admin
        # attachments only on the root stack
        root_stack_policy_attachments:
          - TRIGGER Global administrator
        # this creates policies for the children (admin) stacks
        child_policy_attachments:
          - TRIGGER Global administrator
```
Finally, define any tenant-specific stacks:
```yaml
# stacks/orgs/acme/core/spacelift.yaml
import:
  - mixins/region/global-region
  - orgs/acme/core/_defaults
  - catalog/terraform/spacelift/admin-stack
vars:
  tenant: core
  environment: gbl
  stage: spacelift
components:
  terraform:
    admin-stack:
      metadata:
        component: spacelift/admin-stack
        inherits:
          - admin-stack/default
      settings:
        spacelift:
          labels: # Additional labels for this stack
            - admin-stack-name:core
      vars:
        enabled: true
        context_filters:
          tenants: ["core"]
        labels: # Additional labels added to all children
          - admin-stack-name:core # will be used to automatically create the `managed-by:stack-name` label
        child_policy_attachments:
          - TRIGGER Dependencies
```
## Variables
### Required Variables
`component_root` (`string`) required
The path, relative to the root of the repository, where the component can be found
### Optional Variables
`admin_stack_label` (`string`) optional
Label to use to identify the admin stack when creating the child stacks
**Default value:** `"admin-stack-name"`
`allow_public_workers` (`bool`) optional
Whether to allow public workers to be used for this stack
**Default value:** `false`
`autodeploy` (`bool`) optional
Controls the Spacelift 'autodeploy' option for a stack
**Default value:** `false`
`autoretry` (`bool`) optional
Controls the Spacelift 'autoretry' option for a stack
**Default value:** `false`
`aws_role_arn` (`string`) optional
ARN of the AWS IAM role to assume and put its temporary credentials in the runtime environment
**Default value:** `null`
`aws_role_enabled` (`bool`) optional
Flag to enable/disable Spacelift to use AWS STS to assume the supplied IAM role and put its temporary credentials in the runtime environment
**Default value:** `false`
`aws_role_external_id` (`string`) optional
Custom external ID (works only for private workers). See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html for more details
**Default value:** `null`
`child_policy_attachments` (`set(string)`) optional
List of policy attachments to attach to the child stacks created by this module
**Default value:** `[ ]`
`cloudformation` (`map(any)`) optional
CloudFormation-specific configuration. Presence means this Stack is a CloudFormation Stack.
**Default value:** `null`
`commit_sha` (`string`) optional
The commit SHA for which to trigger a run. Requires `var.spacelift_run_enabled` to be set to `true`
**Default value:** `null`
`component_env` (`any`) optional
Map of component ENV variables
**Default value:** `{ }`
`component_vars` (`any`) optional
All Terraform values to be applied to the stack via a mounted file
**Default value:** `{ }`
`context_attachments` (`list(string)`) optional
A list of context IDs to attach to this stack
**Default value:** `[ ]`
`description` (`string`) optional
Specify description of stack
**Default value:** `null`
`drift_detection_enabled` (`bool`) optional
Flag to enable/disable drift detection on the infrastructure stacks
**Default value:** `false`
`drift_detection_reconcile` (`bool`) optional
Flag to enable/disable infrastructure stacks drift automatic reconciliation. If drift is detected and `reconcile` is turned on, Spacelift will create a tracked run to correct the drift
**Default value:** `false`
`labels` (`list(string)`) optional
A list of labels for the stack
**Default value:** `[ ]`
`local_preview_enabled` (`bool`) optional
Indicates whether local preview runs can be triggered on this Stack
**Default value:** `false`
`manage_state` (`bool`) optional
Flag to enable/disable manage_state setting in stack
**Default value:** `false`
`protect_from_deletion` (`bool`) optional
Flag to enable/disable deletion protection.
**Default value:** `false`
`pulumi` (`map(any)`) optional
Pulumi-specific configuration. Presence means this Stack is a Pulumi Stack.
**Default value:** `null`
`root_admin_stack` (`bool`) optional
Flag to indicate if this stack is the root admin stack. In this case, the stack will be created in the root space and will create all the other admin stacks as children.
**Default value:** `false`
`spacelift_stack_dependency_enabled` (`bool`) optional
If enabled, the `spacelift_stack_dependency` Spacelift resource will be used to create dependencies between stacks instead of using the `depends-on` labels. The `depends-on` labels will be removed from the stacks and the trigger policies for dependencies will be detached
**Default value:** `false`
`stack_destructor_enabled` (`bool`) optional
Flag to enable/disable the stack destructor to destroy the resources of the stack before deleting the stack itself
**Default value:** `false`
`stack_name` (`string`) optional
The name of the Spacelift stack
**Default value:** `null`
`terraform_smart_sanitization` (`bool`) optional
Whether or not to enable [Smart Sanitization](https://docs.spacelift.io/vendors/terraform/resource-sanitization) which will only sanitize values marked as sensitive.
**Default value:** `false`
`terraform_version` (`string`) optional
Specify the version of Terraform to use for the stack
**Default value:** `null`
`terraform_version_map` (`map(string)`) optional
A map to determine which Terraform patch version to use for each minor version
**Default value:** `{ }`
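For illustration, each `terraform_version_map` entry keys a minor version to the full patch release to use (the versions shown here are hypothetical):

```hcl
terraform_version_map = {
  "1.3" = "1.3.6"
  "1.4" = "1.4.7"
}
```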
`terraform_workflow_tool` (`string`) optional
Defines the tool that will be used to execute the workflow. This can be one of OPEN_TOFU, TERRAFORM_FOSS or CUSTOM. Defaults to TERRAFORM_FOSS.
**Default value:** `"TERRAFORM_FOSS"`
`terraform_workspace` (`string`) optional
Specify the Terraform workspace to use for the stack
**Default value:** `null`
`webhook_enabled` (`bool`) optional
Flag to enable/disable the webhook endpoint to which Spacelift sends the POST requests about run state changes
**Default value:** `false`
`webhook_endpoint` (`string`) optional
Webhook endpoint to which Spacelift sends the POST requests about run state changes
**Default value:** `null`
`webhook_secret` (`string`) optional
Webhook secret used to sign each POST request so you're able to verify that the requests come from Spacelift
**Default value:** `null`
`worker_pool_name` (`string`) optional
The atmos stack name of the worker pool. Example: `acme-core-ue2-auto-spacelift-default-worker-pool`
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
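As an illustration (the descriptor name and format string are hypothetical), a descriptor that renders `namespace` and `stage` joined by an underscore could be declared as:

```hcl
descriptor_formats = {
  account_name = {
    format = "%v_%v"                # format string passed to Terraform's format()
    labels = ["namespace", "stage"] # label values, normalized as in `id`
  }
}
```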
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`child_stacks`
All children stacks managed by this component
`root_stack`
The root stack, if enabled and created by this component
`root_stack_id`
The ID of the root stack
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3`
- `aws`, version: `>= 4.0, < 6.0.0`
- `null`, version: `>= 3.0`
- `spacelift`, version: `>= 0.1.31`
- `utils`, version: `>= 1.14.0`
### Providers
- `null`, version: `>= 3.0`
- `spacelift`, version: `>= 0.1.31`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`all_admin_stacks_config` | 1.7.3 | [`cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stacks-from-atmos-config`](https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/spacelift/modules/spacelift-stacks-from-atmos-config/1.7.3) | This gets the atmos stack config for all of the administrative stacks
`child_stack` | 1.7.3 | [`cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stack`](https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/spacelift/modules/spacelift-stack/1.7.3) | n/a
`child_stacks_config` | 1.7.3 | [`cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stacks-from-atmos-config`](https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/spacelift/modules/spacelift-stacks-from-atmos-config/1.7.3) | Get all of the stack configurations from the atmos config that matched the context_filters and create a stack for each one.
`root_admin_stack` | 1.7.3 | [`cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stack`](https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/spacelift/modules/spacelift-stack/1.7.3) | n/a
`root_admin_stack_config` | 1.7.3 | [`cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stacks-from-atmos-config`](https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/spacelift/modules/spacelift-stacks-from-atmos-config/1.7.3) | The root admin stack is a special stack that is used to manage all of the other admin stacks in the Spacelift organization. This stack is denoted by setting the root_administrative property to true in the atmos config. Only one such stack is allowed in the Spacelift organization.
`spaces` | 2.0.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/2.0.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`null_resource.child_stack_parent_precondition`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
- [`null_resource.public_workers_precondition`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
- [`null_resource.spaces_precondition`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
- [`null_resource.workers_precondition`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
- [`spacelift_policy_attachment.root`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/policy_attachment) (resource)
## Data Sources
The following data sources are used by this module:
- [`spacelift_policies.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/data-sources/policies) (data source)
- [`spacelift_stacks.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/data-sources/stacks) (data source)
- [`spacelift_worker_pools.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/data-sources/worker_pools) (data source)
---
## idp-group-mappings
This component is responsible for creating and managing the IdP group mappings within the Spacelift organization. It ensures that Identity Provider (IdP) groups are correctly mapped to specific roles across designated Spacelift spaces, enabling precise access control and role-based permissions.
## Usage
**Stack Level**: Global
Here's an example snippet showing how to use this component:
```yaml
# stacks/catalog/spacelift/idp-group-mappings.yaml
components:
  terraform:
    idp-group-mappings:
      metadata:
        component: spacelift/idp-group-mappings
      settings:
        spacelift:
          enabled: true
      vars:
        spacelift_spaces_tenant_name: root
        spacelift_spaces_environment_name: gbl
        spacelift_spaces_stage_name: spacelift
        spacelift_spaces_component_name: spaces
        # The keys must match the group names from the IdP provider
        idp_group_mappings:
          spacelift-admin:
            spacelift_role_name: "ADMIN"
            spaces:
              - dev
              - staging
              - prod
          spacelift-writer:
            spacelift_role_name: "WRITE"
            spaces:
              - dev
              - staging
              - prod
          spacelift-reader:
            spacelift_role_name: "READ"
            spaces:
              - dev
              - staging
              - prod
```
## Variables
### Required Variables
### Optional Variables
`idp_group_mappings` optional
Map of IDP group mappings with role names and associated spaces. The key is the IDP group name.
**Type:**
```hcl
map(object({
  spacelift_role_name = string
  spaces              = list(string)
}))
```
**Default value:** `{ }`
`spacelift_spaces_tenant_name` (`string`) optional
The tenant name of the spacelift spaces component
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`id`
The ID of the component
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3`
- `spacelift`, version: `>= 0.1.31`
### Providers
- `spacelift`, version: `>= 0.1.31`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`spaces` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`spacelift_idp_group_mapping.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/idp_group_mapping) (resource)
## Data Sources
This module does not use any data sources.
---
## spaces
This component is responsible for creating and managing the [spaces](https://docs.spacelift.io/concepts/spaces/) in the
Spacelift organization.
## Usage
**Stack Level**: Global
The following are example snippets of how to use this component:
```yaml
# stacks/catalog/spacelift/spaces.yaml
components:
  terraform:
    spaces:
      metadata:
        component: spacelift/spaces
      settings:
        spacelift:
          administrative: true
          space_name: root
      vars:
        spaces:
          # root is a special space that is the parent of all other spaces and cannot be deleted or renamed. Only the
          # policies block is actually consumed by the component to create policies for the root space.
          root:
            parent_space_id: root
            description: The root space
            inherit_entities: true
            policies:
              GIT_PUSH Global Administrator:
                type: GIT_PUSH
                body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/git_push.administrative.rego
              TRIGGER Global Administrator:
                type: TRIGGER
                body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/trigger.administrative.rego
              GIT_PUSH Proposed Run:
                type: GIT_PUSH
                body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/git_push.proposed-run.rego
              GIT_PUSH Tracked Run:
                type: GIT_PUSH
                body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/git_push.tracked-run.rego
              PLAN Default:
                type: PLAN
                body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/plan.default.rego
              TRIGGER Dependencies:
                type: TRIGGER
                body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/trigger.dependencies.rego
              PLAN Warn On Resource Changes Except Image ID:
                type: PLAN
                body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/plan.warn-on-resource-changes-except-image-id.rego
          core:
            parent_space_id: root
            description: The space for the core tenant
            inherit_entities: true
            labels:
              - core
          plat:
            parent_space_id: root
            description: The space for the platform tenant
            inherit_entities: true
            labels:
              - plat
```
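Note that the `%s` placeholder in each `body_url` above is interpolated with that policy's `body_url_version` (default `master`), so a specific release of the upstream rego policies can be pinned. For example (the version shown is hypothetical):

```yaml
policies:
  TRIGGER Global Administrator:
    type: TRIGGER
    body_url: https://raw.githubusercontent.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/%s/catalog/policies/trigger.administrative.rego
    body_url_version: "1.6.0" # substituted for %s in body_url
```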
## Variables
### Required Variables
`spaces` required
A map of all Spaces to create in Spacelift
**Type:**
```hcl
map(object({
  parent_space_id  = string,
  description      = optional(string),
  inherit_entities = optional(bool, false),
  labels           = optional(set(string), []),
  policies = optional(map(object({
    body             = optional(string),
    body_url         = optional(string),
    body_url_version = optional(string, "master"),
    body_file_path   = optional(string),
    type             = optional(string),
    labels           = optional(set(string), []),
  })), {}),
}))
```
### Optional Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
  format = string
  labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`policies`
The policies created by this component
`spaces`
The spaces created by this component
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3`
- `aws`, version: `>= 4.0, < 6.0.0`
- `spacelift`, version: `>= 0.1.31`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`policy` | 1.7.3 | [`cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-policy`](https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/spacelift/modules/spacelift-policy/1.7.3) | n/a
`space` | 1.7.3 | [`cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-space`](https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/spacelift/modules/spacelift-space/1.7.3) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## worker-pool-asg
This component provisions Spacelift worker pools on AWS using an Auto Scaling Group.
By default, workers are granted pull access to the configured ECR and permission to assume the `spacelift` team role in the identity account (ensure the `spacelift` team in the identity account allows this via `trusted_role_arns`). Workers also get these AWS managed IAM policies:
- AmazonSSMManagedInstanceCore
- AutoScalingReadOnlyAccess
- AWSXRayDaemonWriteAccess
- CloudWatchAgentServerPolicy
With SSM agent installed, workers can be accessed via SSM Session Manager.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
# stacks/catalog/spacelift/worker-pool.yaml
components:
  terraform:
    spacelift/worker-pool:
      settings:
        spacelift:
          administrative: true
          space_name: root
      vars:
        enabled: true
        spacelift_api_endpoint: https://<your-subdomain>.app.spacelift.io
        spacelift_spaces_tenant_name: "acme"
        spacelift_spaces_environment_name: "gbl"
        spacelift_spaces_stage_name: "root"
        account_map_tenant_name: core
        ecr_environment_name: ue1
        ecr_repo_name: infrastructure
        ecr_stage_name: artifacts
        ecr_tenant_name: core
        # Set a low scaling threshold to ensure new workers are launched as soon as the current one(s) are busy
        cpu_utilization_high_threshold_percent: 10
        cpu_utilization_low_threshold_percent: 5
        default_cooldown: 300
        desired_capacity: null
        health_check_grace_period: 300
        health_check_type: EC2
        infracost_enabled: true
        instance_type: t3.small
        max_size: 3
        min_size: 1
        name: spacelift-worker-pool
        scale_down_cooldown_seconds: 2700
        spacelift_agents_per_node: 1
        wait_for_capacity_timeout: 5m
        block_device_mappings:
          - device_name: "/dev/xvda"
            no_device: null
            virtual_name: null
            ebs:
              delete_on_termination: null
              encrypted: false
              iops: null
              kms_key_id: null
              snapshot_id: null
              volume_size: 100
              volume_type: "gp2"
```
To connect to a worker via SSM Session Manager, use:
```bash
aws ssm start-session --target <instance-id>
```
### Impacts on billing
While scaling the workload for Spacelift, keep in mind that each agent connection counts against your quota of self-hosted workers. The number of EC2 instances you have running is not going to affect your Spacelift bill. For example, if you had 3 EC2 instances in your Spacelift worker pool, and you configured `spacelift_agents_per_node` to be `3`, you would see your Spacelift bill report 9 agents being run. Take care while configuring the worker pool for your Spacelift infrastructure.
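The agent arithmetic above can be sketched as a quick calculation (a minimal illustration, not part of the component):

```python
def billable_agents(ec2_instances: int, agents_per_node: int) -> int:
    """Agents counted against the Spacelift self-hosted worker quota.

    Spacelift bills per connected agent, so the billable count is the
    number of worker nodes times spacelift_agents_per_node.
    """
    return ec2_instances * agents_per_node

# The example from the text: 3 EC2 instances, 3 agents per node
print(billable_agents(3, 3))  # 9 agents billed
```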
## Configuration
### Docker Image on ECR
Build and tag a Docker image for this repository and push to ECR. Ensure the account where this component is deployed has read-only access to the ECR repository.
### API Key
Prior to deployment, the API key must exist in SSM. The key must have admin permissions.
To generate the key, please follow [these instructions](https://docs.spacelift.io/integrations/api.html#spacelift-api-key-token). Once generated, write the API key ID and secret to the SSM key store at the following locations within the same AWS account and region where the Spacelift worker pool will reside.
| Key | SSM Path | Type |
| ------- | ----------------------- | -------------- |
| API ID | `/spacelift/key_id` | `SecureString` |
| API Key | `/spacelift/key_secret` | `SecureString` |
Hint: The API key ID is displayed as an upper-case, 16-character alphanumeric value next to the key name in the API key list.
Save the keys using `chamber` using the correct profile for where the Spacelift worker pool is provisioned:
```bash
AWS_PROFILE=acme-gbl-auto-admin chamber write spacelift key_id 1234567890123456
AWS_PROFILE=acme-gbl-auto-admin chamber write spacelift key_secret abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
```
### IAM configuration
After provisioning the component, you must give the created instance role permission to assume the Spacelift worker role. This is done by adding `iam_role_arn` from the output to the `trusted_role_arns` list for the `spacelift` role in `aws-teams`.
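Assuming your `aws-teams` configuration follows the usual stack conventions, that wiring might look like the following sketch (the file path, account ID, and role ARN are illustrative placeholders, not values produced by this component):

```yaml
# stacks/catalog/aws-teams.yaml (illustrative)
components:
  terraform:
    aws-teams:
      vars:
        teams_config:
          spacelift:
            # Add the `iam_role_arn` output of the worker pool component here
            trusted_role_arns:
              - "arn:aws:iam::111111111111:role/acme-gbl-auto-spacelift-worker-pool"
```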
## Variables
### Required Variables
`spacelift_api_endpoint` (`string`) required
The Spacelift API endpoint URL (e.g. https://example.app.spacelift.io)
`wait_for_capacity_timeout` (`string`) required
A maximum duration that Terraform should wait for ASG instances to be healthy before timing out. (See also Waiting for Capacity below.) Setting this to '0' causes Terraform to skip all Capacity Waiting behavior
### Optional Variables
`account_map_environment_name` (`string`) optional
The name of the environment where `account_map` is provisioned
**Default value:** `"gbl"`
`account_map_stage_name` (`string`) optional
The name of the stage where `account_map` is provisioned
**Default value:** `"root"`
`account_map_tenant_name` (`string`) optional
The name of the tenant where `account_map` is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
`architecture` (`list(string)`) optional
OS architecture of the EC2 instance AMI
**Default value:**
```hcl
[
  "x86_64"
]
```
`aws_config_file` (`string`) optional
The AWS_CONFIG_FILE used by the worker. Can be overridden by `/.spacelift/config.yml`.
**Default value:** `"/etc/aws-config/aws-config-spacelift"`
`aws_profile` (`string`) optional
The AWS_PROFILE used by the worker. If not specified, `"${var.namespace}-identity"` will be used.
Can be overridden by `/.spacelift/config.yml`.
**Default value:** `null`
`block_device_mappings` optional
Specify volumes to attach to the instance besides the volumes specified by the AMI
**Type:**
```hcl
list(object({
  device_name  = string
  no_device    = bool
  virtual_name = string
  ebs = object({
    delete_on_termination = bool
    encrypted             = bool
    iops                  = number
    kms_key_id            = string
    snapshot_id           = string
    volume_size           = number
    volume_type           = string
  })
}))
```
**Default value:** `[ ]`
`custom_spacelift_ami` (`bool`) optional
Custom spacelift AMI
**Default value:** `false`
`default_cooldown` (`number`) optional
The amount of time, in seconds, after a scaling activity completes before another scaling activity can start
**Default value:** `300`
`desired_capacity` (`number`) optional
The number of Amazon EC2 instances that should be running in the group, if not set will use `min_size` as value
**Default value:** `null`
`ebs_optimized` (`bool`) optional
If true, the launched EC2 instance will be EBS-optimized
**Default value:** `false`
`ecr_environment_name` (`string`) optional
The name of the environment where `ecr` is provisioned
**Default value:** `""`
`ecr_region` (`string`) optional
AWS region that contains the ECR infrastructure repo
**Default value:** `""`
`ecr_stage_name` (`string`) optional
The name of the stage where `ecr` is provisioned
**Default value:** `"artifacts"`
`ecr_tenant_name` (`string`) optional
The name of the tenant where `ecr` is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
`github_netrc_enabled` (`bool`) optional
Whether to create a GitHub .netrc file so Spacelift can clone private GitHub repositories.
**Default value:** `false`
`github_netrc_ssm_path_token` (`string`) optional
If `github_netrc` is enabled, this is the SSM path to retrieve the GitHub token.
**Default value:** `"/github/token"`
`github_netrc_ssm_path_user` (`string`) optional
If `github_netrc` is enabled, this is the SSM path to retrieve the GitHub user
**Default value:** `"/github/user"`
`health_check_grace_period` (`number`) optional
Time (in seconds) after instance comes into service before checking health
**Default value:** `300`
`health_check_type` (`string`) optional
Controls how health checking is done. Valid values are `EC2` or `ELB`
**Default value:** `"EC2"`
`iam_attributes` (`list(string)`) optional
Additional attributes to add to the IDs of the IAM role and policy
**Default value:** `[ ]`
`infracost_api_token_ssm_path` (`string`) optional
This is the SSM path to retrieve and set the INFRACOST_API_TOKEN environment variable
**Default value:** `"/infracost/token"`
`infracost_cli_args` (`string`) optional
These are the CLI args passed to infracost
**Default value:** `""`
`infracost_enabled` (`bool`) optional
Whether to enable infracost for Spacelift stacks
**Default value:** `false`
`infracost_warn_on_failure` (`bool`) optional
A failure executing Infracost, or a non-zero exit code returned from the command, will cause runs to fail. If this is true, such a failure will only produce a warning instead of failing the stack.
**Default value:** `true`
`instance_lifetime` (`number`) optional
Number of seconds after which the instance will be terminated. The default is set to 14 days.
**Default value:** `1209600`
`instance_refresh` optional
The instance refresh definition. If this block is configured, an Instance Refresh will be started when the Auto Scaling Group is updated
**Type:**
```hcl
object({
  strategy = string
  preferences = object({
    instance_warmup        = optional(number, null)
    min_healthy_percentage = optional(number, null)
    skip_matching          = optional(bool, null)
    auto_rollback          = optional(bool, null)
  })
  triggers = optional(list(string), [])
})
```
**Default value:** `null`
`instance_type` (`string`) optional
EC2 instance type to use for workers
**Default value:** `"r5n.large"`
`launch_template_version` (`string`) optional
Launch template version to use for workers. Note that instance refresh settings are IGNORED unless template version is empty
**Default value:** `"$Latest"`
`mixed_instances_policy` optional
Policy to use a mixed group of on-demand/spot of different types. Launch template is automatically generated. https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#mixed_instances_policy-1
**Type:**
```hcl
object({
  instances_distribution = object({
    on_demand_allocation_strategy            = string
    on_demand_base_capacity                  = number
    on_demand_percentage_above_base_capacity = number
    spot_allocation_strategy                 = string
    spot_instance_pools                      = number
    spot_max_price                           = string
  })
  override = list(object({
    instance_type     = string
    weighted_capacity = number
  }))
})
```
**Default value:** `null`
`scale_down_cooldown_seconds` (`number`) optional
The amount of time, in seconds, after a scaling activity completes and before the next scaling activity can start
**Default value:** `300`
`space_name` (`string`) optional
The name of the Space to create the worker pool in
**Default value:** `"root"`
`spacelift_agents_per_node` (`number`) optional
Number of Spacelift agents to run on one worker node. NOTE: This affects billable units. Spacelift charges per agent.
**Default value:** `1`
`spacelift_ami_id` (`string`) optional
AMI ID of Spacelift worker pool image
**Default value:** `null`
`spacelift_aws_account_id` (`string`) optional
AWS Account ID owned by Spacelift
**Default value:** `"643313122712"`
`spacelift_domain_name` (`string`) optional
Top-level domain name to use for pulling the launcher binary
**Default value:** `"spacelift.io"`
`spacelift_runner_image` (`string`) optional
URL of ECR image to use for Spacelift
**Default value:** `""`
`spacelift_spaces_tenant_name` (`string`) optional
The tenant name of the spacelift spaces component
**Default value:** `null`
`termination_policies` (`list(string)`) optional
A list of policies to decide how the instances in the auto scale group should be terminated. The allowed values are `OldestInstance`, `NewestInstance`, `OldestLaunchConfiguration`, `ClosestToNextInstanceHour`, `Default`
**Default value:**
```hcl
[
  "OldestLaunchConfiguration"
]
```
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
  "default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`autoscaling_group_arn`
The ARN for this AutoScaling Group
`autoscaling_group_default_cooldown`
Time between a scaling activity and the succeeding scaling activity
`autoscaling_group_health_check_grace_period`
Time after instance comes into service before checking health
`autoscaling_group_health_check_type`
`EC2` or `ELB`. Controls how health checking is done
`autoscaling_group_id`
The autoscaling group id
`autoscaling_group_max_size`
The maximum size of the autoscale group
`autoscaling_group_min_size`
The minimum size of the autoscale group
`autoscaling_group_name`
The autoscaling group name
`iam_role_arn`
Spacelift IAM Role ARN
`iam_role_id`
Spacelift IAM Role ID
`iam_role_name`
Spacelift IAM Role name
`launch_template_arn`
The ARN of the launch template
`launch_template_id`
The ID of the launch template
`security_group_arn`
Spacelift Security Group ARN
`security_group_id`
Spacelift Security Group ID
`security_group_name`
Spacelift Security Group Name
`worker_pool_id`
Spacelift worker pool ID
`worker_pool_name`
Spacelift worker pool name
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `cloudinit`, version: `>= 2.2.0`
- `spacelift`, version: `>= 0.1.2`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `cloudinit`, version: `>= 2.2.0`
- `spacelift`, version: `>= 0.1.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`autoscale_group` | 0.43.1 | [`cloudposse/ec2-autoscale-group/aws`](https://registry.terraform.io/modules/cloudposse/ec2-autoscale-group/aws/0.43.1) | n/a
`ecr` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` (local module) | n/a
`security_group` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | n/a
`spaces` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_instance_profile.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile) (resource)
- [`aws_iam_policy.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) (resource)
- [`aws_iam_role.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`spacelift_worker_pool.primary`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/worker_pool) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ami.spacelift`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami) (data source)
- [`aws_iam_policy_document.assume_role_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`aws_ssm_parameter.spacelift_key_id`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.spacelift_key_secret`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`cloudinit_config.config`](https://registry.terraform.io/providers/hashicorp/cloudinit/latest/docs/data-sources/config) (data source)
---
## sqs-queue
This component is responsible for creating an SQS queue.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
  terraform:
    sqs-queue/defaults:
      vars:
        enabled: true
        # org defaults
    sqs-queue:
      metadata:
        component: sqs-queue
      inherits:
        - sqs-queue/defaults
      vars:
        name: sqs
        visibility_timeout_seconds: 30
        message_retention_seconds: 86400 # 1 day
        delay_seconds: 0
        max_message_size_bytes: 262144
        receive_wait_time_seconds: 0
        fifo_queue: false
        content_based_deduplication: false
        dlq_enabled: true
        dlq_name_suffix: "dead-letter" # default is dlq
        dlq_max_receive_count: 1
        dlq_kms_data_key_reuse_period_seconds: 86400 # 1 day
        kms_data_key_reuse_period_seconds: 86400 # 1 day
        # kms_master_key_id: "alias/aws/sqs" # Use KMS # default null
        sqs_managed_sse_enabled: true # SSE vs KMS (Priority goes to KMS)
        iam_policy_limit_to_current_account: true # default true
        iam_policy:
          - version: 2012-10-17
            policy_id: Allow-S3-Event-Notifications
            statements:
              - sid: Allow-S3-Event-Notifications
                effect: Allow
                principals:
                  - type: Service
                    identifiers: ["s3.amazonaws.com"]
                actions:
                  - SQS:SendMessage
                resources: [] # auto includes this queue's ARN
                conditions:
                  ## this is included when `iam_policy_limit_to_current_account` is true
                  #- test: StringEquals
                  #  variable: aws:SourceAccount
                  #  value: "1234567890"
                  - test: ArnLike
                    variable: aws:SourceArn
                    values:
                      - "arn:aws:s3:::*"
```
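To make the rendered result concrete, here is a rough Python sketch of the queue policy JSON that the `iam_policy` input above corresponds to. This is an approximation for illustration only; the component actually renders the policy through the `aws_iam_policy_document` data source, and the queue ARN and account ID below are placeholders.

```python
import json

def sqs_queue_policy(queue_arn, source_account=None):
    """Approximate the queue policy produced from the iam_policy input above.

    When iam_policy_limit_to_current_account is true, the component adds an
    aws:SourceAccount condition; that is modeled here by source_account.
    """
    conditions = {"ArnLike": {"aws:SourceArn": "arn:aws:s3:::*"}}
    if source_account is not None:
        conditions["StringEquals"] = {"aws:SourceAccount": source_account}
    return {
        "Version": "2012-10-17",
        "Id": "Allow-S3-Event-Notifications",
        "Statement": [{
            "Sid": "Allow-S3-Event-Notifications",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "SQS:SendMessage",
            # The component automatically includes this queue's ARN
            "Resource": queue_arn,
            "Condition": conditions,
        }],
    }

policy = sqs_queue_policy("arn:aws:sqs:us-east-1:111111111111:example", "111111111111")
print(json.dumps(policy, indent=2))
```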
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`content_based_deduplication` (`bool`) optional
Enables content-based deduplication for FIFO queues. For more information, see the [related documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html#FIFO-queues-exactly-once-processing)
**Default value:** `false`
`create_dlq_redrive_allow_policy` (`bool`) optional
Determines whether to create a redrive allow policy for the dead letter queue.
**Default value:** `true`
`deduplication_scope` (`string`) optional
Specifies whether message deduplication occurs at the message group or queue level
**Default value:** `null`
`delay_seconds` (`number`) optional
The time in seconds that the delivery of all messages in the queue will be delayed. An integer from 0 to 900 (15 minutes). The default for this attribute is 0 seconds.
**Default value:** `0`
`dlq_kms_data_key_reuse_period_seconds` (`number`) optional
The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours)
**Default value:** `null`
`dlq_kms_master_key_id` (`string`) optional
The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK
**Default value:** `null`
`dlq_max_receive_count` (`number`) optional
The number of times a message can be unsuccessfully dequeued before being moved to the Dead Letter Queue.
**Default value:** `5`
`dlq_receive_wait_time_seconds` (`number`) optional
The time for which a ReceiveMessage call will wait for a message to arrive (long polling) before returning. An integer from 0 to 20 (seconds)
**Default value:** `null`
`dlq_redrive_allow_policy` (`any`) optional
The JSON policy to set up the Dead Letter Queue redrive permission, see AWS docs.
**Default value:** `{ }`
`dlq_sqs_managed_sse_enabled` (`bool`) optional
Boolean to enable server-side encryption (SSE) of message content with SQS-owned encryption keys
**Default value:** `true`
`dlq_tags` (`map(string)`) optional
A mapping of additional tags to assign to the dead letter queue
**Default value:** `{ }`
`dlq_visibility_timeout_seconds` (`number`) optional
The visibility timeout for the queue. An integer from 0 to 43200 (12 hours)
**Default value:** `null`
`fifo_queue` (`bool`) optional
Boolean designating a FIFO queue. If not set, it defaults to `false`, making the queue standard.
**Default value:** `false`
`fifo_throughput_limit` (`string`) optional
Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are perQueue and perMessageGroupId. This can be specified if fifo_queue is true.
**Default value:** `null`
`iam_policy` optional
IAM policy as list of Terraform objects, compatible with Terraform `aws_iam_policy_document` data source
except that `source_policy_documents` and `override_policy_documents` are not included.
Use inputs `iam_source_policy_documents` and `iam_override_policy_documents` for that.
**Type:**
```hcl
list(object({
  policy_id = optional(string, null)
  version   = optional(string, null)
  statements = list(object({
    sid           = optional(string, null)
    effect        = optional(string, null)
    actions       = optional(list(string), null)
    not_actions   = optional(list(string), null)
    resources     = optional(list(string), null)
    not_resources = optional(list(string), null)
    conditions = optional(list(object({
      test     = string
      variable = string
      values   = list(string)
    })), [])
    principals = optional(list(object({
      type        = string
      identifiers = list(string)
    })), [])
    not_principals = optional(list(object({
      type        = string
      identifiers = list(string)
    })), [])
  }))
}))
```
**Default value:** `[ ]`
`kms_data_key_reuse_period_seconds` (`number`) optional
The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). The default is 300 (5 minutes).
**Default value:** `300`
`kms_master_key_id` (`string`) optional
The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. For more information, see Key Terms.
**Default value:** `null`
`max_message_size` (`number`) optional
The limit of how many bytes a message can contain before Amazon SQS rejects it. An integer from 1024 bytes (1 KiB) up to 1048576 bytes (1024 KiB). The default for this attribute is 262144 (256 KiB).
**Default value:** `262144`
`message_retention_seconds` (`number`) optional
The number of seconds Amazon SQS retains a message. Integer representing seconds, from 60 (1 minute) to 1209600 (14 days). The default for this attribute is 345600 (4 days).
**Default value:** `345600`
`receive_wait_time_seconds` (`number`) optional
The time for which a ReceiveMessage call will wait for a message to arrive (long polling) before returning. An integer from 0 to 20 (seconds). The default for this attribute is 0, meaning that the call will return immediately.
**Default value:** `0`
`sqs_managed_sse_enabled` (`bool`) optional
Boolean to enable server-side encryption (SSE) of message content with SQS-owned encryption keys
**Default value:** `true`
`visibility_timeout_seconds` (`number`) optional
The visibility timeout for the queue. An integer from 0 to 43200 (12 hours). The default for this attribute is 30. For more information about visibility timeout, see AWS docs.
**Default value:** `30`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`sqs_queue`
The SQS queue.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`queue_policy` | 2.0.2 | [`cloudposse/iam-policy/aws`](https://registry.terraform.io/modules/cloudposse/iam-policy/aws/2.0.2) | n/a
`sqs` | 4.3.1 | [`terraform-aws-modules/sqs/aws`](https://registry.terraform.io/modules/terraform-aws-modules/sqs/aws/4.3.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_sqs_queue_policy.sqs_queue_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sqs_queue_policy) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
---
## ssm-parameters
This component is responsible for provisioning AWS SSM Parameter Store resources. It supports plain parameters
configured directly in YAML as well as secret values pulled from a local SOPS file.
## Usage
**Stack Level**: Regional
Here are some example snippets for how to use this component:
`stacks/dev/us-east-1.yaml` file:
```yaml
components:
terraform:
ssm-parameters:
vars:
sops_source_file: ../../config/secrets/dev.yaml
sops_source_key: ssm_params
params:
/DEV/TESTING:
value: This is a test of the emergency broadcast system.
description: This is a test.
overwrite: true
type: String
```
## Variables
### Required Variables
`params` required
A map of parameter values to write to SSM Parameter Store
**Type:**
```hcl
map(object({
value = string
description = string
overwrite = optional(bool, false)
tier = optional(string, "Standard")
type = string
ignore_value_changes = optional(bool, false)
}))
```
`region` (`string`) required
AWS Region
### Optional Variables
`kms_arn` (`string`) optional
The ARN of a KMS key used to encrypt and decrypt SecretString values
**Default value:** `""`
`sops_source_file` (`string`) optional
The relative path to the SOPS file which is consumed as the source for creating parameter resources.
**Default value:** `""`
`sops_source_key` (`string`) optional
The SOPS key to pull from the source file.
**Default value:** `""`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`created_params`
The keys of created SSM parameter store resources.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 4.0, < 6.0.0`
- `sops`, version: `>= 0.5, < 1.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
- `sops`, version: `>= 0.5, < 1.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.destination`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.destination_ignored`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
## Data Sources
The following data sources are used by this module:
- [`sops_file.source`](https://registry.terraform.io/providers/carlpett/sops/latest/docs/data-sources/file) (data source)
---
## sso-saml-provider
This component reads SSO credentials from AWS SSM Parameter Store and provides them as outputs.
## Usage
**Stack Level**: Regional
Use this in the catalog, or use these variables to override the catalog values.
```yaml
components:
terraform:
sso-saml-provider:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
ssm_path_prefix: "/sso/saml/google"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`ssm_path_prefix` (`string`) required
Top level SSM path prefix (without leading or trailing slash)
### Optional Variables
`emailAttr` (`string`) optional
Email attribute
**Default value:** `null`
`groupsAttr` (`string`) optional
Group attribute
**Default value:** `null`
`usernameAttr` (`string`) optional
User name attribute
**Default value:** `null`
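If your identity provider exposes custom attribute names, the optional variables can be set alongside the required ones. The attribute values below are placeholders for a hypothetical Google Workspace setup (the path here omits the leading slash, per the `ssm_path_prefix` description):

```yaml
components:
  terraform:
    sso-saml-provider:
      vars:
        enabled: true
        ssm_path_prefix: "sso/saml/google"
        usernameAttr: "username"   # placeholder attribute names
        emailAttr: "email"
        groupsAttr: "groups"
```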
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`ca`
Raw signing certificate
`emailAttr`
Email attribute
`groupsAttr`
Groups attribute
`issuer`
Identity Provider Single Sign-On Issuer URL
`url`
Identity Provider Single Sign-On URL
`usernameAttr`
User name attribute
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`store_read` | 0.13.0 | [`cloudposse/ssm-parameter-store/aws`](https://registry.terraform.io/modules/cloudposse/ssm-parameter-store/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## strongdm
This component provisions [strongDM](https://www.strongdm.com/) gateways, relays, and roles.
## Usage
**Stack Level**: Regional
Use this in the catalog, or use these variables to override the catalog values.
```yaml
components:
terraform:
strong-dm:
vars:
enabled: true
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`ssm_account` (`string`) required
Account (stage) housing SSM parameters
`ssm_region` (`string`) required
AWS Region housing SSM parameters
### Optional Variables
`create_roles` (`bool`) optional
Set `true` to create roles (should only be set in one account)
**Default value:** `false`
`dns_zone` (`string`) optional
DNS zone (e.g. example.com) into which to install the web host.
**Default value:** `null`
`gateway_count` (`number`) optional
Number of gateways to provision
**Default value:** `2`
`install_gateway` (`bool`) optional
Set `true` to install a pair of gateways
**Default value:** `false`
`install_relay` (`bool`) optional
Set `true` to install a pair of relays
**Default value:** `true`
`kms_alias_name` (`string`) optional
AWS KMS alias used for encryption/decryption. Defaults to the alias used by SSM.
**Default value:** `"alias/aws/ssm"`
`kubernetes_namespace` (`string`) optional
The Kubernetes namespace to install the release into. Defaults to `default`.
**Default value:** `null`
`register_nodes` (`bool`) optional
Set `true` to register nodes as SSH targets
**Default value:** `true`
`relay_count` (`number`) optional
Number of relays to provision
**Default value:** `2`
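A fuller configuration than the usage snippet might combine the relay/gateway toggles with the SSM location. The account, region, and count values below are placeholders:

```yaml
components:
  terraform:
    strong-dm:
      vars:
        enabled: true
        ssm_account: corp        # placeholder: account (stage) housing SSM parameters
        ssm_region: us-east-1    # placeholder: region housing SSM parameters
        install_relay: true
        relay_count: 2
        register_nodes: true     # register nodes as SSH targets
        create_roles: false      # only one account should set this to true
```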
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional tags for appending to tags_as_list_of_maps. Not added to `tags`.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
Additional attributes (e.g. `1`)
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {}
}
```
`delimiter` (`string`) optional
Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for default, which is `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
The letter case of label keys (`tag` names) (i.e. `name`, `namespace`, `environment`, `stage`, `attributes`) to use in `tags`.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
The letter case of output label values (also used in `tags` and `id`).
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Default value: `lower`.
**Required:** No
**Default value:** `null`
`name` (`string`) optional
Solution name, e.g. 'app' or 'jenkins'
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `map('BusinessUnit','XYZ')`)
**Required:** No
**Default value:** `{ }`
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
- `aws`, version: `>= 3.0, < 6.0.0`
- `helm`, version: `>= 2.2.0`
- `sdm`, version: `>= 1.0.19`
### Providers
- `aws`, version: `>= 3.0, < 6.0.0`
- `helm`, version: `>= 2.2.0`
- `sdm`, version: `>= 1.0.19`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`iam_roles_network` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.gateway_tokens`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.relay_tokens`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`aws_ssm_parameter.ssh_admin_token`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
- [`helm_release.cleanup`](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) (resource)
- [`helm_release.gateway`](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) (resource)
- [`helm_release.node`](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) (resource)
- [`helm_release.relay`](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) (resource)
- [`sdm_node.gateway`](https://registry.terraform.io/providers/strongdm/sdm/latest/docs/resources/node) (resource)
- [`sdm_node.relay`](https://registry.terraform.io/providers/strongdm/sdm/latest/docs/resources/node) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ssm_parameter.api_access_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.api_secret_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`aws_ssm_parameter.ssh_admin_token`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
---
## tfstate-backend
This component is responsible for provisioning an S3 Bucket and DynamoDB table that follow security best practices for
usage as a Terraform backend. It also creates IAM roles for access to the Terraform backend.
Once the initial S3 backend is configured, this component can create additional backends, allowing you to segregate them
and control access to each backend separately. This may be desirable because any secret or sensitive information (such
as generated passwords) that Terraform has access to gets stored in the Terraform state backend S3 bucket, so you may
wish to restrict who can read the production Terraform state backend S3 bucket. However, perhaps counter-intuitively,
all Terraform users require read access to the most sensitive accounts, such as `root` and `audit`, in order to read
security configuration information, so careful planning is required when architecting backend splits.
## Prerequisites
:::tip
This component is part of the cold start, so it must initially be run as `SuperAdmin`, multiple times: first to create
the S3 bucket, and then to move the Terraform state into it. Follow the guide
**[here](https://docs.cloudposse.com/layers/accounts/tutorials/manual-configuration/#provision-tfstate-backend-component)**
to get started.
:::
- This component assumes you are using the `aws-teams` and `aws-team-roles` components.
- Before the `account` and `account-map` components are deployed for the first time, you'll want to run this component
with `access_roles_enabled` set to `false` to prevent errors due to missing IAM Role ARNs. This will enable only
enough access to the Terraform state for you to finish provisioning accounts and roles. After those components have
been deployed, you will want to run this component again with `access_roles_enabled` set to `true` to provide the
complete access as configured in the stacks.
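The two-phase deployment described above can be sketched as follows (the stack layout is assumed; the variable names are as documented below):

```yaml
# Phase 1 (cold start): deploy with access roles disabled,
# before the `account` and `account-map` components exist
components:
  terraform:
    tfstate-backend:
      vars:
        access_roles_enabled: false

# Phase 2: after accounts and roles are deployed, re-apply with
# `access_roles_enabled: true` (the default) to create the full IAM roles
```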
### Access Control
For each backend, this module will create an IAM role with read/write access and, optionally, an IAM role with read-only
access. You can configure who is allowed to assume these roles.
- While read/write access is required for `terraform apply`, the created role only grants read/write access to the
Terraform state; it does not grant permission to create/modify/destroy AWS resources.
- Similarly, while the read-only role prohibits making changes to the Terraform state, it does not prevent anyone from
making changes to AWS resources using a different role.
- Many Cloud Posse components store information about resources they create in the Terraform state via their outputs,
and many other components read this information from the Terraform state backend via the CloudPosse `remote-state`
module and use it as part of their configuration. For example, the `account-map` component exists solely for the
purpose of organizing information about the created AWS accounts and storing it in its Terraform state, making it
available via `remote-state`. This means that if you are going to restrict access to some backends, you need to
carefully orchestrate what is stored there and ensure that you are not storing information a component needs in a
backend it will not have access to. Typically, information in the most sensitive accounts, such as `root`, `audit`,
and `security`, is nevertheless needed by every account, for example to know where to send audit logs, so it is not
obvious and can be counter-intuitive which accounts need access to which backends. Plan carefully.
- Atmos provides separate configuration for Terraform state access via the `backend` and `remote_state_backend`
settings. Always configure the `backend` setting with a role that has read/write access (and override that setting to
be `null` for components deployed by SuperAdmin). If a read-only role is available (only helpful if you have more than
one backend), use that role in `remote_state_backend.s3.role_arn`. Otherwise, use the read/write role in
`remote_state_backend.s3.role_arn`, to ensure that all components can read the Terraform state, even if
`backend.s3.role_arn` is set to `null`, as it is with a few critical components meant to be deployed by SuperAdmin.
- Note that the "read-only" in the "read-only role" refers solely to the S3 bucket that stores the backend data. That
role still has read/write access to the DynamoDB table, which is desirable so that users restricted to the read-only
role can still perform drift detection by running `terraform plan`. The DynamoDB table only stores checksums and
mutual-exclusion lock information, so it is not considered sensitive. The worst a malicious user could do would be to
corrupt the table and cause a denial-of-service (DoS) for Terraform, but such a DoS would only affect making changes to
the infrastructure; it would not affect the operation of the existing infrastructure, so it is an ineffective and
therefore unlikely vector of attack. (Also note that the entire DynamoDB table is optional and can be deleted
entirely; Terraform will repopulate it as new activity takes place.)
- For convenience, the component automatically grants access to the backend to the user deploying it. This is helpful
because it allows that user, presumably SuperAdmin, to deploy the normal components, which expect that the user does
not have direct access to Terraform state, without requiring custom configuration. However, you may want to explicitly grant
SuperAdmin access to the backend in the `allowed_principal_arns` configuration, to ensure that SuperAdmin can always
access the backend, even if the component is later updated by the `root-admin` role.
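To make the `backend` / `remote_state_backend` guidance above concrete, here is a hedged sketch of an Atmos stack configuration for a consuming component (the account ID, role names, and component name are placeholders):

```yaml
components:
  terraform:
    example-component:
      # Read/write role, used by `terraform plan/apply`
      backend:
        s3:
          role_arn: arn:aws:iam::111111111111:role/acme-core-gbl-root-tfstate
      # Read-only role (if one was provisioned), used for `remote-state` lookups;
      # otherwise reuse the read/write role here so reads always succeed
      remote_state_backend:
        s3:
          role_arn: arn:aws:iam::111111111111:role/acme-core-gbl-root-tfstate-ro
```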
### Quotas
When allowing access to both SAML and AWS SSO users, the trust policy for the IAM roles created by this component can
exceed the default 2048 character limit. If you encounter this error, you can increase the limit by requesting a quota
increase [here](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/iam/quotas/L-C07B4B0D). Note that
this is the IAM limit on "The maximum number of characters in an IAM role trust policy" and it must be configured in the
`us-east-1` region, regardless of what region you are deploying to. Normally 3072 characters is sufficient and is the
recommended value, so that you still have room to expand the trust policy later while you consider how to reduce its
size.
## Usage
**Stack Level**: Regional (because DynamoDB is region-specific), but deploy only in a single region and only in the
`root` account
**Deployment**: Must be deployed by SuperAdmin using the `atmos` CLI
This component configures the shared Terraform backend, and as such is the first component that must be deployed, since
all other components depend on it. In fact, this component even depends on itself, so special deployment procedures are
needed for the initial deployment (documented in the "Cold Start" procedures).
Here's an example snippet for how to use this component.
```yaml
terraform:
  tfstate-backend:
    backend:
      s3:
        role_arn: null
    settings:
      spacelift:
        workspace_enabled: false
    vars:
      enable_server_side_encryption: true
      enabled: true
      force_destroy: false
      name: tfstate
      prevent_unencrypted_uploads: true
      access_roles:
        default: &tfstate-access-template
          write_enabled: true
          allowed_roles:
            core-identity: ["devops", "developers", "managers", "spacelift"]
            core-root: ["admin"]
          denied_roles: {}
          allowed_permission_sets:
            core-identity: ["AdministratorAccess"]
          denied_permission_sets: {}
          allowed_principal_arns: []
          denied_principal_arns: []
```
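If you prefer S3-native state locking over DynamoDB, the `s3_state_lock_enabled` and `dynamodb_enabled` inputs (documented below) can be combined; a sketch, assuming your Terraform version supports S3 lock files:

```yaml
terraform:
  tfstate-backend:
    vars:
      s3_state_lock_enabled: true # use S3 for the state lock
      dynamodb_enabled: false     # skip creating the DynamoDB lock table
```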
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`access_roles` optional
Map of access roles to create (key is role name, use "default" for same as component).
For `allowed_roles` and `denied_roles`, the map keys can be either AWS account IDs (12-digit numbers) or account names.
If account names are used, they will be resolved to account IDs using the `account_map` variable.
The values are lists of role names (e.g., ["admin", "terraform"]). Use ["*"] to allow/deny all roles in an account.
For `allowed_permission_sets` and `denied_permission_sets`, the map keys can be either AWS account IDs or account names.
If account names are used, they will be resolved to account IDs using the `account_map` variable.
The values are lists of permission set names (e.g., ["TerraformUpdateAccess"]).
Role ARNs are constructed as: `arn:{partition}:iam::{account_id}:role/{namespace}-{environment}-{stage}-{name}-{role_name}`
Permission set ARNs are constructed as: `arn:{partition}:iam::{account_id}:role/aws-reserved/sso.amazonaws.com*/AWSReservedSSO_{permission_set_name}_*`
**Type:**
```hcl
map(object({
  write_enabled           = bool
  allowed_roles           = map(list(string))
  denied_roles            = map(list(string))
  allowed_principal_arns  = list(string)
  denied_principal_arns   = list(string)
  allowed_permission_sets = map(list(string))
  denied_permission_sets  = map(list(string))
}))
```
**Default value:** `{ }`
`access_roles_enabled` (`bool`) optional
Enable access roles to be assumed. Set `false` for cold start to use a basic trust policy
that only allows the current caller and explicitly allowed principals.
Note that the current caller and any `allowed_principal_arns` will always be allowed to assume the role.
**Default value:** `true`
`account_map` optional
Static account map used when account_map_enabled is false.
Provides account name to account ID mapping without requiring the account-map component.
- full_account_map: Map of account name to account ID
- iam_role_arn_templates: Optional map of account name to IAM role ARN template
(e.g., `{ "identity" = "arn:aws:iam::123456789012:role/acme-gbl-identity-%s" }`)
- identity_account_account_name: Name of the identity account (default: "identity")
**Type:**
```hcl
object({
  full_account_map              = map(string)
  iam_role_arn_templates        = optional(map(string), {})
  identity_account_account_name = optional(string, "identity")
})
```
**Default value:**
```hcl
{
  "full_account_map": {},
  "iam_role_arn_templates": {},
  "identity_account_account_name": "identity"
}
```
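For example, a static mapping for a deployment without the account-map component might look like this (the account names and IDs are placeholders):

```yaml
components:
  terraform:
    tfstate-backend:
      vars:
        account_map_enabled: false
        account_map:
          full_account_map:
            root: "111111111111"
            identity: "222222222222"
          identity_account_account_name: identity
```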
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_enabled` (`bool`) optional
When true, uses the account-map component to look up account IDs dynamically.
When false, uses the static account_map variable instead. Set to false when
deploying without the account-map component or when using static account mappings.
**Default value:** `true`
`account_map_environment` (`string`) optional
The environment where the account-map component is deployed (e.g., 'gbl')
**Default value:** `"gbl"`
`account_map_stage` (`string`) optional
The stage where the account-map component is deployed (e.g., 'root')
**Default value:** `"root"`
`account_map_tenant` (`string`) optional
The tenant where the account-map component is deployed (defaults to current tenant)
**Default value:** `"core"`
`dynamodb_enabled` (`bool`) optional
Whether to create the DynamoDB table.
**Default value:** `true`
`force_destroy` (`bool`) optional
A boolean that indicates the Terraform state S3 bucket can be destroyed even if it contains objects. These objects are not recoverable.
**Default value:** `false`
`prevent_unencrypted_uploads` (`bool`) optional
Prevent uploads of unencrypted objects to S3
**Default value:** `true`
`privileged` (`bool`) optional
True if the Terraform user already has access to the backend
**Default value:** `false`
`s3_state_lock_enabled` (`bool`) optional
Whether to use S3 for state lock. If true, the DynamoDB table will not be created.
**Default value:** `false`
`use_organization_id` (`bool`) optional
If `true`, use AWS Organization ID (`aws:PrincipalOrgID` condition) in trust policies instead of
listing individual account root ARNs. When enabled, the principal is set to `*` and access is
restricted to the AWS Organization via a condition.
This is recommended (and often required) when you have many accounts because IAM trust policies
have a maximum size limit of 4096 characters. Listing each account root ARN individually can
easily exceed this limit in organizations with more than ~30 accounts.
If `false`, each account root is listed individually in the principals block, which may hit
the trust policy size limit in larger organizations.
**Default value:** `false`
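The resulting trust-policy statement has roughly this shape when `use_organization_id` is `true` (illustrative only; the organization ID is a placeholder):

```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "*" },
  "Action": "sts:AssumeRole",
  "Condition": {
    "StringEquals": { "aws:PrincipalOrgID": "o-xxxxxxxxxx" }
  }
}
```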
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`tfstate_backend_access_role_arns`
IAM Role ARNs for accessing the Terraform State Backend
`tfstate_backend_dynamodb_table_arn`
Terraform state DynamoDB table ARN
`tfstate_backend_dynamodb_table_id`
Terraform state DynamoDB table ID
`tfstate_backend_dynamodb_table_name`
Terraform state DynamoDB table name
`tfstate_backend_s3_bucket_arn`
Terraform state S3 bucket ARN
`tfstate_backend_s3_bucket_domain_name`
Terraform state S3 bucket domain name
`tfstate_backend_s3_bucket_id`
Terraform state S3 bucket ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.9.0, < 6.0.0`
- `awsutils`, version: `>= 0.16.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 2.0.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/2.0.0) | Remote state lookup for the account-map component (or fallback to static mapping). When account_map_enabled is true: - Performs remote state lookup to retrieve account mappings from the account-map component - Uses account_map_tenant, account_map_environment, account_map_stage for the lookup When account_map_enabled is false: - Bypasses the remote state lookup (bypass = true) - Returns the static account_map variable as defaults instead - Allows the component to function without the account-map dependency
`assume_role` | latest | `./modules/assume-role-policy` | Use the assume-role-policy submodule to generate trust policies
`label` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`tfstate_backend` | 1.7.1 | [`cloudposse/tfstate-backend/aws`](https://registry.terraform.io/modules/cloudposse/tfstate-backend/aws/1.7.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_role.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) (resource)
- [`aws_iam_role_policy.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_arn.cold_start_access`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/arn) (data source)
- [`aws_iam_policy_document.cold_start_assume_role`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_iam_policy_document.tfstate`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) (data source)
- [`aws_organizations_organization.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/organizations_organization) (data source)
- [`aws_partition.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) (data source)
- [`awsutils_caller_identity.current`](https://registry.terraform.io/providers/cloudposse/awsutils/latest/docs/data-sources/caller_identity) (data source)
---
## tgw
AWS Transit Gateway connects your Amazon Virtual Private Clouds (VPCs) and on-premises networks through a central hub.
This connection simplifies your network and puts an end to complex peering relationships. Transit Gateway acts as a
highly scalable cloud router—each new connection is made only once.
For more on Transit Gateway, see [the AWS documentation](https://aws.amazon.com/transit-gateway/).
## Requirements
In order to connect accounts with Transit Gateway, we deploy Transit Gateway to a central account, typically
`core-network`, and then deploy Transit Gateway attachments for each connected account. Each connected account needs a
Transit Gateway attachment for the given account's VPC, either a VPC attachment or a Peering Connection attachment.
Furthermore, each private subnet in each connected VPC needs to explicitly list the CIDRs for all allowed connections.
## Solution
First we deploy the Transit Gateway Hub, `tgw/hub`, to a central network account. The component prepares the Transit
Gateway network with the following steps:
1. Provision Transit Gateway in the network account
2. Collect VPC and EKS component output from every account connected to Transit Gateway
3. Share the Transit Gateway with the Organization using Resource Access Manager (RAM)
By using the `tgw/hub` component to collect Terraform output from connected accounts, only this single component
requires access to the Terraform state of all connected accounts.
Next we deploy `tgw/spoke` to the network account and then to every connected account. This spoke component connects the
given account to the central hub and any listed connection with the following steps:
1. Create a Transit Gateway VPC attachment in the spoke account. This connects the account's VPC to the shared Transit
Gateway from the hub account.
2. Define all allowed routes for private subnets. Each private subnet in an account's VPC has its own route table. This
route table needs to explicitly list any allowed connection to another account's VPC CIDR.
3. (Optional) Create an EKS Cluster Security Group rule to allow traffic to the cluster in the given account.
## Implementation
1. Deploy `tgw/hub` to the network account. List every allowed connection:
```yaml
# stacks/catalog/tgw/hub
components:
  terraform:
    tgw/hub/defaults:
      metadata:
        type: abstract
        component: tgw/hub
      vars:
        enabled: true
        name: tgw-hub
        tags:
          Team: sre
          Service: tgw-hub
    tgw/hub:
      metadata:
        inherits:
          - tgw/hub/defaults
        component: tgw/hub
      vars:
        # These are all connections available for spokes in this region
        # Defaults environment to this region
        connections:
          - account:
              tenant: core
              stage: network
          - account:
              tenant: core
              stage: auto
            eks_component_names:
              - eks/cluster
          - account:
              tenant: plat
              stage: sandbox
            eks_component_names: [] # No clusters deployed for sandbox
          - account:
              tenant: plat
              stage: dev
            eks_component_names:
              - eks/cluster
          - account:
              tenant: plat
              stage: staging
            eks_component_names:
              - eks/cluster
          - account:
              tenant: plat
              stage: prod
            eks_component_names:
              - eks/cluster
```
2. Deploy `tgw/spoke` to network. List every account connected to network (all accounts):
```yaml
# stacks/catalog/tgw/spoke
components:
  terraform:
    tgw/spoke-defaults:
      metadata:
        type: abstract
        component: tgw/spoke
      vars:
        enabled: true
        name: tgw-spoke
        tgw_hub_tenant_name: core
        tgw_hub_stage_name: network # default, added for visibility
        tags:
          Team: sre
          Service: tgw-spoke
```
```yaml
# stacks/orgs/acme/core/network/us-east-1/network.yaml
tgw/spoke:
  metadata:
    inherits:
      - tgw/spoke-defaults
  vars:
    # This is what THIS spoke is allowed to connect to
    connections:
      - account:
          tenant: core
          stage: network
      - account:
          tenant: core
          stage: auto
      - account:
          tenant: plat
          stage: sandbox
      - account:
          tenant: plat
          stage: dev
      - account:
          tenant: plat
          stage: staging
      - account:
          tenant: plat
          stage: prod
```
3. Finally, deploy `tgw/spoke` for each connected account and list the allowed connections:
```yaml
# stacks/orgs/acme/plat/dev/us-east-1/network.yaml
tgw/spoke:
  metadata:
    inherits:
      - tgw/spoke-defaults
  vars:
    connections:
      # Always list self
      - account:
          tenant: plat
          stage: dev
      - account:
          tenant: core
          stage: network
      - account:
          tenant: core
          stage: auto
```
### Alternate Regions
In order to connect any account to the network, the given account needs:
1. Access to the shared Transit Gateway hub
2. An attachment for the given Transit Gateway hub
3. Routes to and from each private subnet
However, sharing the Transit Gateway hub via RAM is only supported in the same region as the primary hub. Therefore, we
must instead deploy a new hub in the alternate region and create a
[Transit Gateway Peering Connection](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-peering.html) between the two
Transit Gateway hubs.
Furthermore, since this Transit Gateway hub for the alternate region is now peered, we must create a Peering Transit
Gateway attachment, as opposed to a VPC Transit Gateway attachment.
#### Cross Region Deployment
1. Deploy `tgw/hub` and `tgw/spoke` into the primary region as described in [Implementation](#implementation)
2. Deploy `tgw/hub` and `tgw/cross-region-hub` into the new region in the network account. See the following
configuration:
```yaml
# stacks/catalog/tgw/cross-region-hub
import:
  - catalog/tgw/hub
components:
  terraform:
    # Cross region TGW requires additional hub in the alternate region
    tgw/hub:
      vars:
        # These are all connections available for spokes in this region
        # Defaults environment to this region
        connections:
          # Hub for this region is always required
          - account:
              tenant: core
              stage: network
          # VPN source
          - account:
              tenant: core
              stage: network
              environment: use1
          # Github Runners
          - account:
              tenant: core
              stage: auto
              environment: use1
            eks_component_names:
              - eks/cluster
          # All stacks where a spoke will be deployed
          - account:
              tenant: plat
              stage: dev
          - account:
              tenant: plat
              stage: staging
          - account:
              tenant: plat
              stage: prod
    # This alternate hub needs to be connected to the primary region's hub
    tgw/cross-region-hub-connector:
      vars:
        enabled: true
        primary_tgw_hub_region: us-east-1
```
3. Deploy a `tgw/spoke` for network in the new region. For example:
```yaml
# stacks/orgs/acme/core/network/us-west-2/network.yaml
tgw/spoke:
  metadata:
    inherits:
      - tgw/spoke-defaults
  vars:
    peered_region: true # Required for alternate region spokes
    connections:
      # This stack, always included
      - account:
          tenant: core
          stage: network
      # VPN
      - account:
          tenant: core
          environment: use1
          stage: network
      # Automation runners
      - account:
          tenant: core
          environment: use1
          stage: auto
        eks_component_names:
          - eks/cluster
      # All other connections
      - account:
          tenant: plat
          stage: dev
      - account:
          tenant: plat
          stage: staging
      - account:
          tenant: plat
          stage: prod
```
4. Deploy the `tgw/spoke` components for all connected accounts. For example:
```yaml
# stacks/orgs/acme/plat/dev/us-west-2/network.yaml
tgw/spoke:
  metadata:
    inherits:
      - tgw/spoke-defaults
  vars:
    peered_region: true # Required for alternate region spokes
    connections:
      # This stack, always included
      - account:
          tenant: plat
          stage: dev
      # TGW Hub, always included
      - account:
          tenant: core
          stage: network
      # VPN
      - account:
          tenant: core
          environment: use1
          stage: network
      # Automation runners
      - account:
          tenant: core
          environment: use1
          stage: auto
        eks_component_names:
          - eks/cluster
```
5. Update any existing `tgw/spoke` connections to allow the new account and region. For example:
```yaml
# stacks/orgs/acme/core/auto/us-east-1/network.yaml
tgw/spoke:
  metadata:
    inherits:
      - tgw/spoke-defaults
  vars:
    connections:
      - account:
          tenant: core
          stage: network
      - account:
          tenant: core
          stage: corp
      - account:
          tenant: core
          stage: auto
      - account:
          tenant: plat
          stage: sandbox
      - account:
          tenant: plat
          stage: dev
      - account:
          tenant: plat
          stage: staging
      - account:
          tenant: plat
          stage: prod
      # Alternate regions <-------- These are added for alternate region
      - account:
          tenant: core
          stage: network
          environment: usw2
      - account:
          tenant: plat
          stage: dev
          environment: usw2
      - account:
          tenant: plat
          stage: staging
          environment: usw2
      - account:
          tenant: plat
          stage: prod
          environment: usw2
```
## Destruction
When destroying Transit Gateway components, order of operations matters. Always destroy any removed `tgw/spoke`
components before removing a connection from the `tgw/hub` component.
The `tgw/hub` component creates a map of VPC resources that each `tgw/spoke` component references. If the required
reference is removed before the `tgw/spoke` is destroyed, Terraform will fail to destroy the given `tgw/spoke`
component.
:::info Pro Tip!
[Atmos Workflows](https://atmos.tools/core-concepts/workflows/) make applying and destroying Transit Gateway much
easier! For example, to destroy components in the correct order, use a workflow similar to the following:
```yaml
# stacks/workflows/network.yaml
workflows:
  destroy/tgw:
    description: Destroy the Transit Gateway "hub" and "spokes" for connecting VPCs.
    steps:
      - command: echo 'Destroying platform spokes for Transit Gateway'
        type: shell
        name: plat-spokes
      - command: terraform destroy tgw/spoke -s plat-use1-sandbox --auto-approve
      - command: terraform destroy tgw/spoke -s plat-use1-dev --auto-approve
      - command: terraform destroy tgw/spoke -s plat-use1-staging --auto-approve
      - command: terraform destroy tgw/spoke -s plat-use1-prod --auto-approve
      - command: echo 'Destroying core spokes for Transit Gateway'
        type: shell
        name: core-spokes
      - command: terraform destroy tgw/spoke -s core-use1-auto --auto-approve
      - command: terraform destroy tgw/spoke -s core-use1-network --auto-approve
      - command: echo 'Destroying Transit Gateway Hub'
        type: shell
        name: hub
      - command: terraform destroy tgw/hub -s core-use1-network --auto-approve
```
:::
# FAQ
## `tgw/spoke` Fails to Recreate VPC Attachment with `DuplicateTransitGatewayAttachment` Error
```bash
╷
│ Error: creating EC2 Transit Gateway VPC Attachment: DuplicateTransitGatewayAttachment: tgw-0xxxxxxxxxxxxxxxx has non-deleted Transit Gateway Attachments with same VPC ID.
│ status code: 400, request id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
│
│ with module.tgw_spoke_vpc_attachment.module.standard_vpc_attachment.aws_ec2_transit_gateway_vpc_attachment.default["core-use2-network"],
│ on .terraform/modules/tgw_spoke_vpc_attachment.standard_vpc_attachment/main.tf line 43, in resource "aws_ec2_transit_gateway_vpc_attachment" "default":
│ 43: resource "aws_ec2_transit_gateway_vpc_attachment" "default" {
│
╵
Releasing state lock. This may take a few moments...
exit status 1
```
This is caused by Terraform attempting to create the replacement VPC attachment before the original is completely
destroyed. Retry the apply once the original attachment has finished deleting; you should then see only "create" actions.
# Changelog
## Upgrading to `v1.276.0`
Components PR [#804](https://github.com/cloudposse/terraform-aws-components/pull/804)
### Affected Components
- `tgw/hub`
- `tgw/spoke`
- `tgw/cross-region-hub-connector`
### Summary
This change to the Transit Gateway components,
[PR #804](https://github.com/cloudposse/terraform-aws-components/pull/804), added support for cross-region connections.
As part of that change, we've added `environment` to the component identifier used in the Terraform Output created by
`tgw/hub`. Because of that map key change, all resources in Terraform now have a new resource identifier and therefore
must be recreated with Terraform or removed from state and imported into the new resource ID.
Recreating the resources is the easiest solution but means that Transit Gateway connectivity will be lost while the
changes apply, which typically takes an hour. Alternatively, removing the resources from state and importing back into
the new resource ID is much more complex operationally but means no lost Transit Gateway connectivity.
Since we use Transit Gateway for VPN and GitHub Automation runner access, a temporarily lost connection is not a
significant concern, so we choose to accept lost connectivity and recreate all `tgw/spoke` resources.
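For teams that cannot tolerate the outage, the state-surgery alternative looks roughly like the following. The resource address reuses the pattern shown in the FAQ above, but the exact addresses, map keys, and attachment ID are illustrative; always list the real addresses from your own state first.

```bash
# List the affected resources in a spoke's state (stack name is illustrative)
atmos terraform state list tgw/spoke -s plat-use1-dev

# Remove the resource recorded under the old map key (without the environment)...
atmos terraform state rm tgw/spoke -s plat-use1-dev \
  'module.tgw_spoke_vpc_attachment.module.standard_vpc_attachment.aws_ec2_transit_gateway_vpc_attachment.default["core-network"]'

# ...and import it under the new key that includes the environment,
# using the existing attachment ID from the AWS console or CLI
atmos terraform import tgw/spoke -s plat-use1-dev \
  'module.tgw_spoke_vpc_attachment.module.standard_vpc_attachment.aws_ec2_transit_gateway_vpc_attachment.default["core-use1-network"]' \
  'tgw-attach-0123456789abcdef0'
```

Repeat for every resource whose identifier changed, in every spoke account.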
### Steps
1. Notify your team of a temporary VPN and Automation outage for accessing private networks
2. Deploy all `tgw/hub` components. There should be a hub component in each region of your network account connected to
Transit Gateway
3. Deploy all `tgw/spoke` components. There should be a spoke component in every account and every region connected to
Transit Gateway
#### Tips
Use workflows to deploy `tgw` across many accounts with a single command:
```bash
atmos workflow deploy/tgw -f network
```
```yaml
# stacks/workflows/network.yaml
workflows:
  deploy/tgw:
    description: Provision the Transit Gateway "hub" and "spokes" for connecting VPCs.
    steps:
      - command: terraform deploy tgw/hub -s core-use1-network
        name: hub
      - command: terraform deploy tgw/spoke -s core-use1-network
      - command: echo 'Creating core spokes for Transit Gateway'
        type: shell
        name: core-spokes
      - command: terraform deploy tgw/spoke -s core-use1-corp
      - command: terraform deploy tgw/spoke -s core-use1-auto
      - command: terraform deploy tgw/spoke -s plat-use1-sandbox
      - command: echo 'Creating platform spokes for Transit Gateway'
        type: shell
        name: plat-spokes
      - command: terraform deploy tgw/spoke -s plat-use1-dev
      - command: terraform deploy tgw/spoke -s plat-use1-staging
      - command: terraform deploy tgw/spoke -s plat-use1-prod
```
---
## attachment
This component creates a Transit Gateway VPC Attachment and optionally creates an association with a Transit Gateway Route Table.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
  terraform:
    tgw/attachment:
      vars:
        enabled: true
        transit_gateway_id: !terraform.output tgw/hub core-use1-network transit_gateway_id
        transit_gateway_route_table_id: !terraform.output tgw/hub core-use1-network transit_gateway_route_table_id
        create_transit_gateway_route_table_association: false
```
#### Transit Gateway Route Table Association
In the primary account (the account that has the Transit Gateway and the Transit Gateway Route Table), we need to create an association with the Transit Gateway Route Table. This association is what allows attachments to connect to the Transit Gateway Route Table. For example, if you have a Transit Gateway Route Table in the _core-network_ account, you will need to create an association for each VPC connected to that Transit Gateway Route Table.
The intention is to have all configuration for a given account in the same stack as that account. For example, since the Transit Gateway Route Table is in the _core-network_ account, we would create all necessary associations in the _core-network_ account.
```yaml
# core-network stack
components:
  terraform:
    tgw/attachment:
      vars:
        enabled: true
        transit_gateway_id: !terraform.output tgw/hub core-usw2-network transit_gateway_id
        transit_gateway_route_table_id: !terraform.output tgw/hub core-usw2-network transit_gateway_route_table_id
        # Add an association for this account itself
        create_transit_gateway_route_table_association: true
        # Include association for each of the connected accounts, if necessary
        additional_associations:
          - attachment_id: !terraform.output tgw/attachment plat-usw2-dev transit_gateway_attachment_id
            route_table_id: !terraform.output tgw/hub transit_gateway_route_table_id
          - attachment_id: !terraform.output tgw/attachment plat-usw2-prod transit_gateway_attachment_id
            route_table_id: !terraform.output tgw/hub transit_gateway_route_table_id
```
In connected accounts, meaning accounts that do _not_ have a Transit Gateway and Transit Gateway Route Table, you do not need to create any associations.
```yaml
# plat-dev stack
components:
  terraform:
    tgw/attachment:
      vars:
        enabled: true
        transit_gateway_id: !terraform.output tgw/hub core-usw2-network transit_gateway_id
        transit_gateway_route_table_id: !terraform.output tgw/hub core-usw2-network transit_gateway_route_table_id
        # Do not create an association in this account since there is no Transit Gateway Route Table in this account.
        create_transit_gateway_route_table_association: false
```
Repeat the same configuration for all other connected accounts.
## Variables
### Required Variables
IDs of the private subnets to attach to. Required if `vpc_id` is defined.
**Default value:** `[ ]`
### Optional Variables
`vpc_component_name` (`string`) optional
The name of the vpc component
**Default value:** `"vpc"`
`vpc_id` (`string`) optional
ID of the VPC to attach to
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`transit_gateway_vpc_attachment_id`
ID of the Transit Gateway VPC Attachment
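Downstream components can consume this output with the Atmos `!terraform.output` function, for example in the `additional_associations` of the network account's attachment (stack name illustrative):

```yaml
additional_associations:
  - attachment_id: !terraform.output tgw/attachment plat-use1-dev transit_gateway_vpc_attachment_id
    route_table_id: !terraform.output tgw/hub transit_gateway_route_table_id
```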
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`standard_vpc_attachment` | 0.13.0 | [`cloudposse/transit-gateway/aws`](https://registry.terraform.io/modules/cloudposse/transit-gateway/aws/0.13.0) | Create a TGW attachment from this account's VPC to the TGW Hub
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## hub
This component is responsible for provisioning an [AWS Transit Gateway](https://aws.amazon.com/transit-gateway) `hub`
that acts as a centralized gateway for connecting VPCs from other `spoke` accounts.
## Usage
**Stack Level**: Regional
## Basic Usage with `tgw/spoke`
Here's an example snippet for how to configure and use this component:
```yaml
components:
  terraform:
    tgw/hub/defaults:
      metadata:
        type: abstract
        component: tgw/hub
      vars:
        enabled: true
        name: tgw-hub
        expose_eks_sg: false
        tags:
          Team: sre
          Service: tgw-hub

    tgw/hub:
      metadata:
        inherits:
          - tgw/hub/defaults
        component: tgw/hub
      vars:
        connections:
          - account:
              tenant: core
              stage: network
            vpc_component_names:
              - vpc-dev
          - account:
              tenant: core
              stage: artifacts
          - account:
              tenant: core
              stage: auto
            eks_component_names:
              - eks/cluster
          - account:
              tenant: plat
              stage: dev
            vpc_component_names:
              - vpc
              - vpc/data/1
            eks_component_names:
              - eks/cluster
          - account:
              tenant: plat
              stage: staging
            vpc_component_names:
              - vpc
              - vpc/data/1
            eks_component_names:
              - eks/cluster
          - account:
              tenant: plat
              stage: prod
            vpc_component_names:
              - vpc
              - vpc/data/1
            eks_component_names:
              - eks/cluster
```
To provision the Transit Gateway and all related resources, run the following commands:
```sh
atmos terraform plan tgw/hub -s core-use1-network
atmos terraform apply tgw/hub -s core-use1-network
```
## Alternate Usage with `tgw/attachment`, `tgw/routes`, and `vpc/routes`
### Components Overview
- **`tgw/hub`**: Creates the Transit Gateway in the network account
- **`tgw/attachment`**: Creates and manages Transit Gateway VPC attachments in connected accounts
- **`tgw/hub-connection`**: Creates the Transit Gateway peering connection between two `tgw/hub` deployments
- **`tgw/routes`**: Manages Transit Gateway route tables in the network account
- **`vpc-routes`** (`vpc/routes/private`): Configures VPC route tables in connected accounts to route traffic through the Transit Gateway (Note: This component lives outside the `tgw/` directory since it's not specific to Transit Gateway)
### Architecture
The Transit Gateway components work together in the following way:
1. Transit Gateway is created in the network account (`tgw/hub`)
2. VPCs in other accounts attach to the Transit Gateway (`tgw/attachment`)
3. Route tables in connected VPCs direct traffic across accounts (`vpc-routes`)
4. Transit Gateway route tables control routing between attachments (`tgw/routes`)
```mermaid
graph TD
  subgraph core-use1-network
    TGW[Transit Gateway]
    TGW_RT[TGW Route Tables]
  end
  subgraph plat-use1-dev
    VPC1[VPC]
    VPC1_RT[VPC Route Tables]
    ATT1[TGW Attachment]
  end
  subgraph core-use1-auto
    VPC2[VPC]
    VPC2_RT[VPC Route Tables]
    ATT2[TGW Attachment]
  end
  ATT1 <--> TGW
  ATT2 <--> TGW
  TGW <--> TGW_RT
  VPC1_RT <--> VPC1
  VPC2_RT <--> VPC2
  VPC1 <--> ATT1
  VPC2 <--> ATT2
```
### Deployment Steps
#### 1. Deploy Transit Gateway Hub
First, create the Transit Gateway in the network account.
:::tip
Leave `var.connections` empty. With this refactor, the `tgw/hub` component is only responsible for creating the Transit Gateway and its route tables. We do not need to fetch and store outputs for the connected components anymore.
:::
```yaml
components:
  terraform:
    tgw/hub:
      vars:
        connections: []
```
#### 2. Deploy VPC Attachments
Important: Deploy attachments in connected accounts first, before deploying attachments in the network account.
##### Connected Account Attachments
```yaml
components:
  terraform:
    tgw/attachment:
      vars:
        transit_gateway_id: !terraform.output tgw/hub core-use1-network transit_gateway_id
        transit_gateway_route_table_id: !terraform.output tgw/hub core-use1-network transit_gateway_route_table_id
        create_transit_gateway_route_table_association: false
```
##### Network Account Attachment
```yaml
components:
  terraform:
    tgw/attachment:
      vars:
        transit_gateway_id: !terraform.output tgw/hub core-use1-network transit_gateway_id
        transit_gateway_route_table_id: !terraform.output tgw/hub core-use1-network transit_gateway_route_table_id
        # Route table associations are required so that route tables can propagate their routes to other route tables.
        # Set the following to true in the same account where the Transit Gateway and its route tables are deployed
        create_transit_gateway_route_table_association: true
        # Associate connected accounts with the Transit Gateway route table
        additional_associations:
          - attachment_id: !terraform.output tgw/attachment core-use1-auto transit_gateway_vpc_attachment_id
            route_table_id: !terraform.output tgw/hub transit_gateway_route_table_id
          - attachment_id: !terraform.output tgw/attachment plat-use1-dev transit_gateway_vpc_attachment_id
            route_table_id: !terraform.output tgw/hub transit_gateway_route_table_id
```
#### 3. Configure VPC Routes
Configure routes in all connected VPCs.
```yaml
components:
  terraform:
    vpc/routes/private:
      metadata:
        component: vpc-routes
      vars:
        route_table_ids: !terraform.output vpc private_route_table_ids
        routes:
          # Route to network account
          - destination:
              cidr_block: !terraform.output vpc core-use1-network vpc_cidr
            target:
              type: transit_gateway_id
              value: !terraform.output tgw/hub core-use1-network transit_gateway_id
          # Route to core-auto account, if necessary
          - destination:
              cidr_block: !terraform.output vpc core-use1-auto vpc_cidr
            target:
              type: transit_gateway_id
              value: !terraform.output tgw/hub core-use1-network transit_gateway_id
```
Configure routes in the Network Account VPCs.
```yaml
components:
  terraform:
    vpc/routes/private:
      vars:
        route_table_ids: !terraform.output vpc private_route_table_ids
        routes:
          # Routes to connected accounts
          - destination:
              cidr_block: !terraform.output vpc core-use1-auto vpc_cidr
            target:
              type: transit_gateway_id
              value: !terraform.output tgw/hub transit_gateway_id
          - destination:
              cidr_block: !terraform.output vpc plat-use1-dev vpc_cidr
            target:
              type: transit_gateway_id
              value: !terraform.output tgw/hub transit_gateway_id
```
#### 4. Deploy Transit Gateway Route Table Routes
Deploy the `tgw/routes` component in the network account to create route tables and routes.
```yaml
components:
  terraform:
    tgw/routes:
      vars:
        transit_gateway_route_table_id: !terraform.output tgw/hub transit_gateway_route_table_id
        # Use propagated routes to route through VPC attachments
        propagated_routes:
          # Route to this account
          - attachment_id: !terraform.output tgw/attachment core-use1-network transit_gateway_attachment_id
          # Route to any connected account
          - attachment_id: !terraform.output tgw/attachment core-use1-auto transit_gateway_attachment_id
          - attachment_id: !terraform.output tgw/attachment plat-use1-dev transit_gateway_attachment_id
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`account_map` (`any`) optional
Account map to use when account_map_enabled is false. Expected to contain at least 'full_account_map' with account name to ID mappings.
**Default value:** `{ }`
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_enabled` (`bool`) optional
Set to true to use the account-map component for account lookups. Set to false to use the static account_map variable.
**Default value:** `true`
The name of the environment where `account_map` is provisioned
**Default value:** `"gbl"`
`account_map_stage_name` (`string`) optional
The name of the stage where `account_map` is provisioned
**Default value:** `"root"`
`account_map_tenant_name` (`string`) optional
The name of the tenant where `account_map` is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
`allow_external_principals` (`bool`) optional
Set true to allow the TGW to be RAM shared with external principals specified in ram_principals
**Default value:** `false`
`connections` optional
A list of objects defining each TGW connection.
By default, each connection will look for only the default `vpc` component.
**Type:**
```hcl
list(object({
  account = object({
    stage       = string
    environment = optional(string, "")
    tenant      = optional(string, "")
  })
  vpc_component_names = optional(list(string), ["vpc"])
  eks_component_names = optional(list(string), [])
}))
```
**Default value:** `[ ]`
`expose_eks_sg` (`bool`) optional
Set true to allow EKS clusters to accept traffic from source accounts
**Default value:** `true`
`ram_principals` (`list(string)`) optional
A list of AWS account IDs to share the TGW with outside the organization
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
  "additional_tag_map": {},
  "attributes": [],
  "delimiter": null,
  "descriptor_formats": {},
  "enabled": true,
  "environment": null,
  "id_length_limit": null,
  "label_key_case": null,
  "label_order": [],
  "label_value_case": null,
  "labels_as_tags": [
    "unset"
  ],
  "name": null,
  "namespace": null,
  "regex_replace_chars": null,
  "stage": null,
  "tags": {},
  "tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`eks`
Accounts with EKS clusters and their cluster information
`tgw_config`
Transit Gateway config
`transit_gateway_arn`
Transit Gateway ARN
`transit_gateway_id`
Transit Gateway ID
`transit_gateway_route_table_id`
Transit Gateway route table ID
`vpcs`
Accounts with VPCs and their VPC information
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`eks` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`tgw_hub` | 0.13.0 | [`cloudposse/transit-gateway/aws`](https://registry.terraform.io/modules/cloudposse/transit-gateway/aws/0.13.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
---
## hub-connector
This component is responsible for provisioning an
[AWS Transit Gateway Peering Connection](https://aws.amazon.com/transit-gateway) to connect TGWs from different accounts
and/or regions.
Transit Gateway does not support sharing the Transit Gateway hub across regions. You must deploy a Transit Gateway hub
for each region and connect the alternate hub to the primary hub.
## Usage
**Stack Level**: Regional
This component is deployed to each alternate region with `tgw/hub`.
For example, if your primary region is `us-east-1` and your alternate region is `us-west-2`, deploy another `tgw/hub` in
`us-west-2` and peer the two with `tgw/cross-region-hub-connector` with the following stack config, imported into
`us-west-2`:
```yaml
import:
  - catalog/tgw/hub

components:
  terraform:
    # Cross region TGW requires additional hub in the alternate region
    tgw/hub:
      vars:
        # These are all connections available for spokes in this region
        # Defaults environment to this region
        connections:
          # Hub for this region is always required
          - account:
              tenant: core
              stage: network
          # VPN source
          - account:
              tenant: core
              stage: network
              environment: use1
          # Github Runners
          - account:
              tenant: core
              stage: auto
              environment: use1
            eks_component_names:
              - eks/cluster
          # All stacks where a spoke will be deployed
          - account:
              tenant: plat
              stage: dev
            eks_component_names: [] # Add clusters here once deployed

    # This alternate hub needs to be connected to the primary region's hub
    tgw/cross-region-hub-connector:
      vars:
        enabled: true
        primary_tgw_hub_region: us-east-1
```
## Variables
### Required Variables
`primary_tgw_hub_region` (`string`) required
The name of the AWS region where the primary Transit Gateway hub is deployed. This value is used with `var.env_naming_convention` to determine the primary Transit Gateway hub's environment name.
`region` (`string`) required
AWS Region
### Optional Variables
`account_map` (`any`) optional
Account map to use when account_map_enabled is false. Expected to contain at least 'full_account_map' with account name to ID mappings.
**Default value:** `{ }`
`account_map_component_name` (`string`) optional
The name of the account-map component
**Default value:** `"account-map"`
`account_map_enabled` (`bool`) optional
Set to true to use the account-map component for account lookups. Set to false to use the static account_map variable.
**Default value:** `true`
The name of the environment where `account_map` is provisioned
**Default value:** `"gbl"`
`account_map_stage_name` (`string`) optional
The name of the stage where `account_map` is provisioned
**Default value:** `"root"`
`account_map_tenant_name` (`string`) optional
The name of the tenant where `account_map` is provisioned
**Default value:** `"core"`
`env_naming_convention` (`string`) optional
The cloudposse/utils naming convention used to translate environment name to AWS region name. Options are `to_short` and `to_fixed`
**Default value:** `"to_short"`
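As a rough illustration of how the two conventions translate region names (abbreviations drawn from the `cloudposse/terraform-aws-utils` lookup maps; verify against that module for your regions):

```yaml
# to_short: us-east-1 -> use1, us-west-2 -> usw2
# to_fixed: us-east-1 -> ue1,  us-west-2 -> uw2
tgw/cross-region-hub-connector:
  vars:
    primary_tgw_hub_region: us-east-1
    env_naming_convention: to_short # primary hub environment resolves to "use1"
```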
`primary_tgw_hub_environment` (`string`) optional
The name of the environment where the primary Transit Gateway hub is deployed. Defaults to `module.this.environment`
**Default value:** `""`
`primary_tgw_hub_stage` (`string`) optional
The name of the stage where the primary Transit Gateway hub is deployed. Defaults to `module.this.stage`
**Default value:** `""`
`primary_tgw_hub_tenant` (`string`) optional
The name of the tenant where the primary Transit Gateway hub is deployed. Only used if tenants are deployed and defaults to `module.this.tenant`
**Default value:** `""`
`tgw_hub_component_name` (`string`) optional
The component name of the tgw hub in this region
**Default value:** `"tgw/hub"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`aws_ec2_transit_gateway_peering_attachment_id`
Transit Gateway Peering Attachment ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Providers
- `aws`, version: `>= 4.1, < 6.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`account_map` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`tgw_hub_primary_region` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`tgw_hub_this_region` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ec2_transit_gateway_peering_attachment.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_peering_attachment) (resource)
- [`aws_ec2_transit_gateway_peering_attachment_accepter.primary_region`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_peering_attachment_accepter) (resource)
- [`aws_ec2_transit_gateway_route_table_association.primary_region`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_route_table_association) (resource)
- [`aws_ec2_transit_gateway_route_table_association.this_region`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_route_table_association) (resource)
## Data Sources
The following data sources are used by this module:
---
## routes
Manages AWS Transit Gateway (TGW) route tables, including static routes and
route propagation from VPC attachments. Enables controlled routing between
VPCs connected to a TGW by configuring TGW route table associations,
propagations, and explicit routes as needed.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
tgw/routes:
vars:
transit_gateway_route_table_id: "tgw-rtb-0123456789abcdef0"
# Static routes for specific CIDR blocks
static_routes:
- cidr_block: "10.100.0.0/16"
attachment_id: "tgw-attach-0123456789abcdef0"
# Route propagation from VPC attachments
propagated_routes:
- attachment_id: "tgw-attach-0123456789abcdef1"
```
The same configuration using terraform outputs:
```yaml
components:
terraform:
tgw/routes:
vars:
transit_gateway_route_table_id: !terraform.output tgw/hub transit_gateway_route_table_id
# Static routes for specific CIDR blocks
static_routes:
- cidr_block: !terraform.output vpc edge-vpc vpc_cidr
attachment_id: !terraform.output tgw/attachment edge-vpc transit_gateway_attachment_id
# Route propagation from VPC attachments
propagated_routes:
- attachment_id: !terraform.output tgw/attachment app-vpc transit_gateway_attachment_id
```
### Multiple Environment Example
For environments with multiple routing requirements, here's an example using physical IDs:
```yaml
components:
terraform:
tgw/routes/nonprod:
metadata:
component: tgw/routes
vars:
transit_gateway_route_table_id: "tgw-rtb-0123456789abcdef1"
# Static routes for specific destinations
static_routes:
- cidr_block: "10.20.0.0/16"
attachment_id: "tgw-attach-0123456789abcdef2"
- cidr_block: "10.30.0.0/16"
attachment_id: "tgw-attach-0123456789abcdef3"
# Enable route propagation from specific VPCs
propagated_routes:
- attachment_id: "tgw-attach-0123456789abcdef4"
- attachment_id: "tgw-attach-0123456789abcdef5"
```
The same configuration using terraform outputs:
```yaml
components:
terraform:
tgw/routes/nonprod:
metadata:
component: tgw/routes
vars:
transit_gateway_route_table_id: !terraform.output tgw/hub transit-use1-nonprod transit_gateway_route_table_id
# Static routes for specific destinations
static_routes:
- cidr_block: !terraform.output vpc dev-use1-edge vpc_cidr
attachment_id: !terraform.output tgw/attachment dev-use1-edge transit_gateway_attachment_id
- cidr_block: !terraform.output vpc staging-use1-edge vpc_cidr
attachment_id: !terraform.output tgw/attachment staging-use1-edge transit_gateway_attachment_id
# Enable route propagation from specific VPCs
propagated_routes:
- attachment_id: !terraform.output tgw/attachment dev-use1-network transit_gateway_attachment_id
- attachment_id: !terraform.output tgw/attachment staging-use1-network transit_gateway_attachment_id
```
## Variables
### Required Variables
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`transit_gateway_route_table_associations`
Transit Gateway route table associations
`transit_gateway_route_table_propagations`
Transit Gateway route table propagations
`transit_gateway_routes`
Transit Gateway static routes
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Providers
- `aws`, version: `>= 4.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ec2_transit_gateway_route.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_route) (resource)
- [`aws_ec2_transit_gateway_route_table_association.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_route_table_association) (resource)
- [`aws_ec2_transit_gateway_route_table_propagation.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_route_table_propagation) (resource)
## Data Sources
The following data sources are used by this module:
---
## spoke
This component is responsible for provisioning [AWS Transit Gateway](https://aws.amazon.com/transit-gateway) attachments
to connect VPCs in a `spoke` account to different accounts through a central `hub`.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to configure and use this component:
stacks/catalog/tgw/spoke.yaml
```yaml
components:
terraform:
tgw/spoke-defaults:
metadata:
type: abstract
component: tgw/spoke
vars:
enabled: true
name: tgw-spoke
tags:
Team: sre
Service: tgw-spoke
expose_eks_sg: false
tgw_hub_tenant_name: core
tgw_hub_stage_name: network
tgw/spoke:
metadata:
inherits:
- tgw/spoke-defaults
vars:
# This is what THIS spoke is allowed to connect to.
# since this is deployed to each plat account (dev->prod),
# we allow connections to network and auto.
connections:
- account:
tenant: core
stage: network
# Set this value if the vpc component has a different name in this account
vpc_component_names:
- vpc-dev
- account:
tenant: core
stage: auto
```
stacks/ue2/dev.yaml
```yaml
import:
- catalog/tgw/spoke
components:
terraform:
tgw/spoke:
vars:
# use when there is not an EKS cluster in the stack
expose_eks_sg: false
# override default connections
connections:
- account:
tenant: core
stage: network
vpc_component_names:
- vpc-dev
- account:
tenant: core
stage: auto
- account:
tenant: plat
stage: dev
eks_component_names:
- eks/cluster
- account:
tenant: plat
stage: qa
eks_component_names:
- eks/cluster
```
To provision the attachments for a spoke account:
```sh
atmos terraform plan tgw/spoke -s <stack>
atmos terraform apply tgw/spoke -s <stack>
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`connections` optional
A list of objects defining each TGW connection.
By default, each connection looks for only the default `vpc` component.
**Type:**
```hcl
list(object({
account = object({
stage = string
environment = optional(string, "")
tenant = optional(string, "")
})
vpc_component_names = optional(list(string), ["vpc"])
eks_component_names = optional(list(string), [])
}))
```
**Default value:** `[ ]`
`cross_region_hub_connector_components` optional
A map of cross-region hub connector components that provide this spoke with the appropriate Transit Gateway attachment IDs.
- The key should be the environment that the remote VPC is located in.
- The component is the name of the component in the remote region (e.g. `tgw/cross-region-hub-connector`)
- The environment is the region that the cross-region-hub-connector is deployed in.
For example, the following would configure a component called `tgw/cross-region-hub-connector/use1` that is deployed in the `use1` environment. If use2 is the primary region, the following would be its configuration:
```yaml
use1:
  component: "tgw/cross-region-hub-connector"
  environment: "use1" # the remote region
```
and in the alternate region, the following would be its configuration:
```yaml
use2:
  component: "tgw/cross-region-hub-connector"
  environment: "use1" # our own region
```
**Default value:** `{ }`
`default_route_enabled` (`bool`) optional
Enable default routing via the transit gateway. This also requires the NAT gateway and NAT instance to be disabled in the `vpc` component. Default is disabled.
**Default value:** `false`
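To make the interaction concrete, here is a sketch of a stack configuration that enables the default route. The `vpc` variable names (`nat_gateway_enabled`, `nat_instance_enabled`) are assumptions for illustration, based on the requirement above that NAT be disabled in the `vpc` component:
```yaml
components:
  terraform:
    tgw/spoke:
      vars:
        # Route all non-local traffic through the Transit Gateway
        default_route_enabled: true
    vpc:
      vars:
        # NAT must be disabled when the TGW provides the default route
        nat_gateway_enabled: false
        nat_instance_enabled: false
```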
`own_eks_component_names` (`list(string)`) optional
The names of the EKS components in the owning account.
**Default value:** `[ ]`
`own_vpc_component_name` (`string`) optional
The name of the vpc component in the owning account. Defaults to "vpc"
**Default value:** `"vpc"`
`peered_region` (`bool`) optional
Set `true` if this region is not the primary region
**Default value:** `false`
`static_routes` optional
A list of static routes to add to the transit gateway, pointing at this VPC as a destination.
**Type:**
```hcl
set(object({
blackhole = bool
destination_cidr_block = string
}))
```
**Default value:** `[ ]`
`static_tgw_routes` (`list(string)`) optional
A list of static routes to add to the local routing table with the transit gateway as a destination.
**Default value:** `[ ]`
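Taken together, the two route variables can be sketched as follows. This is an illustrative snippet, not taken from the source; the CIDR blocks are placeholders:
```yaml
components:
  terraform:
    tgw/spoke:
      vars:
        # Routes added to the TGW route table, pointing at this VPC
        static_routes:
          - blackhole: false
            destination_cidr_block: "10.100.0.0/16"
          # Drop traffic destined for an unused range
          - blackhole: true
            destination_cidr_block: "10.200.0.0/16"
        # Routes added to the local VPC route tables, via the TGW
        static_tgw_routes:
          - "10.100.0.0/16"
```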
`tgw_hub_component_name` (`string`) optional
The name of the transit-gateway component
**Default value:** `"tgw/hub"`
`tgw_hub_stage_name` (`string`) optional
The name of the stage where `tgw/hub` is provisioned
**Default value:** `"network"`
`tgw_hub_tenant_name` (`string`) optional
The name of the tenant where `tgw/hub` is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Providers
- `aws`, version: `>= 4.1, < 6.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`cross_region_hub_connector` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`iam_roles` | latest | `../../account-map/modules/iam-roles` | n/a
`tgw_hub` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`tgw_hub_role` | latest | `../../account-map/modules/iam-roles` | n/a
`tgw_hub_routes` | 0.13.0 | [`cloudposse/transit-gateway/aws`](https://registry.terraform.io/modules/cloudposse/transit-gateway/aws/0.13.0) | n/a
`tgw_spoke_vpc_attachment` | latest | `./modules/standard_vpc_attachment` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_route.back_route`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route) (resource)
- [`aws_route.default_route`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route) (resource)
## Data Sources
The following data sources are used by this module:
---
## standard_vpc_attachment
# Standard VPC Attachment
## Variables
### Required Variables
`tgw_config` required
Object to pass common data from root module to this submodule. See root module for details
**Type:**
```hcl
object({
existing_transit_gateway_id = string
existing_transit_gateway_route_table_id = string
vpcs = any
eks = any
})
```
### Optional Variables
`connections` optional
A list of objects defining each TGW connection.
By default, each connection looks for only the default `vpc` component.
**Type:**
```hcl
list(object({
account = object({
stage = string
environment = optional(string, "")
tenant = optional(string, "")
})
vpc_component_names = optional(list(string), ["vpc"])
eks_component_names = optional(list(string), [])
}))
```
**Default value:** `[ ]`
`expose_eks_sg` (`bool`) optional
Set true to allow EKS clusters to accept traffic from source accounts
**Default value:** `true`
`network_account_stage_name` (`string`) optional
The name of the stage designated as the network hub
**Default value:** `"network"`
`own_eks_component_names` (`list(string)`) optional
The names of the EKS components in the owning account.
**Default value:** `[ ]`
`own_vpc_component_name` (`string`) optional
The name of the vpc component in the owning account. Defaults to "vpc"
**Default value:** `"vpc"`
`owning_account` (`string`) optional
The name of the account that owns the VPC being attached
**Default value:** `null`
`peered_region` (`bool`) optional
Set `true` if this region is not the primary region
**Default value:** `false`
`static_routes` optional
A list of static routes.
**Type:**
```hcl
set(object({
blackhole = bool
destination_cidr_block = string
}))
```
**Default value:** `[ ]`
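For illustration, `static_routes` entries could be configured in stack YAML as follows (the component name and CIDRs are placeholders):

```yaml
components:
  terraform:
    tgw/spoke:
      vars:
        static_routes:
          # Route this CIDR through the transit gateway
          - blackhole: false
            destination_cidr_block: "10.100.0.0/16"
          # Drop (blackhole) traffic destined for this CIDR
          - blackhole: true
            destination_cidr_block: "192.168.0.0/16"
```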
`static_tgw_routes` (`list(string)`) optional
A list of static routes to add to the local routing table with the transit gateway as a destination.
**Default value:** `[ ]`
`tgw_connector_config` (`map(any)`) optional
Map of output from all `tgw/cross-region-hub-connector` components. See root module for details
**Default value:** `{ }`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
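As a hypothetical example (the descriptor name and format are illustrative, not defaults), a `descriptor_formats` entry that emits a `stack` descriptor could look like:

```hcl
descriptor_formats = {
  stack = {
    format = "%v-%v-%v"
    labels = ["tenant", "environment", "stage"]
  }
}

# With tenant = "core", environment = "ue2", and stage = "prod",
# the module's `descriptors` output would contain:
#   { stack = "core-ue2-prod" }
```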
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`tg_config`
Transit Gateway configuration formatted for handling
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.0, < 6.0.0`
### Providers
- `aws`, version: `>= 4.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`standard_vpc_attachment` | 0.13.0 | [`cloudposse/transit-gateway/aws`](https://registry.terraform.io/modules/cloudposse/transit-gateway/aws/0.13.0) | Create a TGW attachment from this account's VPC to the TGW Hub. This includes a merged list of all CIDRs from allowed VPCs in connected accounts.
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ec2_transit_gateway_route.peering_connection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_route) (resource)
- [`aws_route.peering_connection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route) (resource)
- [`aws_security_group_rule.ingress_cidr_blocks`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) (resource)
## Data Sources
The following data sources are used by this module:
None
---
## vpc
This component is responsible for provisioning a VPC and corresponding Subnets with advanced configuration capabilities.
**Key Features:**
- Independent control over public and private subnet counts per Availability Zone
- Flexible NAT Gateway placement (index-based or name-based)
- Named subnets with different naming schemes for public vs private
- Cost optimization through strategic NAT Gateway placement
- VPC Flow Logs support for auditing and compliance
- VPC Endpoints for AWS services (S3, DynamoDB, and interface endpoints)
- AWS Shield Advanced protection for NAT Gateway EIPs (optional)
**What's New in v3.1.0:**
- Uses `terraform-aws-dynamic-subnets` v3.1.0 with enhanced subnet configuration
- Separate public/private subnet counts and names per AZ
- Precise NAT Gateway placement control for cost optimization
- NAT Gateway IDs and private IPs exposed in subnet stats outputs
- Requires AWS Provider v5.0+
## Usage
**Stack Level**: Regional
## Basic Configuration
Here's a basic example using legacy configuration (fully backward compatible):
```yaml
# catalog/vpc/defaults
components:
terraform:
vpc/defaults:
metadata:
type: abstract
component: vpc
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: vpc
availability_zones:
- "a"
- "b"
- "c"
nat_gateway_enabled: true
nat_instance_enabled: false
max_subnet_count: 3
vpc_flow_logs_enabled: true
vpc_flow_logs_bucket_environment_name:
vpc_flow_logs_bucket_stage_name: audit
vpc_flow_logs_traffic_type: "ALL"
subnet_type_tag_key: "example.net/subnet/type"
# Legacy subnet configuration (still supported)
subnets_per_az_count: 1
subnets_per_az_names: ["common"]
```
```yaml
# stacks/ue2-dev.yaml
import:
- catalog/vpc
components:
terraform:
vpc:
metadata:
component: vpc
inherits:
- vpc/defaults
vars:
ipv4_primary_cidr_block: "10.111.0.0/18"
```
## Cost-Optimized NAT Configuration
Reduce NAT Gateway costs by placing NAT Gateways in only one public subnet per AZ:
```yaml
components:
terraform:
vpc:
vars:
# Create 2 public subnets per AZ
public_subnets_per_az_count: 2
public_subnets_per_az_names: ["loadbalancer", "web"]
# Create 3 private subnets per AZ
private_subnets_per_az_count: 3
private_subnets_per_az_names: ["app", "database", "cache"]
# Place NAT Gateway ONLY in the first public subnet (index 0)
# This saves ~50% on NAT Gateway costs compared to NAT in all public subnets
nat_gateway_public_subnet_indices: [0]
```
**Cost Savings Example (3 AZs, us-east-1):**
- Without optimization: 6 NAT Gateways (2 per AZ) = ~$270/month
- With optimization: 3 NAT Gateways (1 per AZ) = ~$135/month
- **Monthly Savings: ~$135 (~$1,620/year)**
**Important**: You can use EITHER `nat_gateway_public_subnet_indices` OR `nat_gateway_public_subnet_names`, but not both. The plan will fail if both are specified.
## Named NAT Gateway Placement
Place NAT Gateways by subnet name instead of index:
```yaml
components:
terraform:
vpc:
vars:
# Must specify both count and names when using named subnets
public_subnets_per_az_count: 2
public_subnets_per_az_names: ["loadbalancer", "web"]
private_subnets_per_az_count: 2
private_subnets_per_az_names: ["app", "database"]
# Place NAT Gateway only in "loadbalancer" subnet
nat_gateway_public_subnet_names: ["loadbalancer"]
```
**Important**: When using `public_subnets_per_az_names` or `private_subnets_per_az_names`, you must also specify the corresponding count variables (`public_subnets_per_az_count` / `private_subnets_per_az_count`).
## High-Availability NAT Configuration
For production environments requiring redundancy:
```yaml
components:
terraform:
vpc:
vars:
public_subnets_per_az_count: 2
nat_gateway_public_subnet_indices: [0, 1] # NAT in both public subnets per AZ
```
## Separate Public/Private Subnet Architecture
Different subnet counts and names for public vs private:
```yaml
components:
terraform:
vpc:
vars:
# 2 public subnets per AZ for load balancers and public services
public_subnets_per_az_count: 2
public_subnets_per_az_names: ["alb", "nat"]
# 4 private subnets per AZ for different application tiers
private_subnets_per_az_count: 4
private_subnets_per_az_names: ["web", "app", "data", "cache"]
# NAT Gateway in "nat" subnet
nat_gateway_public_subnet_names: ["nat"]
```
## VPC Endpoints Configuration
Add VPC Endpoints for AWS services to reduce data transfer costs and improve security:
```yaml
components:
terraform:
vpc:
vars:
# Gateway endpoints (no hourly charges)
gateway_vpc_endpoints:
- "s3"
- "dynamodb"
# Interface endpoints (hourly charges apply)
interface_vpc_endpoints:
- "ec2"
- "ecr.api"
- "ecr.dkr"
- "logs"
- "secretsmanager"
```
## Complete Production Example
```yaml
components:
terraform:
vpc:
vars:
enabled: true
name: vpc
ipv4_primary_cidr_block: "10.0.0.0/16"
availability_zones:
- "a"
- "b"
- "c"
# Public subnets for ALB and NAT
public_subnets_per_az_count: 2
public_subnets_per_az_names: ["loadbalancer", "nat"]
# Private subnets for different tiers
private_subnets_per_az_count: 3
private_subnets_per_az_names: ["app", "database", "cache"]
# Cost-optimized NAT placement
nat_gateway_enabled: true
nat_gateway_public_subnet_names: ["nat"]
# VPC Flow Logs
vpc_flow_logs_enabled: true
vpc_flow_logs_bucket_environment_name: mgmt
vpc_flow_logs_bucket_stage_name: audit
vpc_flow_logs_traffic_type: "ALL"
# VPC Endpoints
gateway_vpc_endpoints:
- "s3"
- "dynamodb"
interface_vpc_endpoints:
- "ecr.api"
- "ecr.dkr"
- "logs"
subnet_type_tag_key: "example.net/subnet/type"
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
`subnet_type_tag_key` (`string`) required
Key for subnet type tag to provide information about the type of subnets, e.g. `cpco/subnet/type=private` or `cpco/subnet/type=public`
### Optional Variables
`assign_generated_ipv6_cidr_block` (`bool`) optional
When `true`, assign AWS generated IPv6 CIDR block to the VPC. Conflicts with `ipv6_ipam_pool_id`.
**Default value:** `false`
`availability_zone_ids` (`list(string)`) optional
List of Availability Zones IDs where subnets will be created. Overrides `availability_zones`.
Can be the full name, e.g. `use1-az1`, or just the part after the AZ ID region code, e.g. `-az1`,
to allow reusable values across regions. Consider contention for resources and spot pricing in each AZ when selecting.
Useful in some regions when using only some AZs and you want to use the same ones across multiple accounts.
**Default value:** `[ ]`
`availability_zones` (`list(string)`) optional
List of Availability Zones (AZs) where subnets will be created. Ignored when `availability_zone_ids` is set.
Can be the full name, e.g. `us-east-1a`, or just the part after the region, e.g. `a` to allow reusable values across regions.
The order of zones in the list ***must be stable*** or else Terraform will continually make changes.
If no AZs are specified, then `max_subnet_count` AZs will be selected in alphabetical order.
If `max_subnet_count > 0` and `length(var.availability_zones) > max_subnet_count`, the list
will be truncated. We recommend setting `availability_zones` and `max_subnet_count` explicitly as constant
(not computed) values for predictability, consistency, and stability.
**Default value:** `[ ]`
`gateway_vpc_endpoints` (`set(string)`) optional
A list of Gateway VPC Endpoints to provision into the VPC. Only valid values are "dynamodb" and "s3".
**Default value:** `[ ]`
`ipv4_additional_cidr_block_associations` optional
IPv4 CIDR blocks to assign to the VPC.
`ipv4_cidr_block` can be set explicitly, or set to `null` with the CIDR block derived from `ipv4_ipam_pool_id` using `ipv4_netmask_length`.
Map keys must be known at `plan` time, and are only used to track changes.
**Type:**
```hcl
map(object({
ipv4_cidr_block = string
ipv4_ipam_pool_id = string
ipv4_netmask_length = number
}))
```
**Default value:** `{ }`
`ipv4_cidr_block_association_timeouts` optional
Timeouts (in `go` duration format) for creating and destroying IPv4 CIDR block associations
**Type:**
```hcl
object({
create = string
delete = string
})
```
**Default value:** `null`
`ipv4_cidrs` optional
Lists of CIDRs to assign to subnets. Order of CIDRs in the lists must not change over time.
Lists may contain more CIDRs than needed.
**Type:**
```hcl
list(object({
private = list(string)
public = list(string)
}))
```
**Default value:** `[ ]`
`ipv4_primary_cidr_block` (`string`) optional
The primary IPv4 CIDR block for the VPC.
Either `ipv4_primary_cidr_block` or `ipv4_primary_cidr_block_association` must be set, but not both.
**Default value:** `null`
`ipv4_primary_cidr_block_association` optional
Configuration of the VPC's primary IPv4 CIDR block via IPAM. Conflicts with `ipv4_primary_cidr_block`.
One of `ipv4_primary_cidr_block` or `ipv4_primary_cidr_block_association` must be set.
Additional CIDR blocks can be set via `ipv4_additional_cidr_block_associations`.
**Type:**
```hcl
object({
ipv4_ipam_pool_id = string
ipv4_netmask_length = number
})
```
**Default value:** `null`
`map_public_ip_on_launch` (`bool`) optional
Instances launched into a public subnet should be assigned a public IP address
**Default value:** `true`
`max_nats` (`number`) optional
Upper limit on number of NAT Gateways/Instances to create.
Set to 1 or 2 for cost savings at the expense of availability.
Default creates a NAT Gateway in each public subnet.
**Default value:** `null`
`max_subnet_count` (`number`) optional
Sets the maximum amount of subnets to deploy. 0 will deploy a subnet for every provided availability zone (in `region_availability_zones` variable) within the region
**Default value:** `0`
`nat_eip_aws_shield_protection_enabled` (`bool`) optional
Enable or disable AWS Shield Advanced protection for NAT EIPs. If set to `true`, a subscription to AWS Shield Advanced must exist in this account.
**Default value:** `false`
`nat_gateway_enabled` (`bool`) optional
Flag to enable/disable NAT gateways
**Default value:** `true`
`nat_gateway_public_subnet_indices` (`list(number)`) optional
Indices (0-based) of public subnets where NAT Gateways should be placed.
Use this for index-based NAT Gateway placement (e.g., [0, 1] to place NATs in first 2 public subnets per AZ).
Conflicts with `nat_gateway_public_subnet_names`.
If both are null, NAT Gateways are placed in all public subnets by default.
**Default value:** `null`
`nat_gateway_public_subnet_names` (`list(string)`) optional
Names of public subnets where NAT Gateways should be placed.
Use this for name-based NAT Gateway placement (e.g., ["loadbalancer"] to place NATs only in "loadbalancer" subnets).
Conflicts with `nat_gateway_public_subnet_indices`.
If both are null, NAT Gateways are placed in all public subnets by default.
**Default value:** `null`
`nat_instance_ami_id` (`list(string)`) optional
A list optionally containing the ID of the AMI to use for the NAT instance.
If the list is empty (the default), the latest official AWS NAT instance AMI
will be used. NOTE: The Official NAT instance AMI is being phased out and
does not support NAT64. Use of a NAT gateway is recommended instead.
**Default value:** `[ ]`
`nat_instance_enabled` (`bool`) optional
Flag to enable/disable NAT instances
**Default value:** `false`
`private_subnets_per_az_count` (`number`) optional
The number of private subnets to provision per Availability Zone.
If null, defaults to the value of `subnets_per_az_count` for backward compatibility.
Use this to create different numbers of private and public subnets per AZ.
**Default value:** `null`
`private_subnets_per_az_names` (`list(string)`) optional
The names of private subnets to provision per Availability Zone.
If null, defaults to the value of `subnets_per_az_names` for backward compatibility.
Use this to create different named private subnets than public subnets.
**Default value:** `null`
`public_subnets_enabled` (`bool`) optional
If false, do not create public subnets.
Since NAT gateways and instances must be created in public subnets, these will also not be created when `false`.
**Default value:** `true`
`public_subnets_per_az_count` (`number`) optional
The number of public subnets to provision per Availability Zone.
If null, defaults to the value of `subnets_per_az_count` for backward compatibility.
Use this to create different numbers of public and private subnets per AZ.
**Default value:** `null`
`public_subnets_per_az_names` (`list(string)`) optional
The names of public subnets to provision per Availability Zone.
If null, defaults to the value of `subnets_per_az_names` for backward compatibility.
Use this to create different named public subnets than private subnets.
**Default value:** `null`
`subnets_per_az_count` (`number`) optional
The number of subnet of each type (public or private) to provision per Availability Zone.
**Default value:** `1`
`subnets_per_az_names` (`list(string)`) optional
The subnet names of each type (public or private) to provision per Availability Zone.
This variable is optional.
If a list of names is provided, the list items will be used as keys in the outputs `named_private_subnets_map`, `named_public_subnets_map`,
`named_private_route_table_ids_map` and `named_public_route_table_ids_map`
**Default value:**
```hcl
[
"common"
]
```
`vpc_flow_logs_bucket_tenant_name` (`string`) optional
The name of the tenant where the VPC Flow Logs bucket is provisioned.
If the `tenant` label is not used, leave this as `null`.
**Default value:** `null`
`vpc_flow_logs_log_destination_type` (`string`) optional
The type of the logging destination. Valid values: `cloud-watch-logs`, `s3`
**Default value:** `"s3"`
`vpc_flow_logs_traffic_type` (`string`) optional
The type of traffic to capture. Valid values: `ACCEPT`, `REJECT`, `ALL`
**Default value:** `"ALL"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`availability_zone_ids`
List of Availability Zones IDs where subnets were created, when available
`availability_zones`
List of Availability Zones where subnets were created
`az_private_route_table_ids_map`
Map of AZ names to list of private route table IDs in the AZs
`az_private_subnets_map`
Map of AZ names to list of private subnet IDs in the AZs
`az_public_route_table_ids_map`
Map of AZ names to list of public route table IDs in the AZs
`az_public_subnets_map`
Map of AZ names to list of public subnet IDs in the AZs
`flow_log_destination`
Destination bucket for VPC flow logs
`flow_log_id`
ID of the VPC flow log
`gateway_vpc_endpoints`
Map of Gateway VPC Endpoints in this VPC, keyed by service (e.g. "s3").
`igw_id`
The ID of the Internet Gateway
`interface_vpc_endpoints`
Map of Interface VPC Endpoints in this VPC.
`max_subnet_count`
Maximum allowed number of subnets before all subnet CIDRs need to be recomputed
`named_private_subnets_stats_map`
Map of subnet names (specified in `private_subnets_per_az_names` or `subnets_per_az_names` variable) to lists of objects with each object having four items: AZ, private subnet ID, private route table ID, NAT Gateway ID (the NAT Gateway that this private subnet routes to for egress)
`named_public_subnets_stats_map`
Map of subnet names (specified in `public_subnets_per_az_names` or `subnets_per_az_names` variable) to lists of objects with each object having four items: AZ, public subnet ID, public route table ID, NAT Gateway ID (the NAT Gateway in this public subnet, if any)
`named_route_tables`
Map of route table IDs, keyed by subnets_per_az_names.
If subnets_per_az_names is not set, items are grouped by key 'common'
`named_subnets`
Map of subnets IDs, keyed by subnets_per_az_names.
If subnets_per_az_names is not set, items are grouped by key 'common'
`nat_eip_allocation_ids`
Elastic IP allocations in use by NAT
`nat_eip_protections`
List of AWS Shield Advanced Protections for NAT Elastic IPs.
`nat_gateway_ids`
NAT Gateway IDs
`nat_gateway_public_ips`
NAT Gateway public IPs
`nat_instance_ami_id`
ID of AMI used by NAT instance
`nat_instance_ids`
NAT Instance IDs
`nat_ips`
Elastic IP Addresses in use by NAT
`private_network_acl_id`
ID of the Network ACL created for private subnets
`private_route_table_ids`
Private subnet route table IDs
`private_subnet_arns`
Private subnet ARNs
`private_subnet_cidrs`
Private subnet CIDRs
`private_subnet_ids`
Private subnet IDs
`private_subnet_ipv6_cidrs`
Private subnet IPv6 CIDR blocks
`public_network_acl_id`
ID of the Network ACL created for public subnets
`public_route_table_ids`
Public subnet route table IDs
`public_subnet_arns`
Public subnet ARNs
`public_subnet_cidrs`
Public subnet CIDRs
`public_subnet_ids`
Public subnet IDs
`public_subnet_ipv6_cidrs`
Public subnet IPv6 CIDR blocks
`route_tables`
Route tables info map
`subnets`
Subnets info map
`vpc`
VPC info map
`vpc_cidr`
VPC CIDR
`vpc_default_network_acl_id`
The ID of the network ACL created by default on VPC creation
`vpc_default_security_group_id`
The ID of the security group created by default on VPC creation
`vpc_endpoint_dynamodb_id`
ID of the DynamoDB gateway endpoint
`vpc_endpoint_dynamodb_prefix_list_id`
Prefix list ID for DynamoDB gateway endpoint
`vpc_endpoint_interface_security_group_id`
Security group ID for interface VPC endpoints
`vpc_endpoint_s3_id`
ID of the S3 gateway endpoint
`vpc_endpoint_s3_prefix_list_id`
Prefix list ID for S3 gateway endpoint
`vpc_id`
VPC ID
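As an illustration, the named subnet stats outputs above can be consumed from another component via remote state. This is a sketch: the `subnet_id` field name inside each stats object and the `"database"` tier name are assumptions for the example, not confirmed by this reference.

```hcl
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.8.0"

  component = "vpc"
  context   = module.this.context
}

locals {
  # Collect the private subnet IDs for the hypothetical "database" tier
  # across all AZs from `named_private_subnets_stats_map`.
  database_subnet_ids = [
    for s in module.vpc.outputs.named_private_subnets_stats_map["database"] : s.subnet_id
  ]
}
```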
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 5.0.0`
- `null`, version: `>= 3.0`
### Providers
- `aws`, version: `>= 5.0.0`
- `null`, version: `>= 3.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`endpoint_security_groups` | 2.2.0 | [`cloudposse/security-group/aws`](https://registry.terraform.io/modules/cloudposse/security-group/aws/2.2.0) | We could create a security group per endpoint, but until we are ready to customize them by service, it is just a waste of resources. We use a single security group for all endpoints. Security groups can be updated without recreating the endpoint or interrupting service, so this is an easy change to make later.
`iam_roles` | latest | [`../account-map/modules/iam-roles`](https://registry.terraform.io/modules/../account-map/modules/iam-roles/) | n/a
`subnets` | 3.1.1 | [`cloudposse/dynamic-subnets/aws`](https://registry.terraform.io/modules/cloudposse/dynamic-subnets/aws/3.1.1) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`utils` | 1.4.0 | [`cloudposse/utils/aws`](https://registry.terraform.io/modules/cloudposse/utils/aws/1.4.0) | n/a
`vpc` | 3.0.0 | [`cloudposse/vpc/aws`](https://registry.terraform.io/modules/cloudposse/vpc/aws/3.0.0) | n/a
`vpc_endpoints` | 3.0.0 | [`cloudposse/vpc/aws//modules/vpc-endpoints`](https://registry.terraform.io/modules/cloudposse/vpc/aws/modules/vpc-endpoints/3.0.0) | n/a
`vpc_flow_logs_bucket` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_flow_log.default`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/flow_log) (resource)
- [`aws_shield_protection.nat_eip_shield_protection`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/shield_protection) (resource)
- [`null_resource.nat_placement_validation`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_caller_identity.current`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) (data source)
- [`aws_eip.eip`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eip) (data source)
---
## vpc-flow-logs-bucket
This component provisions an encrypted S3 bucket configured to receive VPC Flow Logs.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
**IMPORTANT**: This component expects the `aws_flow_log` resource to be created externally, typically by the `vpc` component.
```yaml
components:
terraform:
vpc-flow-logs-bucket:
vars:
name: "vpc-flow-logs"
noncurrent_version_expiration_days: 180
noncurrent_version_transition_days: 30
standard_transition_days: 60
glacier_transition_days: 180
expiration_days: 365
```
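Because the `aws_flow_log` resource is created by the `vpc` component, the bucket is typically consumed from that side. Here is a hedged companion sketch; the `vpc_flow_logs_*` variable names are assumptions based on the Cloud Posse `vpc` component and should be verified against the version you have vendored:

```yaml
components:
  terraform:
    vpc:
      vars:
        # Assumed variable names -- confirm against your vpc component
        vpc_flow_logs_enabled: true
        # Environment/stage where vpc-flow-logs-bucket is deployed
        vpc_flow_logs_bucket_environment_name: use1
        vpc_flow_logs_bucket_stage_name: audit
        vpc_flow_logs_traffic_type: ALL
```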
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`expiration_days` (`number`) optional
Number of days after which to expunge the objects
**Default value:** `90`
`force_destroy` (`bool`) optional
A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable.
**Default value:** `false`
`glacier_transition_days` (`number`) optional
Number of days after which to move the data to the glacier storage tier
**Default value:** `60`
`lifecycle_prefix` (`string`) optional
Prefix filter. Used to manage object lifecycle events
**Default value:** `""`
`lifecycle_rule_enabled` (`bool`) optional
Enable lifecycle events on this bucket
**Default value:** `true`
`lifecycle_tags` (`map(string)`) optional
Tags filter. Used to manage object lifecycle events
**Default value:** `{ }`
`noncurrent_version_transition_days` (`number`) optional
Specifies when noncurrent object versions transition to the lower-cost storage tier
**Default value:** `30`
`object_lock_configuration` optional
A configuration for S3 object locking. With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
**Type:**
```hcl
object({
mode = string # Valid values are GOVERNANCE and COMPLIANCE.
days = number
years = number
})
```
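As an illustration, a minimal sketch enabling compliance-mode object locking on this bucket (the retention value is illustrative; S3 Object Lock takes either `days` or `years`, so leave the other `null`):

```yaml
components:
  terraform:
    vpc-flow-logs-bucket:
      vars:
        object_lock_configuration:
          mode: COMPLIANCE # or GOVERNANCE
          days: 365        # illustrative; set either days or years
          years: null
```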
**Default value:** `null`
`standard_transition_days` (`number`) optional
Number of days to persist in the standard storage tier before moving to the infrequent access tier
**Default value:** `30`
`traffic_type` (`string`) optional
The type of traffic to capture. Valid values: `ACCEPT`, `REJECT`, `ALL`
**Default value:** `"ALL"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`vpc_flow_logs_bucket_arn`
VPC Flow Logs bucket ARN
`vpc_flow_logs_bucket_id`
VPC Flow Logs bucket ID
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.9.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`flow_logs_s3_bucket` | 1.3.1 | [`cloudposse/vpc-flow-logs-s3-bucket/aws`](https://registry.terraform.io/modules/cloudposse/vpc-flow-logs-s3-bucket/aws/1.3.1) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
---
## vpc-peering
This component is responsible for creating a peering connection between two VPCs existing in different AWS accounts.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
Default VPC peering settings for all accounts:
```yaml
# stacks/catalog/vpc-peering/defaults.yaml
components:
terraform:
vpc-peering/defaults:
settings:
spacelift:
workspace_enabled: true
metadata:
component: vpc-peering
type: abstract
vars:
enabled: true
requester_allow_remote_vpc_dns_resolution: true
accepter_allow_remote_vpc_dns_resolution: true
```
Use case: Peering v1 accounts to v2
```yaml
# stacks/catalog/vpc-peering/ue1-prod.yaml
import:
- catalog/vpc-peering/defaults
components:
terraform:
vpc-peering-use1:
metadata:
component: vpc-peering
inherits:
- vpc-peering/defaults
vars:
accepter_region: us-east-1
accepter_vpc_id: vpc-xyz
        accepter_aws_assume_role_arn: arn:aws:iam::<legacy_account_id>:role/acme-vpc-peering
```
Use case: Peering v2 accounts to v2
```yaml
vpc-peering/<stage>-vpc0:
metadata:
component: vpc-peering
inherits:
- vpc-peering/defaults
vars:
requester_vpc_component_name: vpc
accepter_region: us-east-1
    accepter_stage_name: <stage>
    accepter_vpc:
      tags:
        # Fill in with your own information
        Name: acme-<tenant>-<environment>-<stage>-<name>
```
## Legacy Account Configuration
The `vpc-peering` component peers the `dev`, `prod`, `sandbox` and `staging` VPCs to a VPC in the legacy account.
The `dev`, `prod`, `sandbox` and `staging` VPCs are the requesters of the VPC peering connection, while the legacy VPC
is the accepter of the peering connection.
To provision VPC peering and all related resources with Terraform, we need the following information from the legacy
account:
- Legacy account ID
- Legacy VPC ID
- Legacy AWS region
- Legacy IAM role (the role must be created in the legacy account with permissions to create VPC peering and routes).
The name of the role could be `acme-vpc-peering`, and the ARN of the role should look like
`arn:aws:iam::<legacy_account_id>:role/acme-vpc-peering`
### Legacy Account IAM Role
In the legacy account, create IAM role `acme-vpc-peering` with the following policy:
NOTE: Replace `<legacy_account_id>` with the ID of the legacy account.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["ec2:CreateRoute", "ec2:DeleteRoute"],
"Resource": "arn:aws:ec2:*::route-table/*"
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcPeeringConnections",
"ec2:DescribeVpcs",
"ec2:ModifyVpcPeeringConnectionOptions",
"ec2:DescribeSubnets",
"ec2:DescribeVpcAttribute",
"ec2:DescribeRouteTables"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:AcceptVpcPeeringConnection",
"ec2:DeleteVpcPeeringConnection",
"ec2:CreateVpcPeeringConnection",
"ec2:RejectVpcPeeringConnection"
],
"Resource": [
"arn:aws:ec2:*::vpc-peering-connection/*",
"arn:aws:ec2:*::vpc/*"
]
},
{
"Effect": "Allow",
"Action": ["ec2:DeleteTags", "ec2:CreateTags"],
"Resource": "arn:aws:ec2:*::vpc-peering-connection/*"
}
]
}
```
Add the following trust policy to the IAM role:
NOTE: Replace `<identity_account_id>` with the ID of the `identity` account in the new infrastructure.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": ["arn:aws:iam:::root"]
},
"Action": ["sts:AssumeRole", "sts:TagSession"],
"Condition": {}
}
]
}
```
The trust policy allows the `identity` account to assume the role (and provision all the resources in the legacy
account).
## Provisioning
Provision the VPC peering connections in the `dev`, `prod`, `sandbox` and `staging` accounts by executing the following
commands:
```sh
atmos terraform plan vpc-peering -s ue1-sandbox
atmos terraform apply vpc-peering -s ue1-sandbox
atmos terraform plan vpc-peering -s ue1-dev
atmos terraform apply vpc-peering -s ue1-dev
atmos terraform plan vpc-peering -s ue1-staging
atmos terraform apply vpc-peering -s ue1-staging
atmos terraform plan vpc-peering -s ue1-prod
atmos terraform apply vpc-peering -s ue1-prod
```
## Variables
### Required Variables
`accepter_region` (`string`) required
Accepter AWS region
`accepter_vpc` (`any`) required
Accepter VPC map of id, cidr_block, or default arguments for the data source
### Optional Variables
`requester_vpc_component_name` (`string`) optional
Requester VPC component name
**Default value:** `"vpc"`
`requester_vpc_id` (`string`) optional
Requester VPC ID. If not provided, it will be looked up by the component specified in the `requester_vpc_component_name` variable
**Default value:** `null`
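For example, to pin the requester side explicitly instead of relying on the component lookup (the VPC IDs below are illustrative):

```yaml
components:
  terraform:
    vpc-peering:
      vars:
        requester_vpc_id: "vpc-0123456789abcdef0" # illustrative
        accepter_region: us-east-1
        accepter_vpc_id: "vpc-xyz" # illustrative
```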
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`vpc_peering`
VPC peering outputs
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 3.0, < 6.0.0`
### Providers
- `aws`, version: `>= 3.0, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`requester_vpc` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`vpc_peering` | 1.0.0 | [`cloudposse/vpc-peering-multi-account/aws`](https://registry.terraform.io/modules/cloudposse/vpc-peering-multi-account/aws/1.0.0) | n/a
## Resources
The following resources are used by this module:
## Data Sources
The following data sources are used by this module:
- [`aws_vpc.accepter`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc) (data source)
---
## vpc-routes
This component provisions routes in AWS VPC route tables.
It is commonly used to configure routing between VPCs via Transit Gateways and
to manage multiple route tables with similar route configurations on a regional basis.
## Usage
**Stack Level**: Regional
Here's a simple example using physical IDs:
```yaml
components:
terraform:
vpc/routes/private:
metadata:
component: vpc-routes
vars:
route_table_ids: ["rtb-0123456789abcdef0", "rtb-0123456789abcdef1"]
routes:
- destination:
cidr_block: "10.100.0.0/16" # Target VPC CIDR
target:
type: transit_gateway_id
value: "tgw-0123456789abcdef0"
```
The same configuration using terraform outputs:
```yaml
components:
terraform:
vpc/routes/private:
metadata:
component: vpc-routes
vars:
route_table_ids: !terraform.output vpc private_route_table_ids
routes:
- destination:
cidr_block: !terraform.output vpc target-vpc vpc_cidr
target:
type: transit_gateway_id
value: !terraform.output tgw/hub transit_gateway_id
```
### Multiple Routes Example
Example using physical IDs:
```yaml
components:
terraform:
vpc/routes/private:
metadata:
component: vpc-routes
vars:
route_table_ids: ["rtb-0123456789abcdef0"]
routes:
# Route to network account
- destination:
cidr_block: "10.0.0.0/16"
target:
type: transit_gateway_id
value: "tgw-0123456789abcdef0"
# Route to transit account
- destination:
cidr_block: "10.1.0.0/16"
target:
type: transit_gateway_id
value: "tgw-0123456789abcdef0"
```
The same configuration using terraform outputs:
```yaml
components:
terraform:
vpc/routes/private:
metadata:
component: vpc-routes
vars:
route_table_ids: !terraform.output vpc private_route_table_ids
routes:
# Route to network account
- destination:
cidr_block: !terraform.output vpc network-vpc vpc_cidr
target:
type: transit_gateway_id
value: !terraform.output tgw/hub transit_gateway_id
# Route to transit account
- destination:
cidr_block: !terraform.output vpc transit-vpc vpc_cidr
target:
type: transit_gateway_id
value: !terraform.output tgw/hub transit_gateway_id
```
## Variables
### Required Variables
`region` (`string`) required
AWS Region
### Optional Variables
`route_table_ids` (`list(string)`) optional
List of route table IDs
**Default value:** `[ ]`
`routes` optional
A list of route objects to add to route tables. Each route object has a destination and a target.
**Type:**
```hcl
list(object({
destination = object({
cidr_block = string
})
target = object({
type = string
value = optional(string, "")
})
}))
```
**Default value:** `[ ]`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{ format = string, labels = list(string) }`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`aws_routes`
VPC routes
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0.0`
- `aws`, version: `>= 4.1, < 6.0.0`
### Providers
- `aws`, version: `>= 4.1, < 6.0.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`iam_roles` | latest | `../account-map/modules/iam-roles` (local module) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_route.this`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route) (resource)
## Data Sources
The following data sources are used by this module:
---
## waf
This component is responsible for provisioning an AWS Web Application Firewall (WAF) with an associated managed rule
group.
## Usage
**Stack Level**: Regional
Here's an example snippet for how to use this component.
```yaml
components:
terraform:
waf:
vars:
enabled: true
name: waf
acl_name: default
default_action: allow
description: Default web ACL
visibility_config:
cloudwatch_metrics_enabled: false
metric_name: "default"
sampled_requests_enabled: false
managed_rule_group_statement_rules:
- name: "OWASP-10"
# Rules are processed in order based on the value of priority, lowest number first
priority: 1
statement:
name: AWSManagedRulesCommonRuleSet
vendor_name: AWS
visibility_config:
# Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled: false
metric_name: "OWASP-10"
sampled_requests_enabled: false
```
## Variables
### Required Variables
`acl_name` (`string`) required
Friendly name of the ACL. The ACL ARN will be stored in SSM under `{ssm_path_prefix}/{acl_name}/arn`
`region` (`string`) required
AWS Region
`visibility_config` required
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
object({
cloudwatch_metrics_enabled = bool
metric_name = string
sampled_requests_enabled = bool
})
```
### Optional Variables
`alb_names` (`list(string)`) optional
A list of ALB names to associate with the web ACL.
**Default value:** `[ ]`
`alb_tags` (`list(map(string))`) optional
A list of tags to match one or more ALBs to associate with the web ACL.
**Default value:** `[ ]`
`association_resource_arns` (`list(string)`) optional
A list of ARNs of the resources to associate with the web ACL.
This must be an ARN of an Application Load Balancer, Amazon API Gateway stage, or AWS AppSync.
Do not use this variable to associate a CloudFront distribution.
Instead, you should use the `web_acl_id` property on the `cloudfront_distribution` resource.
For more details, refer to https://docs.aws.amazon.com/waf/latest/APIReference/API_AssociateWebACL.html
**Default value:** `[ ]`
`association_resource_component_selectors` optional
A list of Atmos component selectors to get from the remote state and associate their ARNs with the web ACL.
The components must be Application Load Balancers, Amazon API Gateway stages, or AWS AppSync.
component:
Atmos component name
component_arn_output:
The component output that defines the component ARN
Set `tenant`, `environment` and `stage` if the components are in different OUs, regions or accounts.
Do not use this variable to select a CloudFront distribution component.
Instead, you should use the `web_acl_id` property on the `cloudfront_distribution` resource.
For more details, refer to https://docs.aws.amazon.com/waf/latest/APIReference/API_AssociateWebACL.html
**Type:**
```hcl
list(object({
component = string
namespace = optional(string, null)
tenant = optional(string, null)
environment = optional(string, null)
stage = optional(string, null)
component_arn_output = string
}))
```
**Default value:** `[ ]`
`byte_match_statement_rules` optional
A rule statement that defines a string match search for AWS WAF to apply to web requests.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
positional_constraint:
Area within the portion of a web request that you want AWS WAF to search for search_string. Valid values include the following: EXACTLY, STARTS_WITH, ENDS_WITH, CONTAINS, CONTAINS_WORD.
search_string:
String value that you want AWS WAF to search for. AWS WAF searches only in the part of web requests that you designate for inspection in field_to_match.
field_to_match:
The part of a web request that you want AWS WAF to inspect.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#field-to-match
text_transformation:
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#text-transformation
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
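For reference, here is a minimal sketch of a byte-match rule in the Atmos stack config (the rule name, path, and priority are illustrative, not defaults):

```yaml
byte_match_statement_rules:
  - name: "block-admin-path"
    priority: 10
    action: block
    statement:
      positional_constraint: STARTS_WITH
      search_string: "/admin"
      field_to_match:
        uri_path: true
      text_transformation:
        - priority: 0
          type: LOWERCASE
    visibility_config:
      cloudwatch_metrics_enabled: true
      metric_name: "block-admin-path"
      sampled_requests_enabled: false
```

This fragment goes under the `waf` component's `vars`, alongside the settings shown in the Usage section.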
`custom_response_body` optional
Defines custom response bodies that can be referenced by custom_response actions.
The map keys are used as the `key` attribute which is a unique key identifying the custom response body.
content:
Payload of the custom response.
The response body can be plain text, HTML or JSON and cannot exceed 4KB in size.
content_type:
Content Type of Response Body.
Valid values are `TEXT_PLAIN`, `TEXT_HTML`, or `APPLICATION_JSON`.
**Type:**
```hcl
map(object({
content = string
content_type = string
}))
```
**Default value:** `{ }`
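A custom response body can then be referenced by `custom_response` actions via its map key. A minimal sketch (the key and payload are illustrative):

```yaml
custom_response_body:
  too-many-requests:
    content: '{"message": "Too many requests"}'
    content_type: APPLICATION_JSON
```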
`default_action` (`string`) optional
Specifies that AWS WAF should allow requests by default. Possible values: `allow`, `block`.
**Default value:** `"block"`
`default_block_response` (`string`) optional
An HTTP response code that is sent when the default action is used. Only takes effect if `default_action` is set to `block`.
**Default value:** `null`
`description` (`string`) optional
A friendly description of the WebACL.
**Default value:** `"Managed by Terraform"`
`geo_allowlist_statement_rules` optional
A rule statement used to identify a list of allowed countries which should not be blocked by the WAF.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
country_codes:
A list of two-character country codes.
forwarded_ip_config:
fallback_behavior:
The match status to assign to the web request if the request doesn't have a valid IP address in the specified position.
Possible values: `MATCH`, `NO_MATCH`
header_name:
The name of the HTTP header to use for the IP address.
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
`geo_match_statement_rules` optional
A rule statement used to identify web requests based on country of origin.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
country_codes:
A list of two-character country codes.
forwarded_ip_config:
fallback_behavior:
The match status to assign to the web request if the request doesn't have a valid IP address in the specified position.
Possible values: `MATCH`, `NO_MATCH`
header_name:
The name of the HTTP header to use for the IP address.
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
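As a sketch, a geo-match rule that blocks requests from specific countries might look like this in the stack config (country codes and rule name are illustrative):

```yaml
geo_match_statement_rules:
  - name: "block-embargoed-countries"
    priority: 20
    action: block
    statement:
      country_codes: ["CU", "KP"]
    visibility_config:
      cloudwatch_metrics_enabled: true
      metric_name: "block-embargoed-countries"
      sampled_requests_enabled: false
```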
`ip_set_reference_statement_rules` optional
A rule statement used to detect web requests coming from particular IP addresses or address ranges.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
arn:
The ARN of the IP Set that this statement references.
ip_set:
Defines a new IP Set
description:
A friendly description of the IP Set
addresses:
Contains an array of strings that specifies zero or more IP addresses or blocks of IP addresses.
All addresses must be specified using Classless Inter-Domain Routing (CIDR) notation.
ip_address_version:
Specify `IPV4` or `IPV6`
ip_set_forwarded_ip_config:
fallback_behavior:
The match status to assign to the web request if the request doesn't have a valid IP address in the specified position.
Possible values: `MATCH`, `NO_MATCH`
header_name:
The name of the HTTP header to use for the IP address.
position:
The position in the header to search for the IP address.
Possible values include: `FIRST`, `LAST`, or `ANY`.
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
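A minimal sketch of an IP set rule that defines a new IP set inline (the name and CIDR are illustrative):

```yaml
ip_set_reference_statement_rules:
  - name: "allow-office-cidrs"
    priority: 30
    action: allow
    statement:
      ip_set:
        description: "Office CIDR blocks"
        ip_address_version: IPV4
        addresses:
          - "203.0.113.0/24"
    visibility_config:
      cloudwatch_metrics_enabled: false
      metric_name: "allow-office-cidrs"
      sampled_requests_enabled: false
```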
`log_destination_component_selectors` optional
A list of Atmos component selectors to get from the remote state and associate their names/ARNs with the WAF logs.
The components must be Amazon Kinesis Data Firehose, CloudWatch Log Group, or S3 bucket.
component:
Atmos component name
component_output:
The component output that defines the component name or ARN
Set `tenant`, `environment` and `stage` if the components are in different OUs, regions or accounts.
Note: the Firehose delivery stream, log group, or bucket name must be prefixed with `aws-waf-logs-`,
e.g. `aws-waf-logs-example-firehose`, `aws-waf-logs-example-log-group`, or `aws-waf-logs-example-bucket`.
**Type:**
```hcl
list(object({
component = string
namespace = optional(string, null)
tenant = optional(string, null)
environment = optional(string, null)
stage = optional(string, null)
component_output = string
}))
```
**Default value:** `[ ]`
`log_destination_configs` (`list(string)`) optional
A list of resource names/ARNs of an Amazon Kinesis Data Firehose, CloudWatch log group, or S3 bucket to associate with the WAF logs.
Note: the Firehose delivery stream, log group, or bucket name must be prefixed with `aws-waf-logs-`,
e.g. `aws-waf-logs-example-firehose`, `aws-waf-logs-example-log-group`, or `aws-waf-logs-example-bucket`.
**Default value:** `[ ]`
`logging_filter` optional
A configuration block that specifies which web requests are kept in the logs and which are dropped.
You can filter on the rule action and on the web request labels that were applied by matching rules during web ACL evaluation.
**Type:**
```hcl
object({
default_behavior = string
filter = list(object({
behavior = string
requirement = string
condition = list(object({
action_condition = optional(object({
action = string
}), null)
label_name_condition = optional(object({
label_name = string
}), null)
}))
}))
})
```
**Default value:** `null`
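For example, the following filter keeps all logs except requests that matched a rule with the `ALLOW` action (a sketch; behaviors and conditions are illustrative):

```yaml
logging_filter:
  default_behavior: KEEP
  filter:
    - behavior: DROP
      requirement: MEETS_ANY
      condition:
        - action_condition:
            action: ALLOW
```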
`managed_rule_group_statement_rules` optional
A rule statement used to run the rules that are defined in a managed rule group.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
override_action:
The override action to apply to the rules in a rule group.
Possible values: `count`, `none`
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
name:
The name of the managed rule group.
vendor_name:
The name of the managed rule group vendor.
version:
The version of the managed rule group.
You can set `Version_1.0` or `Version_1.1` etc. If you want to use the default version, do not set anything.
scope_down_not_statement_enabled:
Whether to wrap the scope_down_statement inside of a not_statement.
Refer to https://docs.aws.amazon.com/waf/latest/developerguide/waf-bot-control-example-scope-down-your-bot.html
scope_down_statement:
Nested statement that narrows the scope of the rate-based statement to matching web requests.
rule_action_override:
Action settings to use in the place of the rule actions that are configured inside the rule group.
You specify one override for each rule whose action you want to change.
managed_rule_group_configs:
Additional information that's used by a managed rule group. Only one rule attribute is allowed in each config.
Refer to https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-list.html for more details.
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
override_action = optional(string)
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = object({
name = string
vendor_name = string
version = optional(string)
scope_down_not_statement_enabled = optional(bool, false)
scope_down_statement = optional(object({
byte_match_statement = object({
positional_constraint = string
search_string = string
field_to_match = object({
all_query_arguments = optional(bool)
body = optional(bool)
method = optional(bool)
query_string = optional(bool)
single_header = optional(object({ name = string }))
single_query_argument = optional(object({ name = string }))
uri_path = optional(bool)
})
text_transformation = list(object({
priority = number
type = string
}))
})
}), null)
rule_action_override = optional(map(object({
action = string
custom_request_handling = optional(object({
insert_header = object({
name = string
value = string
})
}), null)
custom_response = optional(object({
response_code = string
response_header = optional(object({
name = string
value = string
}), null)
}), null)
})), null)
managed_rule_group_configs = optional(list(object({
aws_managed_rules_anti_ddos_rule_set = optional(object({
sensitivity_to_block = optional(string)
client_side_action_config = optional(object({
challenge = object({
usage_of_action = string
sensitivity = optional(string)
exempt_uri_regular_expression = optional(list(object({
regex_string = string
})))
})
}))
}))
aws_managed_rules_bot_control_rule_set = optional(object({
inspection_level = string
enable_machine_learning = optional(bool, true)
}), null)
aws_managed_rules_atp_rule_set = optional(object({
enable_regex_in_path = optional(bool)
login_path = string
request_inspection = optional(object({
payload_type = string
password_field = object({
identifier = string
})
username_field = object({
identifier = string
})
}), null)
response_inspection = optional(object({
body_contains = optional(object({
success_strings = list(string)
failure_strings = list(string)
}), null)
header = optional(object({
name = string
success_values = list(string)
failure_values = list(string)
}), null)
json = optional(object({
identifier = string
success_strings = list(string)
failure_strings = list(string)
}), null)
status_code = optional(object({
success_codes = list(string)
failure_codes = list(string)
}), null)
}), null)
}), null)
aws_managed_rules_acfp_rule_set = optional(object({
creation_path = string
enable_regex_in_path = optional(bool)
registration_page_path = string
request_inspection = optional(object({
payload_type = string
password_field = optional(object({
identifier = string
}), null)
username_field = optional(object({
identifier = string
}), null)
email_field = optional(object({
identifier = string
}), null)
address_fields = optional(object({
identifiers = list(string)
}), null)
phone_number_fields = optional(object({
identifiers = list(string)
}), null)
}), null)
response_inspection = optional(object({
body_contains = optional(object({
success_strings = list(string)
failure_strings = list(string)
}), null)
header = optional(object({
name = string
success_values = list(string)
failure_values = list(string)
}), null)
json = optional(object({
identifier = string
success_values = list(string)
failure_values = list(string)
}), null)
status_code = optional(object({
success_codes = list(string)
failure_codes = list(string)
}), null)
}), null)
}))
})), null)
})
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
`rate_based_statement_rules` optional
A rate-based rule tracks the rate of requests for each originating IP address,
and triggers the rule action when the rate exceeds a limit that you specify on the number of requests in any 5-minute time span.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
aggregate_key_type:
Setting that indicates how to aggregate the request counts.
Possible values include: `FORWARDED_IP` or `IP`
limit:
The limit on requests per 5-minute period for a single originating IP address.
evaluation_window_sec:
The amount of time, in seconds, that AWS WAF should include in its request counts, looking back from the current time.
Valid values are 60, 120, 300, and 600. Defaults to 300 (5 minutes).
forwarded_ip_config:
fallback_behavior:
The match status to assign to the web request if the request doesn't have a valid IP address in the specified position.
Possible values: `MATCH`, `NO_MATCH`
header_name:
The name of the HTTP header to use for the IP address.
byte_match_statement:
field_to_match:
Part of a web request that you want AWS WAF to inspect.
positional_constraint:
Area within the portion of a web request that you want AWS WAF to search for search_string.
Valid values include the following: `EXACTLY`, `STARTS_WITH`, `ENDS_WITH`, `CONTAINS`, `CONTAINS_WORD`.
search_string:
String value that you want AWS WAF to search for.
AWS WAF searches only in the part of web requests that you designate for inspection in `field_to_match`.
The maximum length of the value is 50 bytes.
text_transformation:
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#text-transformation
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = object({
limit = number
aggregate_key_type = string
evaluation_window_sec = optional(number)
forwarded_ip_config = optional(object({
fallback_behavior = string
header_name = string
}), null)
scope_down_statement = optional(object({
byte_match_statement = object({
positional_constraint = string
search_string = string
field_to_match = object({
all_query_arguments = optional(bool)
body = optional(bool)
method = optional(bool)
query_string = optional(bool)
single_header = optional(object({ name = string }))
single_query_argument = optional(object({ name = string }))
uri_path = optional(bool)
})
text_transformation = list(object({
priority = number
type = string
}))
})
}), null)
})
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
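A minimal sketch of a per-IP rate limit (the limit value and rule name are illustrative):

```yaml
rate_based_statement_rules:
  - name: "rate-limit-per-ip"
    priority: 40
    action: block
    statement:
      limit: 2000
      aggregate_key_type: IP
    visibility_config:
      cloudwatch_metrics_enabled: true
      metric_name: "rate-limit-per-ip"
      sampled_requests_enabled: false
```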
`redacted_fields` optional
The parts of the request that you want to keep out of the logs.
You can only specify one of the following: `method`, `query_string`, `single_header`, or `uri_path`
method:
Whether to enable redaction of the HTTP method.
The method indicates the type of operation that the request is asking the origin to perform.
uri_path:
Whether to enable redaction of the URI path.
This is the part of a web request that identifies a resource.
query_string:
Whether to enable redaction of the query string.
This is the part of a URL that appears after a `?` character, if any.
single_header:
The list of names of the query headers to redact.
**Type:**
```hcl
map(object({
method = optional(bool, false)
uri_path = optional(bool, false)
query_string = optional(bool, false)
single_header = optional(list(string), null)
}))
```
**Default value:** `{ }`
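For instance, to redact the `Authorization` header from the WAF logs (the map key is arbitrary and illustrative):

```yaml
redacted_fields:
  auth:
    single_header:
      - "authorization"
```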
`regex_match_statement_rules` optional
A rule statement used to search web request components for a match against a single regular expression.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
regex_string:
String representing the regular expression. Minimum of 1 and maximum of 512 characters.
field_to_match:
The part of a web request that you want AWS WAF to inspect.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl.html#field_to_match
text_transformation:
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. At least one required.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#text-transformation
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
`regex_pattern_set_reference_statement_rules` optional
A rule statement used to search web request components for matches with regular expressions.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
arn:
The Amazon Resource Name (ARN) of the Regex Pattern Set that this statement references.
field_to_match:
The part of a web request that you want AWS WAF to inspect.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#field-to-match
text_transformation:
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#text-transformation
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
`rule_group_reference_statement_rules` optional
A rule statement used to run the rules that are defined in a WAFv2 rule group.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
override_action:
The override action to apply to the rules in a rule group.
Possible values: `count`, `none`
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
arn:
The ARN of the `aws_wafv2_rule_group` resource.
rule_action_override:
Action settings to use in the place of the rule actions that are configured inside the rule group.
You specify one override for each rule whose action you want to change.
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
override_action = optional(string)
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = object({
arn = string
rule_action_override = optional(map(object({
action = string
custom_request_handling = optional(object({
insert_header = object({
name = string
value = string
})
}), null)
custom_response = optional(object({
response_code = string
response_header = optional(object({
name = string
value = string
}), null)
}), null)
})), null)
})
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
`scope` (`string`) optional
Specifies whether this is for an AWS CloudFront distribution or for a regional application.
Possible values are `CLOUDFRONT` or `REGIONAL`.
To work with CloudFront, you must also specify the region us-east-1 (N. Virginia) on the AWS provider.
**Default value:** `"REGIONAL"`
`size_constraint_statement_rules` optional
A rule statement that uses a comparison operator to compare a number of bytes against the size of a request component.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
comparison_operator:
The operator to use to compare the request part to the size setting.
Possible values: `EQ`, `NE`, `LE`, `LT`, `GE`, or `GT`.
size:
The size, in bytes, to compare to the request part, after any transformations.
Valid values are integers between `0` and `21474836480`, inclusive.
field_to_match:
The part of a web request that you want AWS WAF to inspect.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#field-to-match
text_transformation:
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#text-transformation
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
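As a hedged sketch (the component name `waf`, rule names, and values are hypothetical, and `statement` is typed `any`, so its exact shape is passed through to the provider; consult the linked provider docs), a stack configuration using this variable might look like:

```yaml
components:
  terraform:
    waf:
      vars:
        size_constraint_statement_rules:
          - name: limit-body-size
            priority: 30
            action: block
            statement:
              comparison_operator: GT
              size: 8192
              field_to_match:
                body: {}
              text_transformation:
                - priority: 0
                  type: NONE
            visibility_config:
              cloudwatch_metrics_enabled: true
              metric_name: limit-body-size
              sampled_requests_enabled: false
```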
`sqli_match_statement_rules` optional
An SQL injection match condition identifies the part of web requests,
such as the URI or the query string, that you want AWS WAF to inspect.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
statement:
field_to_match:
The part of a web request that you want AWS WAF to inspect.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#field-to-match
text_transformation:
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#text-transformation
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
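As a hedged sketch under the same caveats (hypothetical component name and rule values; `statement` is typed `any` and passed through to the provider), an SQLi rule inspecting the query string might be configured as:

```yaml
components:
  terraform:
    waf:
      vars:
        sqli_match_statement_rules:
          - name: block-sqli-query-string
            priority: 40
            action: block
            statement:
              field_to_match:
                query_string: {}
              text_transformation:
                - priority: 0
                  type: URL_DECODE
            visibility_config:
              cloudwatch_metrics_enabled: true
              metric_name: block-sqli-query-string
              sampled_requests_enabled: true
```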
`ssm_path_prefix` (`string`) optional
SSM path prefix (with leading but not trailing slash) under which to store all WAF info
**Default value:** `"/waf"`
`token_domains` (`list(string)`) optional
Specifies the domains that AWS WAF should accept in a web request token.
This enables the use of tokens across multiple protected websites.
When AWS WAF provides a token, it uses the domain of the AWS resource that the web ACL is protecting.
If you don't specify a list of token domains, AWS WAF accepts tokens only for the domain of the protected resource.
With a token domain list, AWS WAF accepts the resource's host domain plus all domains in the token domain list,
including their prefixed subdomains.
**Default value:** `null`
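As a minimal sketch (component name and domains are hypothetical), a token domain list can be supplied like this:

```yaml
components:
  terraform:
    waf:
      vars:
        token_domains:
          - example.com
          - api.example.com
```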
`xss_match_statement_rules` optional
A rule statement that defines a cross-site scripting (XSS) match search for AWS WAF to apply to web requests.
action:
The action that AWS WAF should take on a web request when it matches the rule's statement.
name:
A friendly name of the rule.
priority:
If you define more than one Rule in a WebACL,
AWS WAF evaluates each request against the rules in order based on the value of priority.
AWS WAF processes rules with lower priority first.
captcha_config:
Specifies how AWS WAF should handle CAPTCHA evaluations.
immunity_time_property:
Defines custom immunity time.
immunity_time:
The amount of time, in seconds, that a CAPTCHA or challenge timestamp is considered valid by AWS WAF. The default setting is 300.
rule_label:
A list of labels to apply to web requests that match the rule match statement.
statement:
field_to_match:
The part of a web request that you want AWS WAF to inspect.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#field-to-match
text_transformation:
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection.
See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl#text-transformation
visibility_config:
Defines and enables Amazon CloudWatch metrics and web request sample collection.
cloudwatch_metrics_enabled:
Whether the associated resource sends metrics to CloudWatch.
metric_name:
A friendly name of the CloudWatch metric.
sampled_requests_enabled:
Whether AWS WAF should store a sampling of the web requests that match the rules.
**Type:**
```hcl
list(object({
name = string
priority = number
action = string
captcha_config = optional(object({
immunity_time_property = object({
immunity_time = number
})
}), null)
rule_label = optional(list(string), null)
statement = any
visibility_config = optional(object({
cloudwatch_metrics_enabled = optional(bool)
metric_name = string
sampled_requests_enabled = optional(bool)
}), null)
}))
```
**Default value:** `null`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
`delimiter` (`string`) optional
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`descriptor_formats` (`any`) optional
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`{
format = string
labels = list(string)
}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
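As a hypothetical sketch (the `account_name` descriptor name and its labels are made up for illustration), a descriptor that renders `tenant` and `stage` as `tenant-stage` could be defined as:

```hcl
descriptor_formats = {
  account_name = {
    format = "%v-%v"             # Terraform format string passed to format()
    labels = ["tenant", "stage"] # label values, normalized as they appear in `id`
  }
}
```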
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
`labels_as_tags` (`set(string)`) optional
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
`name` (`string`) optional
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
`tenant` (`string`) optional
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
`arn`
The ARN of the WAF WebACL.
`id`
The ID of the WAF WebACL.
`logging_config_id`
The ARN of the WAFv2 Web ACL logging configuration.
## Dependencies
### Requirements
- `terraform`, version: `>= 1.3.0`
- `aws`, version: `>= 6.2.0`
### Providers
- `aws`, version: `>= 6.2.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`association_resource_components` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`aws_waf` | 1.17.0 | [`cloudposse/waf/aws`](https://registry.terraform.io/modules/cloudposse/waf/aws/1.17.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`log_destination_components` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/remote-state/1.8.0) | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_ssm_parameter.acl_arn`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_alb.alb`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/alb) (data source)
- [`aws_lbs.alb_by_tags`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lbs) (data source)
---
## zscaler
This component is responsible for provisioning ZScaler Private Access Connector instances on Amazon Linux 2 AMIs.
Prior to provisioning this component, it is required that a SecureString SSM Parameter containing the ZScaler App
Connector Provisioning Key is populated in each account corresponding to the regional stack the component is deployed
to, with the name of the SSM Parameter matching the value of `var.zscaler_key`.
This parameter should be populated using `chamber`, which is included in the geodesic image:
```shell
chamber write zscaler key <provisioning_key>
```
Where `<provisioning_key>` is the ZScaler App Connector Provisioning Key. For more information on how to generate this key, see:
[ZScaler documentation on Configuring App Connectors](https://help.zscaler.com/zpa/configuring-connectors).
## Usage
**Stack Level**: Regional
The typical stack configuration for this component is as follows:
```yaml
components:
terraform:
zscaler:
vars:
zscaler_count: 2
```
Preferably, regional stack configurations can be kept _DRY_ by importing `catalog/zscaler` via the `imports` list at the
top of the configuration.
```yaml
import:
...
- catalog/zscaler
```
## Variables
### Required Variables
`region` (`string`) required
AWS region
### Optional Variables
`ami_owner` (`string`) optional
The owner of the AMI used for the ZScaler EC2 instances.
**Default value:** `"amazon"`
`ami_regex` (`string`) optional
The regex used to match the latest AMI to be used for the ZScaler EC2 instances.
**Default value:** `"^amzn2-ami-hvm.*"`
`aws_ssm_enabled` (`bool`) optional
Set to `true` to install the AWS SSM agent on each EC2 instance.
**Default value:** `true`
`instance_type` (`string`) optional
The EC2 instance type to use for the ZScaler instances.
**Default value:** `"m5n.large"`
`secrets_store_type` (`string`) optional
Secret store type for Zscaler provisioning keys. Valid values: `SSM`, `ASM` (but `ASM` not currently supported)
**Default value:** `"SSM"`
`security_group_rules` (`list(any)`) optional
A list of maps of Security Group rules.
The keys and values of each map correspond to the arguments of the `aws_security_group_rule` resource.
For more information, see [security_group_rule](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule).
**Default value:**
```hcl
[
{
"cidr_blocks": [
"0.0.0.0/0"
],
"from_port": 0,
"protocol": "-1",
"to_port": 65535,
"type": "egress"
}
]
```
`zscaler_count` (`number`) optional
The number of Zscaler instances.
**Default value:** `1`
`zscaler_key` (`string`) optional
SSM key (without leading `/`) for the Zscaler provisioning key secret.
**Default value:** `"zscaler/key"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
`additional_tag_map` (`map(string)`) optional
Additional tags for appending to tags_as_list_of_maps. Not added to `tags`.
**Required:** No
**Default value:** `{ }`
`attributes` (`list(string)`) optional
Additional attributes (e.g. `1`)
**Required:** No
**Default value:** `[ ]`
`context` (`any`) optional
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {}
}
```
`delimiter` (`string`) optional
Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
`enabled` (`bool`) optional
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
`environment` (`string`) optional
Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
`id_length_limit` (`number`) optional
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for default, which is `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
`label_key_case` (`string`) optional
The letter case of label keys (`tag` names) (i.e. `name`, `namespace`, `environment`, `stage`, `attributes`) to use in `tags`.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
`label_order` (`list(string)`) optional
The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
**Required:** No
**Default value:** `null`
`label_value_case` (`string`) optional
The letter case of output label values (also used in `tags` and `id`).
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Default value: `lower`.
**Required:** No
**Default value:** `null`
`name` (`string`) optional
Solution name, e.g. 'app' or 'jenkins'
**Required:** No
**Default value:** `null`
`namespace` (`string`) optional
Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'
**Required:** No
**Default value:** `null`
`regex_replace_chars` (`string`) optional
Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
`stage` (`string`) optional
Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
`tags` (`map(string)`) optional
Additional tags (e.g. `map('BusinessUnit','XYZ')`).
**Required:** No
**Default value:** `{ }`
## Outputs
`instance_id`
Instance ID
`private_ip`
Private IP of the instance
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
- `aws`, version: `>= 3.0, < 6.0.0`
- `null`, version: `>= 3.0`
- `random`, version: `>= 3.0`
- `template`, version: `>= 2.2`
- `utils`, version: `>= 1.10.0`
### Providers
- `aws`, version: `>= 3.0, < 6.0.0`
- `template`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`ec2_zscaler` | 2.0.0 | [`cloudposse/ec2-instance/aws`](https://registry.terraform.io/modules/cloudposse/ec2-instance/aws/2.0.0) | n/a
`iam_roles` | latest | `../account-map/modules/iam-roles` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`aws_iam_role_policy_attachment.ssm_core`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) (resource)
## Data Sources
The following data sources are used by this module:
- [`aws_ami.amazon_linux_2`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami) (data source)
- [`aws_ssm_parameter.zscaler_key`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) (data source)
- [`template_file.userdata`](https://registry.terraform.io/providers/cloudposse/template/latest/docs/data-sources/file) (data source)
---
## Terraform Components (Library)
import Intro from '@site/src/components/Intro';
import DocCardList from '@theme/DocCardList';
This is a library of reusable Terraform "root module" components.
---
## GitHub Actions
import Intro from '@site/src/components/Intro'
import DocCardList from '@theme/DocCardList'
In this library you'll find all the GitHub Actions we've implemented to solve common CI/CD challenges.
---
## GitHub Actions (Actions)
import Intro from '@site/src/components/Intro'
import DocCardList from '@theme/DocCardList'
In this library you'll find all the GitHub Actions we've implemented to solve common CI/CD challenges.
---
## atmos-affected-stacks
# GitHub Action: `atmos-affected-stacks`
A GitHub Action to get a list of affected atmos stacks for a pull request.
## Introduction
This is a GitHub Action to get a list of affected atmos stacks for a pull request. It optionally installs
`atmos` and `jq` and runs `atmos describe affected` to get the list of affected stacks. It provides the
raw list of affected stacks as an output as well as a matrix that can be used further in GitHub action jobs.
## Usage
### Config
:::important
**Please note!** This GitHub Action only works with `atmos >= 1.99.0`.
If you are using `atmos >= 1.80.0, < 1.99.0` please use `v5` version of this action.
If you are using `atmos >= 1.63.0, < 1.80.0` please use `v3` or `v4` version of this action.
If you are using `atmos < 1.63.0` please use `v2` version of this action.
:::
The action expects the atmos configuration file `atmos.yaml` to be present in the repository.
The action supports AWS and Azure to store Terraform plan files.
You can read more about plan storage in the [cloudposse/github-action-terraform-plan-storage](https://github.com/cloudposse/github-action-terraform-plan-storage?tab=readme-ov-file#aws-default) documentation.
Depending on the cloud provider, the following fields should be set in `atmos.yaml`:
#### AWS
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
# Set `apply` empty if you don't want to assume IAM role before terraform apply
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
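The `group-by` value is a jq expression. As an illustrative sketch (the stack slugs are hypothetical), the expression `.stack_slug | split("-") | [.[0], .[2]] | join("-")` keeps the first and third segments of the slug, which the following Python mimics:

```python
def group_by(stack_slug: str) -> str:
    # Equivalent of the jq filter: split("-") | [.[0], .[2]] | join("-")
    parts = stack_slug.split("-")
    return "-".join([parts[0], parts[2]])

print(group_by("plat-ue2-prod"))  # plat-prod
```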
#### Azure
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
plan-repository-type: azureblob
blob-account-name: tfplans
blob-container-name: plans
metadata-repository-type: cosmos
cosmos-container-name: terraform-plan-storage
cosmos-database-name: terraform-plan-storage
cosmos-endpoint: "https://my-cosmo-account.documents.azure.com:443/"
# We remove the `role` section as it is AWS specific
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
### Stack level configuration
:::important
Even where it is possible to specify `integrations.github.gitops` at the stack level,
it is still required to define default values in `atmos.yaml`.
:::
Integration settings can be overridden at the stack level by defining `settings.integrations`.
```yaml
components:
terraform:
foobar:
settings:
integrations:
github:
gitops:
artifact-storage:
bucket: cptest-plat-ue2-auto-gitops
table: cptest-plat-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-plat-ue2-auto-gitops-gha
role:
# Set `plan` empty if you don't want to assume IAM role before terraform plan
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
```
### Support OpenTofu
This action supports [OpenTofu](https://opentofu.org/).
:::important
**Please note!** OpenTofu is supported by Atmos `>= 1.73.0`.
For details, [read the Atmos OpenTofu documentation](https://atmos.tools/core-concepts/projects/configuration/opentofu/)
:::
To enable OpenTofu, add the following settings to `atmos.yaml`:
* Set the `opentofu-version` in the `atmos.yaml` to the desired version
* Set `components.terraform.command` to `tofu`
#### Example
```yaml
components:
terraform:
command: tofu
...
integrations:
github:
gitops:
opentofu-version: 1.7.3
...
```
### Workflow example
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
atmos-affected:
runs-on: ubuntu-latest
steps:
- id: affected
uses: cloudposse/github-action-atmos-affected-stacks@v6
with:
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: 1.99.0
nested-matrices-count: 1
outputs:
matrix: ${{ steps.affected.outputs.matrix }}
has-affected-stacks: ${{ steps.affected.outputs.has-affected-stacks }}
# This job is an example how to use the affected stacks with the matrix strategy
atmos-plan:
needs: ["atmos-affected"]
if: ${{ needs.atmos-affected.outputs.has-affected-stacks == 'true' }}
name: Plan ${{ matrix.stack_slug }}
runs-on: ubuntu-latest
strategy:
max-parallel: 10
fail-fast: false # Don't fail fast to avoid locking TF State
matrix: ${{ fromJson(needs.atmos-affected.outputs.matrix) }}
## Avoid running the same stack in parallel mode (from different workflows)
concurrency:
group: ${{ matrix.stack_slug }}
cancel-in-progress: false
steps:
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v4
with:
component: ${{ matrix.component }}
stack: ${{ matrix.stack }}
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: 1.99.0
```
### Migrating from `v5` to `v6`
The notable changes in `v6` are:
- `v6` works only with `atmos >= 1.99.0`
- `v6` allows skipping the internal checkout with the `skip-checkout` input
The only required migration step is updating the atmos version to `>= 1.99.0`
### Migrating from `v4` to `v5`
The notable changes in `v5` are:
- `v5` works only with `atmos >= 1.80.0`
- `v5` supports atmos templating
The only required migration step is updating atmos version to `>= 1.80.0`
### Migrating from `v3` to `v4`
The notable changes in `v4` are:
- `v4` performs AWS authentication by assuming the `integrations.github.gitops.role.plan` IAM role
No special migration steps are required
### Migrating from `v2` to `v3`
The notable changes in `v3` are:
- `v3` works only with `atmos >= 1.63.0`
- `v3` drops the `install-terraform` input because Terraform is not required to determine affected stacks
- `v3` drops `atmos-gitops-config-path` input and the `./.github/config/atmos-gitops.yaml` config file. Now you have to use GitHub Actions environment variables to specify the location of the `atmos.yaml`.
The following configuration fields have moved to GitHub Action inputs with the same names:
| name |
|-------------------------|
| `atmos-version` |
| `atmos-config-path` |
The following configuration fields have moved to the `atmos.yaml` configuration file:
| name | YAML path in `atmos.yaml` |
|--------------------------|-------------------------------------------------|
| `aws-region` | `integrations.github.gitops.artifact-storage.region` |
| `terraform-state-bucket` | `integrations.github.gitops.artifact-storage.bucket` |
| `terraform-state-table` | `integrations.github.gitops.artifact-storage.table` |
| `terraform-state-role` | `integrations.github.gitops.artifact-storage.role` |
| `terraform-plan-role` | `integrations.github.gitops.role.plan` |
| `terraform-apply-role` | `integrations.github.gitops.role.apply` |
| `terraform-version` | `integrations.github.gitops.terraform-version` |
| `enable-infracost` | `integrations.github.gitops.infracost-enabled` |
| `sort-by` | `integrations.github.gitops.matrix.sort-by` |
| `group-by` | `integrations.github.gitops.matrix.group-by` |
For example, to migrate from `v2` to `v3`, you should have something similar to the following in your `atmos.yaml`:
`./.github/config/atmos.yaml`
```yaml
# ... your existing configuration
integrations:
github:
gitops:
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
`.github/workflows/main.yaml`
```yaml
- id: affected
uses: cloudposse/github-action-atmos-affected-stacks@v3
with:
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: 1.63.0
```
This corresponds to the `v2` configuration (deprecated) below.
The `v2` configuration file `./.github/config/atmos-gitops.yaml` looked like this:
```yaml
atmos-version: 1.45.3
atmos-config-path: ./rootfs/usr/local/etc/atmos/
terraform-state-bucket: cptest-core-ue2-auto-gitops
terraform-state-table: cptest-core-ue2-auto-gitops
terraform-state-role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
terraform-plan-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-apply-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-version: 1.5.2
aws-region: us-east-2
enable-infracost: false
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
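The flat `v2` keys map one-to-one onto the nested `v3` paths listed in the table above. A hypothetical helper sketching the transformation (the mapping dictionary is taken from the table; the function name is illustrative and not part of any CLI):

```python
# Flat v2 keys -> nested v3 YAML paths (from the migration table above)
KEY_MAP = {
    "aws-region": "integrations.github.gitops.artifact-storage.region",
    "terraform-state-bucket": "integrations.github.gitops.artifact-storage.bucket",
    "terraform-state-table": "integrations.github.gitops.artifact-storage.table",
    "terraform-state-role": "integrations.github.gitops.artifact-storage.role",
    "terraform-plan-role": "integrations.github.gitops.role.plan",
    "terraform-apply-role": "integrations.github.gitops.role.apply",
    "terraform-version": "integrations.github.gitops.terraform-version",
    "enable-infracost": "integrations.github.gitops.infracost-enabled",
    "sort-by": "integrations.github.gitops.matrix.sort-by",
    "group-by": "integrations.github.gitops.matrix.group-by",
}

def migrate_v2_config(flat: dict) -> dict:
    """Nest flat v2 settings under their v3 paths."""
    nested: dict = {}
    for key, value in flat.items():
        node = nested
        *parents, leaf = KEY_MAP[key].split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return nested

migrated = migrate_v2_config({"aws-region": "us-east-2", "terraform-version": "1.5.2"})
```

Serializing the resulting dictionary as YAML yields the nested `integrations` block shown earlier.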
And the `v2` GitHub Action Workflow looked like this.
`.github/workflows/main.yaml`
```yaml
- id: affected
uses: cloudposse/github-action-atmos-affected-stacks@v2
with:
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
```
### Migrating from `v1` to `v2`
`v2` moves most of the `inputs` to the Atmos GitOps config path `./.github/config/atmos-gitops.yaml`. Simply create this file, transfer your settings to it, then remove the corresponding arguments from your invocations of the `cloudposse/github-action-atmos-affected-stacks` action.
| name |
|--------------------------|
| `atmos-version` |
| `atmos-config-path` |
| `terraform-state-bucket` |
| `terraform-state-table` |
| `terraform-state-role` |
| `terraform-plan-role` |
| `terraform-apply-role` |
| `terraform-version` |
| `aws-region` |
| `enable-infracost` |
If you want the same behavior in `v2` as in `v1`, create the config file `./.github/config/atmos-gitops.yaml` with the same variables as the `v1` inputs.
```yaml
- name: Determine Affected Stacks
uses: cloudposse/github-action-atmos-affected-stacks@v2
id: affected
with:
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
nested-matrices-count: 1
```
Which would produce the same behavior as in `v1`, doing this:
```yaml
- name: Determine Affected Stacks
uses: cloudposse/github-action-atmos-affected-stacks@v1
id: affected
with:
atmos-version: 1.45.3
atmos-config-path: ./rootfs/usr/local/etc/atmos/
terraform-state-bucket: cptest-core-ue2-auto-gitops
terraform-state-table: cptest-core-ue2-auto-gitops
terraform-state-role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
terraform-plan-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-apply-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-version: 1.5.2
aws-region: us-east-2
enable-infracost: false
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| atmos-config-path | The path to the atmos.yaml file | N/A | true |
| atmos-include-dependents | Whether to include dependents of affected stacks in the output | false | false |
| atmos-include-settings | Include the `settings` section for each affected component | false | false |
| atmos-include-spacelift-admin-stacks | Whether to include the Spacelift admin stacks of affected stacks in the output | false | false |
| atmos-pro-base-url | The base URL of Atmos Pro | https://atmos-pro.com | false |
| atmos-pro-token | The API token to allow Atmos Pro to upload affected stacks | | false |
| atmos-pro-upload | Whether to upload affected stacks directly to Atmos Pro | false | false |
| atmos-stack | The stack to operate on | | false |
| atmos-version | The version of atmos to install | >= 1.99.0 | false |
| base-ref | The base ref to checkout. If not provided, the head default branch is used. | N/A | false |
| default-branch | The default branch to use for the base ref. | $\{\{ github.event.repository.default\_branch \}\} | false |
| head-ref | The head ref to checkout. If not provided, the head default branch is used. | $\{\{ github.sha \}\} | false |
| identity | Atmos auth identity | | false |
| install-atmos | Whether to install atmos | true | false |
| install-jq | Whether to install jq | false | false |
| jq-force | Whether to force the installation of jq | true | false |
| jq-version | The version of jq to install if install-jq is true | 1.7 | false |
| nested-matrices-count | Number of nested matrices that should be returned as the output (from 1 to 3) | 2 | false |
| process-functions | Whether to process atmos functions | true | false |
| process-templates | Whether to process atmos templates | true | false |
| skip-atmos-functions | Skip all Atmos functions such as terraform.output | false | false |
| skip-checkout | Disable actions/checkout for head-ref and base-ref. Useful for when the checkout happens in a previous step and files are modified outside of git through other actions | false | false |
## Outputs
| Name | Description |
|------|-------------|
| affected | The affected stacks |
| has-affected-stacks | Whether there are affected stacks |
| matrix | The affected stacks as matrix structure suitable for extending matrix size workaround (see README) |
---
## atmos-affected-trigger-spacelift
# GitHub Action: `atmos-affected-trigger-spacelift`
GitHub Action for Triggering Affected Spacelift Stacks
## Introduction
This repo contains a GitHub Action that determines the affected [Atmos](https://atmos.tools) stacks for a PR, then
creates a comment on the PR which Spacelift can use to trigger the corresponding stacks via a push policy.
Optionally, you can use the `spacectl` trigger method, which uses the `spacectl` CLI to trigger the corresponding
Spacelift stacks directly rather than via comment/push policy.
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Atmos Affected Stacks Trigger Spacelift (via comment)
uses: cloudposse/github-action-atmos-affected-trigger-spacelift@main
id: example
with:
atmos-config-path: ./rootfs/usr/local/etc/atmos
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Atmos Affected Stacks Trigger Spacelift (direct)
uses: cloudposse/github-action-atmos-affected-trigger-spacelift@main
id: example
with:
atmos-config-path: ./rootfs/usr/local/etc/atmos
github-token: ${{ secrets.GITHUB_TOKEN }}
trigger-method: spacectl
spacelift-endpoint: https://unicorn.app.spacelift.io
spacelift-api-key-id: ${{ secrets.SPACELIFT_API_KEY_ID }}
spacelift-api-key-secret: ${{ secrets.SPACELIFT_API_KEY_SECRET }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| atmos-config-path | A path to the folder where atmos.yaml is located | . | false |
| atmos-include-dependents | Whether to include dependents of affected stacks in the output | false | false |
| atmos-include-settings | Include the settings section for each affected component | false | false |
| atmos-include-spacelift-admin-stacks | Whether to include the Spacelift admin stacks of affected stacks | true | false |
| atmos-stack | The stack to operate on | | false |
| atmos-version | The version of atmos to install if install-atmos is true | latest | false |
| base-ref | The base ref to checkout. If not provided, the head default branch is used. | N/A | false |
| default-branch | The default branch to use for the base ref. | $\{\{ github.event.repository.default\_branch \}\} | false |
| deploy | A flag to indicate if a deployment should be triggered. If false, a preview will be triggered. | false | false |
| github-token | A GitHub token for running the spacelift-io/setup-spacectl action | N/A | true |
| head-ref | The head ref to checkout. If not provided, the head default branch is used. | N/A | false |
| identity | Atmos auth identity | | false |
| install-atmos | Whether to install atmos | true | false |
| install-jq | Whether to install jq | false | false |
| install-spacectl | Whether to install spacectl | true | false |
| jq-force | Whether to force the installation of jq | true | false |
| jq-version | The version of jq to install if install-jq is true | 1.7 | false |
| nested-matrices-count | Number of nested matrices that should be returned as the output (from 1 to 3) | 2 | false |
| skip-atmos-functions | Skip all Atmos functions such as terraform.output in `atmos describe affected` | false | false |
| skip-checkout | Disable actions/checkout for head-ref and base-ref. Useful for when the checkout happens in a previous step and files are modified outside of git through other actions | false | false |
| skip-process-functions | Skip processing Atmos functions in `atmos describe affected` | false | false |
| skip-process-templates | Skip processing Atmos templates in `atmos describe affected` | false | false |
| spacectl-version | The version of spacectl to install if install-spacectl is true | latest | false |
| spacelift-api-key-id | The SPACELIFT\_API\_KEY\_ID | N/A | false |
| spacelift-api-key-secret | The SPACELIFT\_API\_KEY\_SECRET | N/A | false |
| spacelift-endpoint | The Spacelift endpoint. For example, https://unicorn.app.spacelift.io | N/A | false |
| trigger-method | The method to use to trigger the Spacelift stack. Valid values are `comment` and `spacectl` | comment | false |
## Outputs
| Name | Description |
|------|-------------|
| affected | The affected stacks |
| has-affected-stacks | Whether there are affected stacks |
| matrix | The affected stacks as matrix structure suitable for extending matrix size workaround |
---
## atmos-component-updater
# GitHub Action: `atmos-component-updater`
This GitHub Action can be used as a workflow to automatically update components via Pull Requests in your infrastructure repository, according to the versions in the component sources.
## Introduction
This GitHub Action can be used as a workflow to automatically update components via Pull Requests in your infrastructure repository, according to the versions in the component sources.
### Key Features:
- **Selective Component Processing:** Configure the action to `exclude` or `include` specific components using wildcards, ensuring that only relevant updates are processed.
- **PR Management:** Limit the number of PRs opened at a time, making it easier to manage large-scale updates without overwhelming the system. Automatically close old component-update PRs, so they don't pile up.
- **Material Changes Focus:** Automatically open pull requests only for components with significant changes, skipping minor updates to `component.yaml` files to reduce unnecessary PRs and maintain a streamlined system.
- **Informative PRs:** Link PRs to release notes for new components, providing easy access to relevant information, and use consistent naming for easy tracking.
- **Scheduled Updates:** Run the action on a cron schedule tailored to your organization's needs, ensuring regular and efficient updates.
## Usage
### Prerequisites
When used in a workflow, this GitHub Action needs permissions to create/update branches and open/close pull requests, so an access token must be passed.
It can be done in two ways:
- create a dedicated Personal Access Token (PAT)
- use [`GITHUB_TOKEN`](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret)
If you would like to use `GITHUB_TOKEN`, make sure to set permissions in the workflow as follows:
```yaml
permissions:
contents: write
issues: write
pull-requests: write
```
Also, make sure that `Allow GitHub Actions to create and approve pull requests` is enabled at both the organization and repository levels:
- `https://github.com/organizations/YOUR-ORG/settings/actions`
- `https://github.com/YOUR-ORG/YOUR-REPO/settings/actions`
### Workflow example
```yaml
name: "atmos-components"
on:
workflow_dispatch: {}
schedule:
- cron: '0 8 * * 1' # Execute every week on Monday at 08:00
permissions:
contents: write
pull-requests: write
jobs:
update:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Update Atmos Components
uses: cloudposse/github-action-atmos-component-updater@v2
with:
github-access-token: ${{ secrets.GITHUB_TOKEN }}
max-number-of-prs: 5
include: |
aws-*
eks/*
bastion
exclude: aws-sso,aws-saml
```
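The `include` and `exclude` inputs take comma- or newline-separated wildcard patterns, as in the workflow above. A rough sketch of the selection logic, assuming glob-style matching (illustrative only; the action's internal matching may differ):

```python
from fnmatch import fnmatch

def select_components(components, include, exclude):
    """Keep components matching at least one include pattern
    and no exclude pattern."""
    return [
        c for c in components
        if any(fnmatch(c, p) for p in include)
        and not any(fnmatch(c, p) for p in exclude)
    ]

components = ["aws-sso", "aws-backup", "eks/cluster", "bastion", "vpc"]
selected = select_components(
    components,
    include=["aws-*", "eks/*", "bastion"],
    exclude=["aws-sso", "aws-saml"],
)
# selected -> ["aws-backup", "eks/cluster", "bastion"]
```

Here `vpc` is dropped because no include pattern matches it, and `aws-sso` is dropped because an exclude pattern matches.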
### Using a Custom Atmos CLI Config Path (`atmos.yaml`)
If your [`atmos.yaml` file](https://atmos.tools/cli/configuration) is not located in the root of the infrastructure repository, you can specify the path to it using [`ATMOS_CLI_CONFIG_PATH` env variable](https://atmos.tools/cli/configuration/#environment-variables).
```yaml
# ...
- name: Update Atmos Components
uses: cloudposse/github-action-atmos-component-updater@v2
env:
# Directory containing the `atmos.yaml` file
ATMOS_CLI_CONFIG_PATH: ${{ github.workspace }}/rootfs/usr/local/etc/atmos/
with:
github-access-token: ${{ secrets.GITHUB_TOKEN }}
max-number-of-prs: 5
```
### Customize Pull Request labels, title and body
```yaml
# ...
- name: Update Atmos Components
uses: cloudposse/github-action-atmos-component-updater@v2
with:
github-access-token: ${{ secrets.GITHUB_TOKEN }}
max-number-of-prs: 5
pr-title: 'Update Atmos Component \`{{ component_name }}\` to {{ new_version }}'
pr-body: |
## what
Component \`{{ component_name }}\` was updated [{{ old_version }}]({{ old_version_link }}) → [{{ new_version }}]({{ new_version_link }}).
## references
- [{{ source_name }}]({{ source_link }})
pr-labels: |
component-update
automated
atmos
```
**IMPORTANT:** Backtick symbols must be escaped in the GitHub Action parameters; otherwise GitHub evaluates whatever is inside the backticks and renders it as an empty string.
#### For `title` template these placeholders are available:
- `component_name`
- `source_name`
- `old_version`
- `new_version`
#### For `body` template these placeholders are available:
- `component_name`
- `source_name`
- `source_link`
- `old_version`
- `new_version`
- `old_version_link`
- `new_version_link`
- `old_component_release_link`
- `new_component_release_link`
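The templates are Jinja2-formatted, so each `{{ placeholder }}` is replaced with its value at render time. A simplified stand-in for that substitution using only the standard library (the action itself uses Jinja2; this only illustrates how the placeholders are filled):

```python
import re

def render(template: str, context: dict) -> str:
    """Substitute {{ placeholder }} tokens with values from the context."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context[m.group(1)]),
        template,
    )

title = render(
    "Update Atmos Component `{{ component_name }}` to {{ new_version }}",
    {"component_name": "vpc", "new_version": "1.2.0"},
)
# title -> "Update Atmos Component `vpc` to 1.2.0"
```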
## FAQ
### The action cannot find any components
You may see that the action returns zero components:
```console
[06-03-2024 17:53:47] INFO Found 0 components
[]
```
This commonly happens when the workflow has not checked out the repository. Add the following step before calling this action:
```yaml
- name: Checkout
uses: actions/checkout@v4
```
### The action cannot find the `component.yaml` file
You may see the action fail to find the `component.yaml` file for a given component as such:
```console
FileNotFoundError: [Errno 2] No such file or directory: 'components/terraform/account-map/component.yaml'
```
This is likely related to a missing or invalid `atmos.yaml` configuration file. Set `ATMOS_CLI_CONFIG_PATH` to the path to your Atmos configuration file.
```yaml
env:
ATMOS_CLI_CONFIG_PATH: ${{ github.workspace }}/rootfs/usr/local/etc/atmos/
```
### The action does not have permission to create Pull Requests
Your action may fail with the following message:
```console
github.GithubException.GithubException: 403 {"message": "GitHub Actions is not permitted to create or approve pull requests.", "documentation_url": "https://docs.github.com/rest/pulls/pulls#create-a-pull-request"}
```
In order to create Pull Requests in your repository, the workflow needs the following permissions:
```yaml
permissions:
contents: write
issues: write
pull-requests: write
```
_And_ you need to allow GitHub Actions to create and approve pull requests in both the GitHub Organization and Repository:
1. `https://github.com/organizations/YOUR-ORG/settings/actions` > `Allow GitHub Actions to create and approve pull requests`
2. `https://github.com/YOUR-ORG/YOUR-REPO/settings/actions` > `Allow GitHub Actions to create and approve pull requests`
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| atmos-version | Atmos version to use for vendoring. Default 'latest' | latest | false |
| dry-run | Skip creation of remote branches and pull requests. Only print the list of affected components into the file defined in 'outputs.affected-components-file' | false | false |
| exclude | Comma or new line separated list of component names to exclude. For example: 'vpc,eks/\*,rds'. By default no components are excluded. Default '' | | false |
| github-access-token | GitHub Token used to perform git and GitHub operations | $\{\{ github.token \}\} | false |
| include | Comma or new line separated list of component names to include. For example: 'vpc,eks/\*,rds'. By default all components are included. Default '\*' | \* | false |
| infra-repo-dir | Path to the infra repository. Default '/github/workspace/' | /github/workspace/ | false |
| infra-terraform-dirs | Comma or new line separated list of terraform directories in the infra repo. For example 'components/terraform,components/terraform-old'. Default 'components/terraform' | components/terraform | false |
| log-level | Log level for this action. Default 'INFO' | INFO | false |
| max-number-of-prs | Number of PRs to create. Maximum is 10. | 10 | false |
| pr-body-template | A string representing a Jinja2 formatted template to be used as the content of a Pull Request (PR) body. If not set, the template from `src/templates/pr\_body.j2.md` will be used | | false |
| pr-labels | Comma or new line separated list of labels that will be added on PR creation. Default: `component-update` | component-update | false |
| pr-title-template | A string representing a Jinja2 formatted template to be used as the content of a Pull Request (PR) title. If not set, the template from `src/templates/pr\_title.j2.md` will be used | | false |
| vendoring-enabled | Do not perform 'atmos vendor component-name' on components that weren't vendored | true | false |
## Outputs
| Name | Description |
|------|-------------|
| affected | The affected components |
| has-affected-stacks | Whether there are affected components |
---
## atmos-get-setting
# GitHub Action: `atmos-get-setting`
GitHub Action to retrieve a setting from [atmos](https://github.com/cloudposse/atmos) configuration.
## Introduction
GitHub Action to retrieve settings from [atmos](https://github.com/cloudposse/atmos) configuration. There are two ways
to use this action. The first is to retrieve a single setting and to get its value returned via the `value` output.
The second is to retrieve multiple settings as an object returned via the `settings` output.
## Usage
```yaml
# Example stacks/dev.yaml
components:
terraform:
foo:
settings:
roleArn: arn:aws:iam::000000000000:role/MyRole
secretsArn: arn:aws:secretsmanager:us-east-1:000000000000:secret:MySecret-PlMes3
vars:
foo: bar
```
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
# The following example will return a single setting value of
# `arn:aws:secretsmanager:us-east-1:000000000000:secret:MySecret-PlMes3` in the `value` output:
- name: Get Atmos Single Setting for Secret ARN
uses: cloudposse/github-action-atmos-get-setting@main
id: example
with:
component: foo
stack: core-ue1-dev
settings-path: settings.secrets-arn
# The following example will return an object with the following structure in the `settings` output:
# {"secretsArn":"arn:aws:secretsmanager:us-east-1:000000000000:secret:MySecret-PlMes3", "roleArn":"arn:aws:iam::000000000000:role/MyRole"}
- name: Get Atmos Multiple Settings
uses: cloudposse/github-action-atmos-get-setting@main
id: example
with:
settings: |
- component: foo
stack: core-ue1-dev
settingsPath: settings.secrets-arn
outputPath: secretsArn
- component: foo
stack: core-ue1-dev
settingsPath: settings.role-arn
outputPath: roleArn
```
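Conceptually, `settings-path` (or `settingsPath`) is a dotted path resolved against the component's configuration in the given stack. A minimal sketch of that lookup (the config dict and path here are illustrative, patterned after the stack manifest above):

```python
from functools import reduce

def get_setting(config: dict, path: str):
    """Walk a dotted path (e.g. "settings.roleArn") through the
    component's stack configuration."""
    return reduce(lambda node, key: node[key], path.split("."), config)

# Hypothetical component config, mirroring the stack manifest above
component_config = {
    "settings": {"roleArn": "arn:aws:iam::000000000000:role/MyRole"},
    "vars": {"foo": "bar"},
}
value = get_setting(component_config, "settings.roleArn")
```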
## Migrating from `v0` to `v1`
Starting from `v1` the action is no longer restricted to retrieving the component config from only the `settings` section.
If you want the same behavior in `v1` as in `v0`, you should add the `settings.` prefix to the value of the `settings-path` variable.
For example, in `v1` you would provide `settings.secrets-arn` as the value to the `settings-path`
```yaml
- name: Get Atmos Setting for Secret ARN
uses: cloudposse/github-action-atmos-get-setting@v1
id: example
with:
component: foo
stack: core-ue1-dev
settings-path: settings.secrets-arn
```
Which would provide the same output as passing only `secrets-arn` in `v0`
```yaml
- name: Get Atmos Setting for Secret ARN
uses: cloudposse/github-action-atmos-get-setting@v0
id: example
with:
component: foo
stack: core-ue1-dev
settings-path: secrets-arn
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| component | The atmos component to extract the settings for. | N/A | false |
| process-functions | Enable/disable processing of Terraform functions in Atmos stacks manifests. | true | false |
| process-templates | Enable/disable processing of Go templates in Atmos stacks manifests. | true | false |
| settings | The settings to extract. | N/A | false |
| settings-path | The settings path using JSONPath expressions. | N/A | false |
| stack | The atmos stack to extract the settings for. | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
| settings | The settings values when multiple settings are returned. |
| value | The value of the setting when a single setting is returned. |
---
## atmos-terraform-apply
# GitHub Action: `atmos-terraform-apply`
This GitHub Action runs Terraform apply for a single Atmos-supported component, using a planfile stored in S3 and DynamoDB.
## Introduction
This GitHub Action runs Terraform apply for a single Atmos-supported component, using a planfile stored in S3 and DynamoDB.
Before running this action, first create and store a planfile with the companion action, [github-action-atmos-terraform-plan](https://github.com/cloudposse/github-action-atmos-terraform-plan).
For more, see [Atmos GitHub Action Integrations](https://atmos.tools/integrations/github-actions/atmos-terraform-apply)
## Usage
### Prerequisites
This GitHub Action requires AWS access for two different purposes. The action first pulls a Terraform planfile from an S3 bucket, with metadata from a DynamoDB table, using one role.
Then the action will run `terraform apply` against that component with another role. We recommend configuring
[OpenID Connect with AWS](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services)
to allow GitHub to assume roles in AWS and then deploying both a Terraform Apply role and a Terraform State role.
For Cloud Posse documentation on setting up GitHub OIDC, see our [`github-oidc-provider` component](https://docs.cloudposse.com/components/library/aws/github-oidc-provider/).
In order to retrieve Terraform Plan Files (not to be confused with Terraform State files, e.g. `tfstate`), we configure an S3 Bucket to store plan files and a DynamoDB table to track plan metadata. Both need to be deployed before running
this action. For more on setting up those components, see the [`gitops` component](https://docs.cloudposse.com/components/library/aws/gitops/). This action will then use the [github-action-terraform-plan-storage](https://github.com/cloudposse/github-action-terraform-plan-storage) action to update these resources.
### Config
:::important
**Please note!** This GitHub Action only works with `atmos >= 1.186.0`.
If you are using `atmos >= 1.158.0, < 1.186.0` please use `v4` version of this action.
If you are using `atmos >= 1.99.0, < 1.158.0` please use `v3` version of this action.
If you are using `atmos >= 1.63.0, < 1.99.0` please use `v2` version of this action.
If you are using `atmos < 1.63.0` please use `v1` version of this action.
:::
The action expects the atmos configuration file `atmos.yaml` to be present in the repository.
The action supports AWS and Azure to store Terraform plan files.
You can read more about plan storage in the [cloudposse/github-action-terraform-plan-storage](https://github.com/cloudposse/github-action-terraform-plan-storage?tab=readme-ov-file#aws-default) documentation.
Depending on the cloud provider, the following fields should be set in `atmos.yaml`:
#### AWS
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
# Set `apply` empty if you don't want to assume IAM role before terraform apply
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
#### Azure
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
plan-repository-type: azureblob
blob-account-name: tfplans
blob-container-name: plans
metadata-repository-type: cosmos
cosmos-container-name: terraform-plan-storage
cosmos-database-name: terraform-plan-storage
cosmos-endpoint: "https://my-cosmo-account.documents.azure.com:443/"
# We remove the `role` section as it is AWS specific
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
### Stack level configuration
:::important
While it is possible to override `integrations.github.gitops` at the stack level,
default values must still be defined in `atmos.yaml`.
:::
It is possible to override integration settings on a stack level by defining `settings.integrations`.
```yaml
components:
terraform:
foobar:
settings:
integrations:
github:
gitops:
artifact-storage:
bucket: cptest-plat-ue2-auto-gitops
table: cptest-plat-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-plat-ue2-auto-gitops-gha
role:
# Set `plan` empty if you don't want to assume IAM role before terraform plan
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
```
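Stack-level `settings.integrations` values are overlaid on the defaults from `atmos.yaml`, with stack-level keys taking precedence. A rough deep-merge sketch of that behavior (illustrative only; Atmos performs its own merge logic):

```python
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay stack-level overrides on atmos.yaml defaults."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"artifact-storage": {"region": "us-east-2", "bucket": "core-gitops"}}
overrides = {"artifact-storage": {"bucket": "plat-gitops"}}
merged = deep_merge(defaults, overrides)
# region stays from the defaults; bucket comes from the stack override
```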
### Support OpenTofu
This action supports [OpenTofu](https://opentofu.org/).
:::important
**Please note!** OpenTofu is supported by Atmos `>= 1.73.0`.
For details, read the [Atmos OpenTofu configuration documentation](https://atmos.tools/core-concepts/projects/configuration/opentofu/).
:::
To enable OpenTofu, add the following settings to `atmos.yaml`:
* Set the `opentofu-version` in the `atmos.yaml` to the desired version
* Set `components.terraform.command` to `tofu`
#### Example
```yaml
components:
terraform:
command: tofu
...
integrations:
github:
gitops:
opentofu-version: 1.7.3
...
```
### Plan Diff mode
The action provides a `plan-diff` mode that compares the newly generated plan against the previously stored plan and fails the workflow if any differences are detected.
To enable plan diff mode, set the `plan-diff` input to `true`.
:::important
When plan-diff is disabled, the action will apply the stored plan without re-validating it.
This may result in unintended changes if the underlying infrastructure has been modified between the plan and apply steps.
Additionally, stored plans are single-use: even if an apply operation fails for any reason, the plan becomes outdated and cannot be reused.
:::
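At its core, plan-diff re-plans at apply time and fails the workflow when the fresh plan no longer matches the stored one. A toy sketch of that comparison (the real check is `atmos terraform plan-diff`, which operates on planfiles, not dictionaries):

```python
def plans_differ(stored_plan: dict, fresh_plan: dict) -> bool:
    # Any difference means the infrastructure (or code) drifted
    # between the plan step and the apply step.
    return stored_plan != fresh_plan

stored = {"aws_s3_bucket.this": {"versioning": True}}
fresh = {"aws_s3_bucket.this": {"versioning": False}}
print(plans_differ(stored, fresh))  # -> True (workflow would fail)
```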
### Workflow example
In this example, the action is triggered by a manual workflow dispatch or when a pull request targeting the `main` branch is closed. The workflow grants the permissions GitHub needs to assume roles in AWS. Within the `apply` job, the component and stack are hardcoded (`foobar` and `plat-ue2-sandbox`); in practice, these are usually derived from another action.
:::tip
We recommend combining this action with the [`affected-stacks`](https://atmos.tools/integrations/github-actions/affected-stacks) GitHub Action inside a matrix to plan all affected stacks in parallel.
:::
```yaml
name: "atmos-terraform-apply"
on:
workflow_dispatch:
pull_request:
types:
- closed
branches:
- main
# These permissions are required for GitHub to assume roles in AWS
permissions:
id-token: write
contents: read
jobs:
apply:
runs-on: ubuntu-latest
steps:
- name: Terraform Apply
uses: cloudposse/github-action-atmos-terraform-apply@v5
with:
component: "foobar"
stack: "plat-ue2-sandbox"
atmos-config-path: ./rootfs/usr/local/etc/atmos/
```
### Migrating from `v4` to `v5`
The notable changes in `v5` are:
- `v5` works only with `atmos >= 1.186.0`
- `v5` uses `atmos terraform plan-diff` to ensure changes to be applied are consistent with the stored (approved by the user) planfile.
### Migrating from `v3` to `v4`
The notable changes in `v4` are:
- `v4` works only with `atmos >= 1.158.0`
- `v4` supports atmos `templates` and `functions`
### Migrating from `v2` to `v3`
The notable changes in `v3` are:
- `v3` works only with `atmos >= 1.99.0`
- `v3` supports Azure plan and metadata storage
- `v3` supports stack-level integration gitops settings
- `v3` allows skipping the internal checkout with the `skip-checkout` input
The only required migration step is updating the atmos version to `>= 1.99.0`
### Migrating from `v1` to `v2`
The notable changes in `v2` are:
- `v2` works only with `atmos >= 1.63.0`
- `v2` drops the `install-terraform` input because terraform is not required for the affected-stacks call
- `v2` drops `atmos-gitops-config-path` input and the `./.github/config/atmos-gitops.yaml` config file. Now you have to use GitHub Actions environment variables to specify the location of the `atmos.yaml`.
The following configuration fields have now moved to GitHub Action inputs with the same names:
| name |
|-------------------------|
| `atmos-version` |
| `atmos-config-path` |
The following configuration fields moved to the `atmos.yaml` configuration file.
| name | YAML path in `atmos.yaml` |
|--------------------------|-------------------------------------------------|
| `aws-region` | `integrations.github.gitops.artifact-storage.region` |
| `terraform-state-bucket` | `integrations.github.gitops.artifact-storage.bucket` |
| `terraform-state-table` | `integrations.github.gitops.artifact-storage.table` |
| `terraform-state-role` | `integrations.github.gitops.artifact-storage.role` |
| `terraform-plan-role` | `integrations.github.gitops.role.plan` |
| `terraform-apply-role` | `integrations.github.gitops.role.apply` |
| `terraform-version` | `integrations.github.gitops.terraform-version` |
| `enable-infracost` | `integrations.github.gitops.infracost-enabled` |
| `sort-by` | `integrations.github.gitops.matrix.sort-by` |
| `group-by` | `integrations.github.gitops.matrix.group-by` |
For example, to migrate from `v1` to `v2`, you should have something similar to the following in your `atmos.yaml`:
`./.github/config/atmos.yaml`
```yaml
# ... your existing configuration
integrations:
github:
gitops:
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
`.github/workflows/main.yaml`
```yaml
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-apply@v2
with:
component: "foobar"
stack: "plat-ue2-sandbox"
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: 1.63.0
```
This corresponds to the `v1` configuration (deprecated) below.
The `v1` configuration file `./.github/config/atmos-gitops.yaml` looked like this:
```yaml
atmos-version: 1.45.3
atmos-config-path: ./rootfs/usr/local/etc/atmos/
terraform-state-bucket: cptest-core-ue2-auto-gitops
terraform-state-table: cptest-core-ue2-auto-gitops
terraform-state-role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
terraform-plan-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-apply-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-version: 1.5.2
aws-region: us-east-2
enable-infracost: false
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
And the `v1` GitHub Action Workflow looked like this.
`.github/workflows/main.yaml`
```yaml
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-apply@v1
with:
component: "foobar"
stack: "plat-ue2-sandbox"
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
```
### Migrating from `v0` to `v1`
1. `v1` drops the `component-path` variable and instead fetches it directly from the [`atmos.yaml` file](https://atmos.tools/cli/configuration/) automatically. Simply remove the `component-path` argument from your invocations of the `cloudposse/github-action-atmos-terraform-apply` action.
2. `v1` moves most of the `inputs` to the Atmos GitOps config path `./.github/config/atmos-gitops.yaml`. Simply create this file, transfer your settings to it, then remove the corresponding arguments from your invocations of the `cloudposse/github-action-atmos-terraform-apply` action.
| name |
|--------------------------|
| `atmos-version` |
| `atmos-config-path` |
| `terraform-state-bucket` |
| `terraform-state-table` |
| `terraform-state-role` |
| `terraform-plan-role` |
| `terraform-apply-role` |
| `terraform-version` |
| `aws-region` |
| `enable-infracost` |
If you want the same behavior in `v1` as in `v0`, create the config file `./.github/config/atmos-gitops.yaml` with the same variables as the `v0` inputs.
```yaml
- name: Terraform apply
uses: cloudposse/github-action-atmos-terraform-apply@v1
with:
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
component: "foobar"
stack: "plat-ue2-sandbox"
```
Which would produce the same behavior as in `v0`, doing this:
```yaml
- name: Terraform apply
uses: cloudposse/github-action-atmos-terraform-apply@v0
with:
component: "foobar"
stack: "plat-ue2-sandbox"
component-path: "components/terraform/s3-bucket"
terraform-apply-role: "arn:aws:iam::111111111111:role/acme-core-gbl-identity-gitops"
terraform-state-bucket: "acme-core-ue2-auto-gitops"
terraform-state-role: "arn:aws:iam::999999999999:role/acme-core-ue2-auto-gitops-gha"
terraform-state-table: "acme-core-ue2-auto-gitops"
aws-region: "us-east-2"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| atmos-config-path | The path to the atmos.yaml file | N/A | true |
| atmos-version | The version of atmos to install | >= 1.186.0 | false |
| branding-logo-image | Branding logo image url | https://cloudposse.com/logo-300x69.svg | false |
| branding-logo-url | Branding logo url | https://cloudposse.com/ | false |
| component | The name of the component to apply. | N/A | true |
| debug | Enable action debug mode. Default: 'false' | false | false |
| identity | Atmos auth identity | | false |
| infracost-api-key | Infracost API key | N/A | false |
| plan-storage | Enable plan storage. Default: 'true'. Set to 'false' to disable plan storage. | true | false |
| sha | Commit SHA to apply. Default: github.sha | $\{\{ github.event.pull\_request.head.sha \}\} | true |
| skip-checkout | Disable actions/checkout. Useful for when the checkout happens in a previous step and files are modified outside of git through other actions | false | false |
| skip-plandiff | Skip plan diff validation. Default: 'false'. Set to 'true' to skip plan prepare and diff validation. | false | false |
| stack | The stack name for the given component. | N/A | true |
| token | Used to pull node distributions for Atmos from Cloud Posse's GitHub repository. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. | $\{\{ github.server\_url == 'https://github.com' && github.token \|\| '' \}\} | false |
## Outputs
| Name | Description |
|------|-------------|
| status | Apply Status. Either 'succeeded' or 'failed' |
---
## atmos-terraform-drift-detection
# GitHub Action: `atmos-terraform-drift-detection`
This GitHub Action detects Terraform drift across Atmos components
## Introduction
This GitHub Action detects Terraform drift.
It creates or updates a GitHub issue whenever drift is detected.
This action is expected to run in a workflow on a schedule.
There is another companion action [github-action-atmos-terraform-drift-remediation](https://github.com/cloudposse/github-action-atmos-terraform-drift-remediation).
## Usage
### Workflow example
```yaml
name: 👽 Atmos Terraform Drift Detection
on:
schedule:
- cron: "0 * * * *"
permissions:
id-token: write
contents: write
issues: write
jobs:
select-components:
runs-on: ubuntu-latest
name: Select Components
outputs:
matrix: ${{ steps.components.outputs.matrix }}
steps:
- name: Selected Components
id: components
uses: cloudposse/github-action-atmos-terraform-select-components@v0
with:
jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
debug: ${{ env.DEBUG_ENABLED }}
plan-atmos-components:
needs:
- select-components
runs-on: ubuntu-latest
if: ${{ needs.select-components.outputs.matrix != '{"include":[]}' }}
strategy:
fail-fast: false # Don't fail fast to avoid locking TF State
matrix: ${{ fromJson(needs.select-components.outputs.matrix) }}
name: ${{ matrix.stack_slug }}
env:
GITHUB_TOKEN: "${{ github.token }}"
steps:
- name: Plan Atmos Component
id: atmos-plan
uses: cloudposse/github-action-atmos-terraform-plan@v0
with:
component: ${{ matrix.component }}
stack: ${{ matrix.stack }}
component-path: ${{ matrix.component_path }}
drift-detection-mode-enabled: "true"
terraform-plan-role: "arn:aws:iam::111111111111:role/acme-core-gbl-identity-gitops"
terraform-state-bucket: "acme-core-ue2-auto-gitops"
terraform-state-role: "arn:aws:iam::999999999999:role/acme-core-ue2-auto-gitops-gha"
terraform-state-table: "acme-core-ue2-auto-gitops"
aws-region: "us-east-2"
drift-detection:
needs:
- plan-atmos-components
runs-on: ubuntu-latest
steps:
- name: Drift Detection
uses: cloudposse/github-action-atmos-terraform-drift-detection@v0
with:
max-opened-issues: '3'
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| assignee-teams | Comma-separated list of teams to assign issues to. You have to pass github token with `read:org` scope. This is used only when issue is getting created. | | false |
| assignee-users | Comma-separated list of users to assign issues to. This is used only when issue is getting created. | | false |
| labels | Comma-separated list of additional labels to assign issues to. | | false |
| max-opened-issues | Number of open drift detection issues. Use `-1` to open unlimited number of issues. Default: 10 | 10 | false |
| process-all | Process all issues or only the ones that relates to affected stacks. Default: false | false | false |
| token | Used to pull node distributions for Atmos from Cloud Posse's GitHub repository. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. | $\{\{ github.server\_url == 'https://github.com' && github.token \|\| '' \}\} | false |
---
## atmos-terraform-drift-remediation
# GitHub Action: `atmos-terraform-drift-remediation`
This GitHub Action remediates Terraform drift
## Introduction
This action is used for drift remediation.
There is another companion action [github-action-atmos-terraform-drift-detection](https://github.com/cloudposse/github-action-atmos-terraform-drift-detection).
## Usage
### Config
:::important
**Please note!** This GitHub Action only works with `atmos >= 1.158.0`.
If you are using `atmos >= 1.99.0, < 1.158.0`, please use the `v3` version of this action.
If you are using `atmos >= 1.63.0, < 1.99.0`, please use the `v2` version of this action.
If you are using `atmos < 1.63.0`, please use the `v1` version of this action.
:::
The action expects the atmos configuration file `atmos.yaml` to be present in the repository.
The action supports AWS and Azure to store Terraform plan files.
You can read more about plan storage in the [cloudposse/github-action-terraform-plan-storage](https://github.com/cloudposse/github-action-terraform-plan-storage?tab=readme-ov-file#aws-default) documentation.
Depending on the cloud provider, the following fields should be set in the `atmos.yaml`:
#### AWS
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
# Set `apply` empty if you don't want to assume IAM role before terraform apply
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
#### Azure
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
plan-repository-type: azureblob
blob-account-name: tfplans
blob-container-name: plans
metadata-repository-type: cosmos
cosmos-container-name: terraform-plan-storage
cosmos-database-name: terraform-plan-storage
cosmos-endpoint: "https://my-cosmo-account.documents.azure.com:443/"
# We remove the `role` section as it is AWS specific
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
### Stack level configuration
:::important
While it is possible to specify `integrations.github.gitops` at the stack level,
default values must still be defined in `atmos.yaml`.
:::
You can override integration settings at the stack level by defining `settings.integrations`.
```yaml
components:
terraform:
foobar:
settings:
integrations:
github:
gitops:
artifact-storage:
bucket: cptest-plat-ue2-auto-gitops
table: cptest-plat-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-plat-ue2-auto-gitops-gha
role:
# Set `plan` empty if you don't want to assume IAM role before terraform plan
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
```
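Conceptually, these stack-level settings are merged over the defaults from `atmos.yaml`, with the more specific value winning. A minimal Python sketch of that deep-merge behavior (illustrative only; this is not Atmos's actual merge implementation):

```python
# Illustrative sketch of how stack-level `settings.integrations` values
# override the defaults defined under `integrations` in `atmos.yaml`.
# Not Atmos's actual code; it only demonstrates the deep-merge behavior.

def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Return defaults with overrides applied recursively."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {  # from atmos.yaml
    "artifact-storage": {"bucket": "cptest-core-ue2-auto-gitops", "region": "us-east-2"},
    "terraform-version": "1.5.2",
}
stack_overrides = {  # from settings.integrations.github.gitops
    "artifact-storage": {"bucket": "cptest-plat-ue2-auto-gitops"},
}

resolved = deep_merge(defaults, stack_overrides)
print(resolved["artifact-storage"]["bucket"])  # the stack-level bucket wins
print(resolved["artifact-storage"]["region"])  # the region falls back to the default
```

Keys not overridden at the stack level (like `region` and `terraform-version` here) keep their `atmos.yaml` defaults, which is why those defaults are required.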
### Support OpenTofu
This action supports [OpenTofu](https://opentofu.org/).
:::important
**Please note!** OpenTofu is supported by Atmos `>= 1.73.0`.
For details, see the [Atmos OpenTofu configuration documentation](https://atmos.tools/core-concepts/projects/configuration/opentofu/)
:::
To enable OpenTofu, add the following settings to `atmos.yaml`:
* Set the `opentofu-version` in the `atmos.yaml` to the desired version
* Set `components.terraform.command` to `tofu`
#### Example
```yaml
components:
terraform:
command: tofu
...
integrations:
github:
gitops:
opentofu-version: 1.7.3
...
```
### Workflow example
In this example, drift is remediated when a user adds the `apply` label to an issue.
```yaml
name: 👽 Atmos Terraform Drift Remediation
run-name: 👽 Atmos Terraform Drift Remediation
on:
issues:
types:
- labeled
- closed
permissions:
id-token: write
contents: read
jobs:
remediate-drift:
runs-on: ubuntu-latest
name: Remediate Drift
if: |
github.event.action == 'labeled' &&
contains(join(github.event.issue.labels.*.name, ','), 'apply')
steps:
- name: Remediate Drift
uses: cloudposse/github-action-atmos-terraform-drift-remediation@v1
with:
issue-number: ${{ github.event.issue.number }}
action: remediate
atmos-config-path: ./rootfs/usr/local/etc/atmos/
discard-drift:
runs-on: ubuntu-latest
name: Discard Drift
if: |
github.event.action == 'closed' &&
!contains(join(github.event.issue.labels.*.name, ','), 'remediated')
steps:
- name: Discard Drift
uses: cloudposse/github-action-atmos-terraform-drift-remediation@v1
with:
issue-number: ${{ github.event.issue.number }}
action: discard
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
```
### Migrating from `v3` to `v4`
The notable changes in `v4` are:
- `v4` works only with `atmos >= 1.158.0`
- `v4` supports Atmos `templates` and `functions`
### Migrating from `v2` to `v3`
The notable changes in `v3` are:
- `v3` works only with `atmos >= 1.99.0`
- `v3` uses `cloudposse/github-action-atmos-terraform-apply@v3`
- `v3` supports stack-level integration GitOps settings
- `v3` allows skipping the internal checkout with the `skip-checkout` input
The only required migration step is updating the atmos version to `>= 1.99.0`.
### Migrating from `v1` to `v2`
The notable changes in `v2` are:
- `v2` works only with `atmos >= 1.63.0`
- `v2` drops the `install-terraform` input because Terraform is not required for the affected-stacks call
- `v2` drops the `atmos-gitops-config-path` input and the `./.github/config/atmos-gitops.yaml` config file. Now you have to use GitHub Actions environment variables to specify the location of the `atmos.yaml`.
The following configuration fields have moved to GitHub Action inputs with the same names:
| name |
|-------------------------|
| `atmos-version` |
| `atmos-config-path` |
The following configuration fields have moved to the `atmos.yaml` configuration file:
| name | YAML path in `atmos.yaml` |
|--------------------------|-------------------------------------------------|
| `aws-region` | `integrations.github.gitops.artifact-storage.region` |
| `terraform-state-bucket` | `integrations.github.gitops.artifact-storage.bucket` |
| `terraform-state-table` | `integrations.github.gitops.artifact-storage.table` |
| `terraform-state-role` | `integrations.github.gitops.artifact-storage.role` |
| `terraform-plan-role` | `integrations.github.gitops.role.plan` |
| `terraform-apply-role` | `integrations.github.gitops.role.apply` |
| `terraform-version` | `integrations.github.gitops.terraform-version` |
| `enable-infracost` | `integrations.github.gitops.infracost-enabled` |
| `sort-by` | `integrations.github.gitops.matrix.sort-by` |
| `group-by` | `integrations.github.gitops.matrix.group-by` |
For example, to migrate from `v1` to `v2`, you should have something similar to the following in your `atmos.yaml`:
`./.github/config/atmos.yaml`
```yaml
# ... your existing configuration
integrations:
github:
gitops:
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
`.github/workflows/main.yaml`
```yaml
- name: Remediate Drift
uses: cloudposse/github-action-atmos-terraform-drift-remediation@v2
with:
issue-number: ${{ github.event.issue.number }}
action: remediate
atmos-config-path: ./rootfs/usr/local/etc/atmos/
```
This corresponds to the `v1` configuration (deprecated) below.
The `v1` configuration file `./.github/config/atmos-gitops.yaml` looked like this:
```yaml
atmos-version: 1.45.3
atmos-config-path: ./rootfs/usr/local/etc/atmos/
terraform-state-bucket: cptest-core-ue2-auto-gitops
terraform-state-table: cptest-core-ue2-auto-gitops
terraform-state-role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
terraform-plan-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-apply-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-version: 1.5.2
aws-region: us-east-2
enable-infracost: false
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
And the `v1` GitHub Action Workflow looked like this.
`.github/workflows/main.yaml`
```yaml
- name: Remediate Drift
uses: cloudposse/github-action-atmos-terraform-drift-remediation@v1
with:
issue-number: ${{ github.event.issue.number }}
action: remediate
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
```
### Migrating from `v0` to `v1`
1. `v1` drops the `component-path` variable and instead fetches it directly from the [`atmos.yaml` file](https://atmos.tools/cli/configuration/) automatically. Simply remove the `component-path` argument from your invocations of the `cloudposse/github-action-atmos-terraform-drift-remediation` action.
2. `v1` moves most of the `inputs` to the Atmos GitOps config path `./.github/config/atmos-gitops.yaml`. Simply create this file, transfer your settings to it, then remove the corresponding arguments from your invocations of the `cloudposse/github-action-atmos-terraform-drift-remediation` action.
| name |
|--------------------------|
| `atmos-version` |
| `atmos-config-path` |
| `terraform-state-bucket` |
| `terraform-state-table` |
| `terraform-state-role` |
| `terraform-plan-role` |
| `terraform-apply-role` |
| `terraform-version` |
| `aws-region` |
| `enable-infracost` |
If you want the same behavior in `v1` as in `v0`, create the config file `./.github/config/atmos-gitops.yaml` with the same variables as the `v0` inputs.
```yaml
- name: Remediate Drift
uses: cloudposse/github-action-atmos-terraform-drift-remediation@v1
with:
issue-number: ${{ github.event.issue.number }}
action: remediate
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
```
Which would produce the same behavior as in `v0`, doing this:
```yaml
- name: Remediate Drift
uses: cloudposse/github-action-atmos-terraform-drift-remediation@v0
with:
issue-number: ${{ github.event.issue.number }}
action: remediate
atmos-config-path: "${{ github.workspace }}/rootfs/usr/local/etc/atmos/"
terraform-plan-role: "arn:aws:iam::111111111111:role/acme-core-gbl-identity-gitops"
terraform-state-bucket: "acme-core-ue2-auto-gitops"
terraform-state-role: "arn:aws:iam::999999999999:role/acme-core-ue2-auto-gitops-gha"
terraform-state-table: "acme-core-ue2-auto-gitops"
aws-region: "us-east-2"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| action | Drift remediation action. One of ['remediate', 'discard'] | remediate | false |
| atmos-config-path | The path to the atmos.yaml file | N/A | true |
| atmos-version | The version of atmos to install | >= 1.158.0 | false |
| debug | Enable action debug mode. Default: 'false' | false | false |
| issue-number | Issue Number | N/A | true |
| skip-checkout | Disable actions/checkout. Useful for when the checkout happens in a previous step and files are modified outside of git through other actions | false | false |
| token | Used to pull node distributions for Atmos from Cloud Posse's GitHub repository. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. | $\{\{ github.server\_url == 'https://github.com' && github.token \|\| '' \}\} | false |
---
## atmos-terraform-plan
# GitHub Action: `atmos-terraform-plan`
This GitHub Action runs a Terraform plan for a single, Atmos-supported component and saves the given planfile to S3 and DynamoDB.
## Introduction
This GitHub Action runs a Terraform plan for a single, Atmos-supported component and saves the given planfile to S3 and DynamoDB.
After running this action, apply the plan with the companion action, [github-action-atmos-terraform-apply](https://github.com/cloudposse/github-action-atmos-terraform-apply).
## Usage
### Prerequisites
This GitHub Action requires AWS access for two different purposes. This action will attempt to first run `terraform plan` against a given component and
then will use another role to save that given Terraform Plan to an S3 Bucket with metadata in a DynamoDB table. We recommend configuring
[OpenID Connect with AWS](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services)
to allow GitHub to assume roles in AWS and then deploying both a Terraform Plan role and a Terraform State role.
For Cloud Posse documentation on setting up GitHub OIDC, see our [`github-oidc-provider` component](https://docs.cloudposse.com/components/library/aws/github-oidc-provider/).
In order to store Terraform plan files, we configure an S3 bucket for the plan files themselves and a DynamoDB table to track plan metadata. Both must be deployed before running
this action. For more on setting up those components, see the `gitops` component (__documentation pending__). This action then uses the [github-action-terraform-plan-storage](https://github.com/cloudposse/github-action-terraform-plan-storage) action to update these resources.
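As a rough illustration of the storage model, each saved planfile can be addressed by component, stack, and commit SHA, with a matching metadata record pointing at it. The key layout and attribute names below are hypothetical, not the actual scheme used by `github-action-terraform-plan-storage`:

```python
# Hypothetical sketch of plan storage keying: an S3 object key for the
# planfile plus a DynamoDB-style metadata item that tracks it. The key
# layout and attribute names are illustrative assumptions, not the real
# scheme used by github-action-terraform-plan-storage.
from datetime import datetime, timezone

def plan_object_key(component: str, stack: str, sha: str) -> str:
    """Build a deterministic S3 object key for a saved planfile."""
    return f"{stack}/{component}/{sha}.planfile"

def plan_metadata_item(component: str, stack: str, sha: str) -> dict:
    """Build a metadata item recording where the plan lives."""
    return {
        "pk": f"{stack}#{component}",  # partition key: one row per stack+component
        "sk": sha,                     # sort key: the commit the plan was created for
        "plan_key": plan_object_key(component, stack, sha),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

item = plan_metadata_item("foobar", "plat-ue2-sandbox", "abc1234")
print(item["plan_key"])  # plat-ue2-sandbox/foobar/abc1234.planfile
```

Keying by stack, component, and SHA is what lets the apply action later fetch exactly the plan that was reviewed for a given commit.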
### Config
:::important
**Please note!** This GitHub Action only works with `atmos >= 1.158.0`.
If you are using `atmos >= 1.99.0, < 1.158.0`, please use the `v4` version of this action.
If you are using `atmos >= 1.63.0, < 1.99.0`, please use the `v2` or `v3` version of this action.
If you are using `atmos < 1.63.0`, please use the `v1` version of this action.
:::
The action expects the atmos configuration file `atmos.yaml` to be present in the repository.
The action supports AWS and Azure to store Terraform plan files.
You can read more about plan storage in the [cloudposse/github-action-terraform-plan-storage](https://github.com/cloudposse/github-action-terraform-plan-storage?tab=readme-ov-file#aws-default) documentation.
Depending on the cloud provider, the following fields should be set in the `atmos.yaml`:
#### AWS
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
plan-repository-type: s3
metadata-repository-type: dynamo
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
# Set `plan` empty if you don't want to assume IAM role before terraform plan
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
#### Azure
The config should have the following structure:
```yaml
integrations:
github:
gitops:
opentofu-version: 1.7.3
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
plan-repository-type: azureblob
metadata-repository-type: cosmos
blob-account-name: tfplans
blob-container-name: plans
cosmos-container-name: terraform-plan-storage
cosmos-database-name: terraform-plan-storage
cosmos-endpoint: "https://my-cosmo-account.documents.azure.com:443/"
# We remove the `role` section as it is AWS specific
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
### Stack level configuration
:::important
While it is possible to specify `integrations.github.gitops` at the stack level,
default values must still be defined in `atmos.yaml`.
:::
You can override integration settings at the stack level by defining `settings.integrations`.
```yaml
components:
terraform:
foobar:
settings:
integrations:
github:
gitops:
artifact-storage:
bucket: cptest-plat-ue2-auto-gitops
table: cptest-plat-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-plat-ue2-auto-gitops-gha
role:
# Set `plan` empty if you don't want to assume IAM role before terraform plan
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-plat-gbl-identity-gitops
```
### Support OpenTofu
This action supports [OpenTofu](https://opentofu.org/).
:::important
**Please note!** OpenTofu is supported by Atmos `>= 1.73.0`.
For details, see the [Atmos OpenTofu configuration documentation](https://atmos.tools/core-concepts/projects/configuration/opentofu/)
:::
To enable OpenTofu, add the following settings to `atmos.yaml`:
* Set the `opentofu-version` in the `atmos.yaml` to the desired version
* Set `components.terraform.command` to `tofu`
#### Example
```yaml
components:
terraform:
command: tofu
...
integrations:
github:
gitops:
opentofu-version: 1.7.3
...
```
### Workflow example
```yaml
name: "atmos-terraform-plan"
on:
workflow_dispatch: {}
pull_request:
types:
- opened
- synchronize
- reopened
branches:
- main
# These permissions are required for GitHub to assume roles in AWS
permissions:
id-token: write
contents: read
jobs:
plan:
runs-on: ubuntu-latest
steps:
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v2
with:
component: "foobar"
stack: "plat-ue2-sandbox"
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: 1.158.0
```
### Migrating from `v4` to `v5`
The notable changes in `v5` are:
- `v5` works only with `atmos >= 1.158.0`
- `v5` supports Atmos `templates` and `functions`
### Migrating from `v3` to `v4`
The notable changes in `v4` are:
- `v4` works only with `atmos >= 1.99.0`
- `v4` supports Azure plan and metadata storage
- `v4` supports stack-level integration GitOps settings
- `v4` allows skipping the internal checkout with the `skip-checkout` input
- `v4` supports creating summary comments on PRs
The only required migration step is updating the atmos version to `>= 1.99.0`.
### Migrating from `v2` to `v3`
The notable changes in `v3` are:
- `v3` uses `actions/upload-artifact@v4` to share artifacts, so it is not compatible with `cloudposse/github-action-atmos-terraform-drift-detection` `< v2.0.0`
- `v3` supports `.terraform` caching for improved performance
No special migration steps are required.
### Migrating from `v1` to `v2`
The notable changes in `v2` are:
- `v2` works only with `atmos >= 1.63.0`
- `v2` drops the `install-terraform` input because Terraform is not required for the affected-stacks call
- `v2` drops the `atmos-gitops-config-path` input and the `./.github/config/atmos-gitops.yaml` config file. Now you have to use GitHub Actions environment variables to specify the location of the `atmos.yaml`.
The following configuration fields have moved to GitHub Action inputs with the same names:
| name |
|-------------------------|
| `atmos-version` |
| `atmos-config-path` |
The following configuration fields have moved to the `atmos.yaml` configuration file:
| name | YAML path in `atmos.yaml` |
|--------------------------|-------------------------------------------------|
| `aws-region` | `integrations.github.gitops.artifact-storage.region` |
| `terraform-state-bucket` | `integrations.github.gitops.artifact-storage.bucket` |
| `terraform-state-table` | `integrations.github.gitops.artifact-storage.table` |
| `terraform-state-role` | `integrations.github.gitops.artifact-storage.role` |
| `terraform-plan-role` | `integrations.github.gitops.role.plan` |
| `terraform-apply-role` | `integrations.github.gitops.role.apply` |
| `terraform-version` | `integrations.github.gitops.terraform-version` |
| `enable-infracost` | `integrations.github.gitops.infracost-enabled` |
| `sort-by` | `integrations.github.gitops.matrix.sort-by` |
| `group-by` | `integrations.github.gitops.matrix.group-by` |
For example, to migrate from `v1` to `v2`, you should have something similar to the following in your `atmos.yaml`:
`./.github/config/atmos.yaml`
```yaml
# ... your existing configuration
integrations:
github:
gitops:
terraform-version: 1.5.2
infracost-enabled: false
artifact-storage:
region: us-east-2
bucket: cptest-core-ue2-auto-gitops
table: cptest-core-ue2-auto-gitops-plan-storage
role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
role:
plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
matrix:
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
`.github/workflows/main.yaml`
```yaml
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v2
with:
component: "foobar"
stack: "plat-ue2-sandbox"
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: 1.63.0
```
This corresponds to the `v1` configuration (deprecated) below.
The `v1` configuration file `./.github/config/atmos-gitops.yaml` looked like this:
```yaml
atmos-version: 1.45.3
atmos-config-path: ./rootfs/usr/local/etc/atmos/
terraform-state-bucket: cptest-core-ue2-auto-gitops
terraform-state-table: cptest-core-ue2-auto-gitops
terraform-state-role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
terraform-plan-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-apply-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-version: 1.5.2
aws-region: us-east-2
enable-infracost: false
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
And the `v1` GitHub Action Workflow looked like this.
`.github/workflows/main.yaml`
```yaml
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v1
with:
component: "foobar"
stack: "plat-ue2-sandbox"
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
```
### Migrating from `v0` to `v1`
1. `v1` drops the `component-path` variable and instead fetches it directly from the [`atmos.yaml` file](https://atmos.tools/cli/configuration/) automatically. Simply remove the `component-path` argument from your invocations of the `cloudposse/github-action-atmos-terraform-plan` action.
2. `v1` moves most of the `inputs` to the Atmos GitOps config path `./.github/config/atmos-gitops.yaml`. Simply create this file, transfer your settings to it, then remove the corresponding arguments from your invocations of the `cloudposse/github-action-atmos-terraform-plan` action.
| name |
|--------------------------|
| `atmos-version` |
| `atmos-config-path` |
| `terraform-state-bucket` |
| `terraform-state-table` |
| `terraform-state-role` |
| `terraform-plan-role` |
| `terraform-apply-role` |
| `terraform-version` |
| `aws-region` |
| `enable-infracost` |
If you want the same behavior in `v1` as in `v0`, create the config file `./.github/config/atmos-gitops.yaml` with the same variables as the `v0` inputs.
```yaml
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v1
with:
component: "foobar"
stack: "plat-ue2-sandbox"
atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
```
Which would produce the same behavior as in `v0`, doing this:
```yaml
- name: Plan Atmos Component
uses: cloudposse/github-action-atmos-terraform-plan@v0
with:
component: "foobar"
stack: "plat-ue2-sandbox"
component-path: "components/terraform/s3-bucket"
terraform-plan-role: "arn:aws:iam::111111111111:role/acme-core-gbl-identity-gitops"
terraform-state-bucket: "acme-core-ue2-auto-gitops"
terraform-state-role: "arn:aws:iam::999999999999:role/acme-core-ue2-auto-gitops-gha"
terraform-state-table: "acme-core-ue2-auto-gitops"
aws-region: "us-east-2"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| atmos-config-path | The path to the atmos.yaml file | N/A | true |
| atmos-pro-base-url | The base URL of Atmos Pro | https://atmos-pro.com | false |
| atmos-pro-upload-status | If set, Atmos will upload the plan result to the Atmos Pro API | false | false |
| atmos-version | The version of atmos to install | >= 1.158.0 | false |
| branding-logo-image | Branding logo image url | https://cloudposse.com/logo-300x69.svg | false |
| branding-logo-url | Branding logo url | https://cloudposse.com/ | false |
| component | The name of the component to plan. | N/A | true |
| debug | Enable action debug mode. Default: 'false' | false | false |
| drift-detection-mode-enabled | Indicates whether this action is used in a drift detection workflow. | false | true |
| identity | Atmos auth identity | | false |
| infracost-api-key | Infracost API key | N/A | false |
| metadata-retention-days | Number of days to retain plan metadata | 1 | false |
| plan-storage | Enable plan storage. Default: 'true'. Set to 'false' to disable plan storage. | true | false |
| pr-comment | Set to 'true' to create a PR comment with the summary of the plan | false | false |
| sha | Commit SHA to plan. Default: github.sha | $\{\{ github.event.pull\_request.head.sha \}\} | true |
| skip-checkout | Disable actions/checkout. Useful for when the checkout happens in a previous step and files are modified outside of git through other actions | false | false |
| stack | The stack name for the given component. | N/A | true |
| token | Used to pull node distributions for Atmos from Cloud Posse's GitHub repository. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. | $\{\{ github.server\_url == 'https://github.com' && github.token \|\| '' \}\} | false |
## Outputs
| Name | Description |
|------|-------------|
| has-changes | Whether the plan has changes. Value is string 'true' or 'false' |
| plan\_file | Path to the terraform plan file |
| plan\_json | Path to the terraform plan in JSON format |
| summary | Summary |
---
## atmos-terraform-select-components
# GitHub Action: `atmos-terraform-select-components`
GitHub Action that outputs a list of Atmos components selected by a `jq` query
## Introduction
GitHub Action that outputs a list of Atmos components selected by a `jq` query.
For example, the following query fetches the components that have `github.actions_enabled: true` set in their settings:
```
.value.settings.github.actions_enabled // false
```
The output of this action is a list of basic component information. For example:
```json
[
  {
    "stack": "plat-ue2-sandbox",
    "component": "test-component-01",
    "stack_slug": "plat-ue2-sandbox-test-component-01",
    "component_path": "components/terraform/s3-bucket"
  }
]
```
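Assuming `jq` is available locally, the selection logic can be previewed against a hand-built sample of the stacks/components structure (the sample JSON below is illustrative, not real action output):

```shell
# component-02 has no `actions_enabled` flag, so `// false` filters it out
echo '{"plat-ue2-sandbox":{"components":{"terraform":{"test-component-01":{"settings":{"github":{"actions_enabled":true}}},"test-component-02":{"settings":{}}}}}}' \
  | jq -r 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
# plat-ue2-sandbox,test-component-01
```

The `// false` alternative operator is what makes the filter safe: components without the setting evaluate to `null`, which `//` replaces with `false`.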
## Usage
### Config
The action expects the atmos configuration file `atmos.yaml` to be present in the repository.
The config should have the following structure:
```yaml
integrations:
  github:
    gitops:
      opentofu-version: 1.7.3
      terraform-version: 1.5.2
      infracost-enabled: false
      artifact-storage:
        region: us-east-2
        bucket: cptest-core-ue2-auto-gitops
        table: cptest-core-ue2-auto-gitops-plan-storage
        role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
      role:
        plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
        apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
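The `group-by` expression above is plain jq, so it can be tried standalone. Given a `stack_slug` like `plat-ue2-sandbox-test-component-01`, it keeps the first and third hyphen-separated tokens:

```shell
# Split the stack_slug on "-" and rejoin tokens 0 and 2
echo '{"stack_slug":"plat-ue2-sandbox-test-component-01"}' \
  | jq -r '.stack_slug | split("-") | [.[0], .[2]] | join("-")'
# plat-sandbox
```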
:::important
**Please note!** This GitHub Action only works with `atmos >= 1.63.0`. If you are using `atmos < 1.63.0` please use `v1` version of this action.
:::
### Support OpenTofu
This action supports [OpenTofu](https://opentofu.org/).
:::important
**Please note!** OpenTofu is supported by Atmos `>= 1.73.0`.
For details, see the [Atmos OpenTofu documentation](https://atmos.tools/core-concepts/projects/configuration/opentofu/).
:::
To enable OpenTofu, add the following settings to `atmos.yaml`:
* Set the `opentofu-version` in the `atmos.yaml` to the desired version
* Set `components.terraform.command` to `tofu`
#### Example
```yaml
components:
  terraform:
    command: tofu
    ...
integrations:
  github:
    gitops:
      opentofu-version: 1.7.3
      ...
```
### GitHub Actions Workflow Example
In the following GitHub workflow example, the first job filters the components that have `github.actions_enabled: true` in their settings, and the second job prints each `stack_slug` to stdout.
```yaml
jobs:
  selected-components:
    runs-on: ubuntu-latest
    name: Select Components
    outputs:
      matrix: ${{ steps.components.outputs.matrix }}
    steps:
      - name: Selected Components
        id: components
        uses: cloudposse/github-action-atmos-terraform-select-components@v2
        with:
          atmos-config-path: "${{ github.workspace }}/rootfs/usr/local/etc/atmos/"
          jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
  print-stack-slug:
    runs-on: ubuntu-latest
    needs:
      - selected-components
    if: ${{ needs.selected-components.outputs.matrix != '{"include":[]}' }}
    strategy:
      matrix: ${{ fromJson(needs.selected-components.outputs.matrix) }}
    name: ${{ matrix.stack_slug }}
    steps:
      - name: echo
        run: echo "${{ matrix.stack_slug }}"
```
### Migrating from `v1` to `v2`
The notable changes in `v2` are:
- `v2` works only with `atmos >= 1.63.0`
- `v2` drops the `install-terraform` input because Terraform is not required to determine the affected stacks
- `v2` drops the `atmos-gitops-config-path` input and the `./.github/config/atmos-gitops.yaml` config file. You now use GitHub Actions environment variables to specify the location of `atmos.yaml`.
The following configuration fields have moved to GitHub Action inputs with the same names:
| name |
|-------------------------|
| `atmos-version` |
| `atmos-config-path` |
The following configuration fields have moved to the `atmos.yaml` configuration file:
| name | YAML path in `atmos.yaml` |
|--------------------------|-------------------------------------------------|
| `aws-region` | `integrations.github.gitops.artifact-storage.region` |
| `terraform-state-bucket` | `integrations.github.gitops.artifact-storage.bucket` |
| `terraform-state-table` | `integrations.github.gitops.artifact-storage.table` |
| `terraform-state-role` | `integrations.github.gitops.artifact-storage.role` |
| `terraform-plan-role` | `integrations.github.gitops.role.plan` |
| `terraform-apply-role` | `integrations.github.gitops.role.apply` |
| `terraform-version` | `integrations.github.gitops.terraform-version` |
| `enable-infracost` | `integrations.github.gitops.infracost-enabled` |
| `sort-by` | `integrations.github.gitops.matrix.sort-by` |
| `group-by` | `integrations.github.gitops.matrix.group-by` |
| `process-functions` | `integrations.github.gitops.matrix.process-functions` |
For example, to migrate from `v1` to `v2`, you should have something similar to the following in your `atmos.yaml`:
`./.github/config/atmos.yaml`
```yaml
# ... your existing configuration
integrations:
  github:
    gitops:
      terraform-version: 1.5.2
      infracost-enabled: false
      artifact-storage:
        region: us-east-2
        bucket: cptest-core-ue2-auto-gitops
        table: cptest-core-ue2-auto-gitops-plan-storage
        role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
      role:
        plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
        apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
`.github/workflows/main.yaml`
```yaml
- name: Selected Components
  id: components
  uses: cloudposse/github-action-atmos-terraform-select-components@v2
  with:
    atmos-config-path: ./rootfs/usr/local/etc/atmos/
    jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
```
This corresponds to the `v1` configuration (deprecated) below.
The `v1` configuration file `./.github/config/atmos-gitops.yaml` looked like this:
```yaml
atmos-version: 1.45.3
atmos-config-path: ./rootfs/usr/local/etc/atmos/
terraform-state-bucket: cptest-core-ue2-auto-gitops
terraform-state-table: cptest-core-ue2-auto-gitops
terraform-state-role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
terraform-plan-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-apply-role: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
terraform-version: 1.5.2
aws-region: us-east-2
enable-infracost: false
sort-by: .stack_slug
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
```
And the `v1` GitHub Action Workflow looked like this.
`.github/workflows/main.yaml`
```yaml
- name: Selected Components
  id: components
  uses: cloudposse/github-action-atmos-terraform-select-components@v1
  with:
    atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
    jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
```
### Migrating from `v0` to `v1`
1. `v1` replaces the `jq-query` input parameter with a new parameter called `select-filter` to simplify the query for end users.
Now you need to specify only the part used inside the `select(...)` function of the `jq-query`.
2. `v1` moves most of the `inputs` to the Atmos GitOps config path `./.github/config/atmos-gitops.yaml`. Simply create this file, transfer your settings to it, then remove the corresponding arguments from your invocations of the `cloudposse/github-action-atmos-terraform-select-components` action.
| name |
|--------------------------|
| `atmos-version` |
| `atmos-config-path` |
If you want the same behavior in `v1` as in `v0`, create the config file `./.github/config/atmos-gitops.yaml` with the same variables as the `v0` inputs:
```yaml
- name: Selected Components
  id: components
  uses: cloudposse/github-action-atmos-terraform-select-components@v1
  with:
    atmos-gitops-config-path: ./.github/config/atmos-gitops.yaml
    select-filter: '.settings.github.actions_enabled // false'
```
This produces the same behavior as the following `v0` configuration:
```yaml
- name: Selected Components
  id: components
  uses: cloudposse/github-action-atmos-terraform-select-components@v0
  with:
    atmos-config-path: "${{ github.workspace }}/rootfs/usr/local/etc/atmos/"
    jq-query: 'to_entries[] | .key as $parent | .value.components.terraform | to_entries[] | select(.value.settings.github.actions_enabled // false) | [$parent, .key] | join(",")'
```
Please note that the `atmos-gitops-config-path` is not the same file as the `atmos-config-path`.
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| atmos-config-path | The path to the atmos.yaml file | N/A | true |
| atmos-version | The version of atmos to install | >= 1.63.0 | false |
| debug | Enable action debug mode. Default: 'false' | false | false |
| head-ref | The head ref to checkout. If not provided, the head default branch is used. | $\{\{ github.sha \}\} | false |
| jq-version | The version of jq to install if install-jq is true | 1.7 | false |
| nested-matrices-count | Number of nested matrices that should be returned as the output (from 1 to 3) | 2 | false |
| process-functions | Whether to process atmos functions | true | false |
| select-filter | jq query that will be used to select atmos components | . | false |
| skip-checkout | Disable actions/checkout for head-ref. Useful for when the checkout happens in a previous step and files are modified outside of git through other actions | false | false |
## Outputs
| Name | Description |
|------|-------------|
| has-selected-components | Whether there are selected components |
| matrix | The selected components as matrix structure suitable for extending matrix size workaround (see README) |
| selected-components | Selected GitOps components |
---
## auto-format
# GitHub Action: `auto-format`
GitHub Action Auto-Format runs several repository "hygiene" tasks:
- The `readme` target will rebuild `README.md` from `README.yaml`.
- The `github_format` target adds all of Cloud Posse's standard repository housekeeping files (including GitHub Actions workflows) to the repository's `.github` folder.
- The `terraform_format` target ensures consistent formatting across all Terraform files in the repository.
## Usage
If you haven't already, follow the steps in the [quickstart](#quickstart) section.
To choose which pieces of functionality will be executed, modify the `script-names:` input to the `cloudposse/github-action-auto-format` step to be a comma-separated list of one or more targets (e.g., `script-names: readme,terraform_format,github_format`).
This is an exhaustive list of all valid `script-name`s:
- `readme`
- `github_format`
- `terraform_format`
If you're using the `auto-format.yml` workflow file distributed within this repository, then the Auto-format GitHub Action will trigger on pull request events, once a day at 7am UTC, and upon manual triggering via the `workflow_dispatch` mechanism.
## Quick Start
Here's how to get started...
1. Copy `.github/workflows/auto-format.yml` to the corresponding folder in your target repo.
2. Generate a Personal Access Token (PAT) with the `workflow` permission *using a GitHub account that has `write` permissions in the target repo* by following the directions [here](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) and selecting the `workflow` tick box on the token creation screen.
3. Add this token as a GitHub secret in your target repository and set the `workflow-token` input of the `github-action-auto-format` step to the name of your secret.
4. Set the `bot-name` input of the `github-action-auto-format` step to the GitHub username of the user who generated the token in step 2. *This user must have `write` permissions in the target repo.*
5. By default, the Auto-Format GitHub Action will execute all of its scripts when run. If you'd like to use a subset of the full functionality, modify the `script-names` input of the `github-action-auto-format` step as described in the [usage](#usage) section.
6. (Optional) You may want to change when the scheduled cron trigger is executed. If you'd like a guide, here's a useful resource for help in crafting cron strings - https://crontab.guru/
7. (Optional) CloudPosse recommends pinning to specific versions of actions for ease of long-term maintenance. If you care to edit the pin in `auto-format.yml` from `main` to a specific version, feel free to consult https://github.com/cloudposse/github-action-auto-format/releases for a list of available versions.
## Examples
Here's a real world example:
- [`github-action-auto-format`](https://github.com/cloudposse/github-action-auto-format/.github/workflows/auto-format.yml) - Cloud Posse's self-testing Auto-Format GitHub Action
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| actions-files-checkout-path | The path on the github-runner where the auto-format action scripts are checked out at runtime | github-action-auto-format | false |
| bot-email | Email address associated with the GitHub user for writing new commits | N/A | false |
| bot-name | GitHub username for writing new commits | cloudpossebot | false |
| format-task | Name of formatting task to execute. (Options include: readme, github, terraform, and context.) | N/A | true |
| workflow-token | GitHub Token for use in `github\_format.sh` and PR creation steps. This token must be granted `workflows` permissions. | N/A | true |
---
## auto-release
# GitHub Action: `auto-release`
This is an opinionated composite GitHub Action that implements a workflow based on the popular `release-drafter` action to automatically draft releases with release notes derived from PR descriptions as they are merged into the default branch.
Under default settings, `auto-release` will also cut a new release from the default branch after every merge into it. However, releases are not cut for merges of pull requests with a `no-release` label attached. In that case, the release notes are left as a draft and a release with all unreleased changes will be made the next time a pull request without the `no-release` label is merged into the default branch.
## Usage
Copy the `.github/workflows/auto-release.yml` and `.github/configs/release-drafter.yml` files from this repository into the corresponding folders of the repository to which you'd like to add Auto-release functionality.
This will trigger the `auto-release` functionality every time merges are made into the default branch.
## Quick Start
Here's how to get started...
1. Copy the `.github/workflows/auto-release.yml` github action workflow from this repository into the corresponding folder of the target repo
2. Copy the `.github/configs/release-drafter.yml` auto-release config file from this repository into the corresponding folder of the target repo
3. Customize the config file as desired, per the [config documentation](https://github.com/release-drafter/release-drafter#configuration)
## Examples
Here's a real world example:
- [`github-action-auto-release`](https://github.com/cloudposse/github-action-auto-release/.github/workflows/auto-release.yml) - Cloud Posse's self-testing Auto-Release GitHub Action
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| config-name | If your workflow requires multiple release-drafter configs, it is helpful to override the config-name. The config should still be located inside `.github`, as that's where we look for config files. | configs/draft-release.yml | false |
| latest | A string indicating whether the release being created or updated should be marked as latest. | | false |
| prerelease | Boolean indicating whether this release should be a prerelease | | false |
| publish | Whether to publish a new release immediately | false | false |
| summary-enabled | Enable github action summary. | true | false |
| token | Standard GitHub token (e.g., secrets.GITHUB\_TOKEN) | $\{\{ github.token \}\} | false |
## Outputs
| Name | Description |
|------|-------------|
| body | The body of the drafted release. |
| exists | Whether the tag already exists (in which case creating a new release is skipped) |
| html\_url | The URL users can navigate to in order to view the release |
| id | The ID of the release that was created or updated. |
| major\_version | The next major version number. For example, if the last tag or release was v1.2.3, the value would be v2.0.0. |
| minor\_version | The next minor version number. For example, if the last tag or release was v1.2.3, the value would be v1.3.0. |
| name | The name of the release |
| patch\_version | The next patch version number. For example, if the last tag or release was v1.2.3, the value would be v1.2.4. |
| resolved\_version | The next resolved version number, based on GitHub labels. |
| tag\_name | The name of the tag associated with the release. |
| upload\_url | The URL for uploading assets to the release, which could be used by GitHub Actions for additional uses, for example the @actions/upload-release-asset GitHub Action. |
---
## aws-region-reduction-map
# GitHub Action: `aws-region-reduction-map`
Converts AWS region names from full names to abbreviations
## Introduction
Converts AWS region names from full names to either "fixed" (always 3 characters) or "short" (usually 4 or 5 characters) abbreviations, following the same map as https://github.com/cloudposse/terraform-aws-utils.
Short abbreviations are generally the same as the official AWS availability zone IDs.
Generally, AWS region names have 3 parts, and the "fixed" abbreviation is the first character of each part. Exceptions (due to collisions):
- Africa and China use the second letter of the first part.
- `ap-south-1` is shortened to `as0` to avoid conflict with `ap-southeast-1`
- `cn-north-1` is shortened to `nn0` to avoid conflict with `cn-northwest-1`
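For the regular cases (i.e., none of the exceptions above), the fixed code can be derived mechanically; a quick sketch:

```shell
# First character of each hyphen-separated part: us-west-2 -> u + w + 2
region="us-west-2"
echo "$region" | awk -F- '{ printf "%s%s%s\n", substr($1,1,1), substr($2,1,1), substr($3,1,1) }'
# uw2
```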
You should be able to list all regions with this command:
```shell
aws ec2 describe-regions --all-regions --query "Regions[].{Name:RegionName}" --output text
```
but in practice it leaves out GovCloud and China.
See https://github.com/jsonmaur/aws-regions for a more complete list.
| long | fixed | short |
|------------------|-------|---------|
| `ap-east-1` | `ae1` | `ape1` |
| `ap-northeast-1` | `an1` | `apne1` |
| `ap-northeast-2` | `an2` | `apne2` |
| `ap-northeast-3` | `an3` | `apne3` |
| `ap-south-1` | `as0` | `aps1` |
| `ap-southeast-1` | `as1` | `apse1` |
| `ap-southeast-2` | `as2` | `apse2` |
| `ca-central-1` | `cc1` | `cac1` |
| `eu-central-1` | `ec1` | `euc1` |
| `eu-north-1` | `en1` | `eun1` |
| `eu-south-1` | `es1` | `eus1` |
| `eu-west-1` | `ew1` | `euw1` |
| `eu-west-2` | `ew2` | `euw2` |
| `eu-west-3` | `ew3` | `euw3` |
| `af-south-1` | `fs1` | `afs1` |
| `us-gov-east-1` | `ge1` | `usge1` |
| `us-gov-west-1` | `gw1` | `usgw1` |
| `me-south-1` | `ms1` | `mes1` |
| `cn-north-1` | `nn0` | `cnn1` |
| `cn-northwest-1` | `nn1` | `cnnw1` |
| `sa-east-1` | `se1` | `sae1` |
| `us-east-1` | `ue1` | `use1` |
| `us-east-2` | `ue2` | `use2` |
| `us-west-1` | `uw1` | `usw1` |
| `us-west-2` | `uw2` | `usw2` |
## Usage
### Convert an AWS region (e.g. `us-west-2`) to its fixed abbreviation (`uw2`)
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse/github-action-aws-region-reduction-map@main
        id: aws_map
        with:
          region: 'us-west-2'
          ## Format can be skipped - the default is `fixed` when the input region is long
          format: 'fixed'
    outputs:
      result: ${{ steps.aws_map.outputs.result }}
```
### Convert an AWS region (e.g. `us-west-2`) to its short abbreviation (`usw2`)
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse/github-action-aws-region-reduction-map@main
        id: aws_map
        with:
          region: 'us-west-2'
          format: 'short'
    outputs:
      result: ${{ steps.aws_map.outputs.result }}
```
### Convert a short abbreviation (e.g. `usw2`) back to the full region name (`us-west-2`)
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse/github-action-aws-region-reduction-map@main
        id: aws_map
        with:
          region: 'usw2'
          format: 'long'
    outputs:
      result: ${{ steps.aws_map.outputs.result }}
```
### Convert a fixed abbreviation (e.g. `uw2`) back to the full region name (`us-west-2`)
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse/github-action-aws-region-reduction-map@main
        id: aws_map
        with:
          region: 'uw2'
          format: 'long'
    outputs:
      result: ${{ steps.aws_map.outputs.result }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| format | Format to convert to. Valid values: 'long', 'short', 'fixed'. If empty, short and fixed inputs are converted to long, and long inputs are converted to fixed. | N/A | false |
| region | Input region code | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
| result | Converted AWS region |
---
## datadog-notify
# GitHub Action: `datadog-notify`
Create Datadog Notify Event
## Introduction
This repository contains the action for sending an event to Datadog.
## Usage
Minimal Usage:
```yaml
- name: Notify Datadog
  uses: cloudposse/github-action-datadog-notify@main
  with:
    api_key: ## ${{ env.DATADOG_API_KEY }} ## ${{secrets.DATADOG_API_KEY}}
    title: "GitHub Action: ${{ github.event_name }}"
    text: "GitHub Action: ${{ github.event_name }}"
    tags: "source:github,repo:${{ github.repository }},event:${{ github.event_name }}"
    alert_type: "info"
```
Below is a snippet that sends an event to Datadog when a pull request is synchronized.
It uses the `dkershner6/aws-ssm-getparameters-action` action to fetch the Datadog API key from SSM.
```yaml
name: Datadog Notify
on:
  workflow_dispatch:
  pull_request:
    branches:
      - 'main'
permissions:
  contents: read
  pull-requests: write
  id-token: write
jobs:
  datadog-notify:
    runs-on: ["self-hosted"]
    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS credentials
        id: aws-credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-session-name: "gha-datadog-notify"
          aws-region: "us-east-1"
      - uses: dkershner6/aws-ssm-getparameters-action@v1
        with:
          parameterPairs: "/datadog/datadog_api_key = DATADOG_API_KEY"
      - name: Notify Datadog
        uses: cloudposse/github-action-datadog-notify@main
        with:
          api_key: ${{ env.DATADOG_API_KEY }}
          title: "GitHub Action: ${{ github.event_name }}"
          text: "GitHub Action: ${{ github.event_name }}"
          tags: "source:github,repo:${{ github.repository }},event:${{ github.event_name }}"
          alert_type: "info"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| alert\_type | Type of the event, one of: error,warning,info,success,user\_update,recommendation,snapshot | info | true |
| api\_key | Datadog API Key | N/A | true |
| append\_hostname\_tag | Should we append the hostname as a tag to the event, set this to the key of the tag | | false |
| tags | Space separated list of Tags for the event | N/A | true |
| text | Description of the event | N/A | true |
| title | Title of the event | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
---
## deploy-argocd
# GitHub Action: `deploy-argocd`
Deploy on Kubernetes with ArgoCD
## Introduction
Deploy on Kubernetes with Helm/HelmFile and ArgoCD.
## Usage
Deploy environment
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: preview
      url: ${{ steps.deploy.outputs.webapp-url }}
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1.7.0
        with:
          aws-region: us-west-2
          role-to-assume: arn:aws:iam::111111111111:role/preview
          role-session-name: deploy
      - name: Deploy
        uses: cloudposse/github-action-deploy-argocd@main
        id: deploy
        with:
          cluster: https://github.com/cloudposse/argocd-deploy-non-prod-test/blob/main/plat/ue2-sandbox/apps
          toolchain: helmfile
          environment: preview
          namespace: preview
          application: test-app
          github-pat: ${{ secrets.GITHUB_AUTH_PAT }}
          repository: ${{ github.repository }}
          ref: ${{ github.event.pull_request.head.ref }}
          image: nginx
          image-tag: latest
          operation: deploy
          debug: false
          synchronously: true
```
Destroy environment
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [closed]
jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1.7.0
        with:
          aws-region: us-west-2
          role-to-assume: arn:aws:iam::111111111111:role/preview
          role-session-name: destroy
      - name: Destroy
        uses: cloudposse/github-action-deploy-argocd@main
        id: destroy
        with:
          cluster: https://github.com/cloudposse/argocd-deploy-non-prod-test/blob/main/plat/ue2-sandbox/apps
          toolchain: helmfile
          environment: preview
          namespace: preview
          application: test-app
          github-pat: ${{ secrets.GITHUB_AUTH_PAT }}
          repository: ${{ github.repository }}
          ref: ${{ github.event.pull_request.head.ref }}
          image: ""
          image-tag: ""
          operation: destroy
          debug: false
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| application | Application name | N/A | true |
| aws-region | AWS region | us-east-1 | false |
| check-retry-count | Check retry count (for synchronously mode) | 5 | false |
| check-retry-interval | Check retry interval (in seconds) (for synchronously mode) | 10 | false |
| cluster | Cluster name | N/A | true |
| commit-retry-count | Commit retry count | 4 | false |
| commit-retry-interval | Commit retry interval (in seconds) | 10 | false |
| commit-status-github-token | Github token to access the app repository. Defaults to github-pat if not set. | N/A | false |
| commit-timeout | Commit timeout (in seconds) | 60 | false |
| debug | Debug mode | false | false |
| environment | Helmfile environment | preview | false |
| github-pat | Github PAT to access argocd configuration repository | N/A | true |
| gitref-sha | Git SHA (Deprecated. Use `ref` instead) | | false |
| helm-args | Additional helm arguments | | false |
| helm-dependency-build | Run helm dependency build, only for helm toolchain, `true` or `false` | false | false |
| helm-version | Helm version | v3.10.2 | false |
| helmfile-args | Additional helmfile arguments | | false |
| helmfile-version | Helmfile version | v0.148.1 | false |
| image | Docker image | N/A | true |
| image-tag | Docker image tag | N/A | true |
| namespace | Kubernetes namespace | N/A | true |
| operation | Operation with helmfiles. (valid options - `deploy`, `destroy`) | deploy | true |
| path | The path where the helmfile or Helm chart lives. | N/A | true |
| ref | Git ref | N/A | true |
| release\_label\_name | The name of the label used to describe the helm release | release | false |
| repository | Application GitHub repository full name | N/A | true |
| ssm-path | SSM path to read environment secrets | N/A | true |
| synchronously | Wait until ArgoCD successfully applies the changes | false | false |
| toolchain | Toolchain ('helm', 'helmfile') | helmfile | false |
| values\_file | Helm values file, this can be a single file or a comma separated list of files | | false |
## Outputs
| Name | Description |
|------|-------------|
| sha | Git commit SHA into argocd repo |
| webapp-url | Web Application url |
---
## deploy-ecspresso
# GitHub Action: `deploy-ecspresso`
Deploy on ECS with [ecspresso](https://github.com/kayac/ecspresso)
## Introduction
This is a template repository for creating composite GitHub Actions.
Feel free to use it as a reference and starting point.
## Usage
```yaml
name: Pull Request
on:
  push:
    branches: [ 'main' ]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - name: Example action
        uses: cloudposse/example-github-action-deploy-ecspresso@main
        id: example
        with:
          image: 1111111111111.dkr.ecr.us-east-2.amazonaws.com/cloudposse/example-app-on-ecs
          image-tag: latest
          region: us-east-2
          operation: deploy
          cluster: acme-plat-ue2-sandbox
          application: acme-plat-ue2-sandbox-example-app-on-ecs
          taskdef-path: taskdef.json
    outputs:
      result: ${{ steps.example.outputs.webapp-url }}
```
## S3 Mirroring
S3 mirroring is a pattern of uploading the deployed task definition to an S3 bucket so that the task definition can be updated with the latest image tag, and Terraform does not reset it back to a previous tag set in the infrastructure repository.
## Partial Task Definition
A "Partial Task Definition" is an authoring pattern where the application repository maintains only the parts of the ECS task definition that the app team owns or changes frequently (for example, container image/tag, environment variables, command/args, CPU/memory for a container). The more static, infrastructure-owned parts (for example, IAM roles, volumes, EFS mounts, log configuration, task-level networking) are provided by a template maintained in the infrastructure repository.
During deployment, this action merges the infrastructure-provided template (optionally fetched from S3) with the local partial task definition from the application repo to produce a complete `task-definition.json`, which is then deployed by `ecspresso`.
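As a rough illustration of the merge concept only (the file names here are hypothetical, and the action's actual merge logic may differ), jq's `*` operator deep-merges two JSON objects, with the right-hand side winning on conflicts:

```shell
# Hypothetical inputs: an infrastructure-owned template and an app-owned partial
printf '%s' '{"cpu":"256","volumes":[{"name":"data"}]}' > infra-template.json
printf '%s' '{"cpu":"512","image":"nginx:latest"}' > app-partial.json
# Deep-merge: app-partial.json overrides cpu and contributes image
jq -c -s '.[0] * .[1]' infra-template.json app-partial.json > task-definition.json
cat task-definition.json
# {"cpu":"512","volumes":[{"name":"data"}],"image":"nginx:latest"}
```

The app-owned fields override the template where they overlap, while the infrastructure-owned fields (like `volumes`) survive untouched.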

## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| application | Application name | N/A | true |
| cluster | Cluster name | N/A | true |
| debug | Debug mode | false | false |
| ecspresso-version | Ecspresso version | v2.1.0 | false |
| image | Docker image | N/A | true |
| image-tag | Docker image tag | N/A | true |
| mirror\_to\_s3\_bucket | Mirror task definition to s3 bucket | N/A | false |
| operation | Operation (valid options - `deploy`, `destroy`) | deploy | true |
| region | AWS Region | N/A | true |
| taskdef-path | Task definition path | N/A | true |
| timeout | Ecspresso timeout | 5m | false |
| use\_partial\_taskdefinition | NOTE: Experimental. Load templated task definition from S3 bucket, which is created by the `ecs-service` component. This is useful when you want to manage the task definition in the infrastructure repository and the application repository. The infrastructure repository manages things like Volumes and EFS mounts, and the Application repository manages the application code and environment variables. | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
| webapp-url | Web Application url |
---
## deploy-helmfile
# GitHub Action: `deploy-helmfile`
Deploy on Kubernetes with Helmfile
## Introduction
Deploy on Kubernetes with Helmfile.
## Usage
Deploy environment
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: preview
      url: ${{ steps.deploy.outputs.webapp-url }}
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1.7.0
        with:
          aws-region: us-west-2
          role-to-assume: arn:aws:iam::111111111111:role/preview
          role-session-name: deploy
      - name: Deploy
        uses: cloudposse/github-action-deploy-helmfile@main
        id: deploy
        with:
          aws-region: us-west-2
          cluster: preview-eks
          environment: preview
          namespace: preview
          image: nginx
          image-tag: latest
          operation: deploy
          debug: false
```
Destroy environment
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [closed]
jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1.7.0
        with:
          aws-region: us-west-2
          role-to-assume: arn:aws:iam::111111111111:role/preview
          role-session-name: destroy
      - name: Destroy
        uses: cloudposse/github-action-deploy-helmfile@main
        id: destroy
        with:
          aws-region: us-west-2
          cluster: preview-eks
          environment: preview
          namespace: preview
          image: ""
          image-tag: ""
          operation: destroy
          debug: false
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| aws-region | AWS region | us-east-1 | false |
| chamber\_version | Chamber version | 2.11.1 | false |
| cluster | Cluster name | N/A | true |
| debug | Debug mode | false | false |
| environment | Helmfile environment | preview | false |
| gitref-sha | Git SHA | | false |
| helm\_version | Helm version | 3.11.1 | false |
| helmfile | Helmfile name | helmfile.yaml | false |
| helmfile-path | The path where the helmfile lives. | deploy | false |
| helmfile\_version | Helmfile version | 0.143.5 | false |
| image | Docker image | N/A | true |
| image-tag | Docker image tag | N/A | true |
| kubectl\_version | Kubectl version | 1.26.3 | false |
| namespace | Kubernetes namespace | N/A | true |
| operation | Operation with helmfiles. (valid options - `deploy`, `destroy`) | deploy | true |
| release\_label\_name | The name of the label used to describe the helm release | release | false |
| url-resource-type | The type of the resource to get the URL from | ingress | false |
| values\_yaml | YAML string with extra values to use in a helmfile deploy | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
| webapp-url | Web Application url |
---
## deploy-spacelift
# GitHub Action: `deploy-spacelift`
An opinionated way to deploy a Docker image app with Spacelift
## Introduction
Writes the Docker image URI to the SSM Parameter Store and triggers the Spacelift stack that handles the deployment.
## Usage
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: ${{ steps.deploy.outputs.webapp-url }}
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1.7.0
        with:
          aws-region: us-west-2
          role-to-assume: arn:aws:iam::123456789012:role/AllowWriteSSM
      - name: Deploy
        uses: cloudposse/github-action-deploy-spacelift@main
        id: deploy
        with:
          stack: ecs-service-production
          region: us-west-2
          ssm-path: /ecs-service/image
          image: nginx
          image-tag: latest
          operation: deploy
          debug: false
          github_token: ${{ secrets.GITHUB_TOKEN }}
          organization: acme
          api_key_id: ${{ secrets.SPACELIFT_API_KEY_ID }}
          api_key_secret: ${{ secrets.SPACELIFT_API_KEY_SECRET }}
    outputs:
      url: ${{ steps.deploy.outputs.webapp-url }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| api\_key\_id | Spacelift API Key ID | N/A | true |
| api\_key\_secret | Spacelift API Key Secret | N/A | true |
| debug | Debug mode | false | false |
| github\_token | GitHub Token | N/A | true |
| image | Docker image | N/A | true |
| image-tag | Docker image tag | N/A | true |
| namespace | Namespace | N/A | false |
| operation | Operation (valid options - `deploy`, `destroy`) | deploy | true |
| organization | Spacelift organization name | N/A | true |
| region | AWS Region | N/A | true |
| ssm-path | SSM path for Docker image | N/A | false |
| stack | Spacelift stack name | N/A | true |
| webapp-output-name | Spacelift stack output field that contains the webapp host name | full\_domain | false |
## Outputs
| Name | Description |
|------|-------------|
| webapp-url | Web Application url |
---
## docker-build-push
# GitHub Action: `docker-build-push`
Build Docker image and push it
## Introduction
Build Docker image and push it.
## Usage
```yaml
name: Push into main branch
on:
  push:
    branches: [ master ]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Build
        id: build
        uses: cloudposse/github-action-docker-build-push@main
        with:
          registry: registry.hub.docker.com
          organization: "${{ github.event.repository.owner.login }}"
          repository: "${{ github.event.repository.name }}"
          login: "${{ secrets.DOCKERHUB_USERNAME }}"
          password: "${{ secrets.DOCKERHUB_PASSWORD }}"
          platforms: linux/amd64,linux/arm64
    outputs:
      image: ${{ steps.build.outputs.image }}
      tag: ${{ steps.build.outputs.tag }}
```
:::tip
If omitted, `cache-from` and `cache-to` will default to `gha`.
In an AWS environment, we recommend using [ECR as a remote cache](https://aws.amazon.com/blogs/containers/announcing-remote-cache-support-in-amazon-ecr-for-buildkit-clients/).
:::
```diff
  - name: Build
    id: build
    uses: cloudposse/github-action-docker-build-push@main
    with:
      registry: registry.hub.docker.com
      organization: "${{ github.event.repository.owner.login }}"
      repository: "${{ github.event.repository.name }}"
+     cache-from: "type=registry,ref=registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:cache"
+     cache-to: "mode=max,image-manifest=true,oci-mediatypes=true,type=registry,ref=registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:cache"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| allow | List of extra privileged entitlement (e.g., network.host,security.insecure) | N/A | false |
| binfmt-image | Binfmt image | public.ecr.aws/eks-distro-build-tooling/binfmt-misc:qemu-v7.0.0 | false |
| build-args | List of build-time variables | N/A | false |
| build-contexts | List of additional build contexts (e.g., name=path) | N/A | false |
| buildkitd-flags | BuildKit daemon flags | --allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host | false |
| cache-from | List of external cache sources for buildx (e.g., user/app:cache, type=local,src=path/to/dir) | type=gha | false |
| cache-to | List of cache export destinations for buildx (e.g., user/app:cache, type=local,dest=path/to/dir) | type=gha,mode=max | false |
| debug | Enable debug mode | false | false |
| docker-metadata-pr-head-sha | Set to `true` to tag images with the PR HEAD SHA instead of the merge commit SHA within pull requests. | false | false |
| driver-opts | List of additional driver-specific options. (eg. image=moby/buildkit:master) | image=public.ecr.aws/vend/moby/buildkit:buildx-stable-1 | false |
| file | Dockerfile name | Dockerfile | false |
| image\_name | Image name (excluding registry). Defaults to \{\{$organization/$repository\}\}. | | false |
| inspect | When set to `true`, pulls and inspects the image and outputs it to the step summary. | false | false |
| login | Docker login | | false |
| network | Set the networking mode for the RUN instructions during build | N/A | false |
| no-cache | Send the --no-cache flag to the docker build process | false | false |
| organization | Organization | N/A | true |
| password | Docker password | | false |
| platforms | List of target platforms for build (e.g. linux/amd64,linux/arm64,linux/riscv64,linux/ppc64le,linux/s390x,etc) | linux/amd64 | false |
| provenance | Generate provenance attestation for the build | N/A | false |
| registry | Docker registry | N/A | true |
| repository | Repository | N/A | true |
| secret-files | List of secret files to expose to the build (e.g., key=filename, MY\_SECRET=./secret.txt) | N/A | false |
| secrets | List of secrets to expose to the build (e.g., key=string, GIT\_AUTH\_TOKEN=mytoken) | N/A | false |
| ssh | List of SSH agent socket or keys to expose to the build | N/A | false |
| tags | List of tags (supports https://github.com/docker/metadata-action#tags-input) | N/A | false |
| target | Sets the target stage to build | | false |
| workdir | Working directory | ./ | false |
## Outputs
| Name | Description |
|------|-------------|
| image | Docker image name |
| metadata | Docker image metadata |
| tag | Docker image tag |
---
## docker-compose-test-run
# GitHub Action: `docker-compose-test-run`
Bring up Docker Compose and run tests in a specific container
## Introduction
Run tests in an environment defined with Docker Compose.
## Usage
```yaml
name: Push into Main
on:
  push:
    branches: [ master ]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Tests
        uses: cloudposse/github-action-docker-compose-test-run@main
        with:
          file: test/docker-compose.yml
          service: app
          command: test/unit-tests.sh
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| command | Command to run tests | N/A | true |
| docker-compose-version | Docker compose version | 1.29.2 | false |
| entrypoint | Entrypoint | /bin/sh | false |
| file | Docker compose file | N/A | true |
| login | Docker login | | false |
| password | Docker password | | false |
| registry | Docker registry | N/A | true |
| service | Service to run tests inside | N/A | true |
| workdir | Working directory | ./ | false |
## Outputs
| Name | Description |
|------|-------------|
---
## docker-image-exists
# GitHub Action: `docker-image-exists`
Check if docker image exists by pulling it
## Usage
```yaml
name: Push into main branch
on:
  push:
    branches: [ master ]
jobs:
  context:
    runs-on: ubuntu-latest
    continue-on-error: true
    steps:
      - name: Check image
        id: image_exists
        uses: cloudposse/github-action-docker-image-exists@main
        with:
          registry: registry.hub.docker.com
          organization: "${{ github.event.repository.owner.login }}"
          repository: "${{ github.event.repository.name }}"
          login: "${{ secrets.DOCKERHUB_USERNAME }}"
          password: "${{ secrets.DOCKERHUB_PASSWORD }}"
          tag: latest
    outputs:
      result: ${{ steps.image_exists.conclusion }}
      image: ${{ steps.image_exists.outputs.image }}
      tag: ${{ steps.image_exists.outputs.tag }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| image\_name | Image name (excluding registry). Defaults to \{\{$organization/$repository\}\}. | | false |
| login | Docker login | | false |
| organization | Organization | N/A | true |
| password | Docker password | | false |
| registry | Docker registry | N/A | true |
| repository | Repository | N/A | true |
| tag | Tag | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
| image | Docker image name |
| tag | Docker image tag |
---
## docker-promote
# GitHub Action: `docker-promote`
Promote docker image
## Introduction
Promote Docker image to specific tags provided explicitly or implicitly with
[Docker Metadata action](https://github.com/marketplace/actions/docker-metadata-action)
## Usage
### Promote a docker image to specific tag
```yaml
name: Release
on:
  release:
    types: [published]
permissions:
  id-token: write
  contents: write
jobs:
  promote:
    runs-on: ubuntu-latest
    steps:
      - name: Docker image promote
        uses: cloudposse/github-action-docker-promote@main
        id: promote
        with:
          registry: registry.hub.docker.com
          organization: ${{ github.event.repository.owner.login }}
          repository: ${{ github.event.repository.name }}
          login: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
          from: sha-${{ github.sha }}
          to: ${{ github.event.release.tag_name }}
          use_metadata: false
    outputs:
      image: ${{ steps.promote.outputs.image }}
      tag: ${{ steps.promote.outputs.tag }}
```
### Promote a docker image to tags detected from metadata
The promote action uses the [Docker Metadata action](https://github.com/marketplace/actions/docker-metadata-action) under the
hood and can detect `to` tags based on the Git reference and GitHub events.
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 2
      - name: Get previous commit
        id: prev-commit
        run: echo "sha=$(git rev-parse --verify HEAD^1)" >> $GITHUB_OUTPUT
      - name: Docker image promote
        uses: cloudposse/github-action-docker-promote@main
        id: promote
        with:
          registry: registry.hub.docker.com
          organization: ${{ github.event.repository.owner.login }}
          repository: ${{ github.event.repository.name }}
          login: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
          from: sha-${{ steps.prev-commit.outputs.sha }}
          use_metadata: true
    outputs:
      image: ${{ steps.promote.outputs.image }}
      tag: ${{ steps.promote.outputs.tag }}
```
### Promote a docker image with `from` fetched from metadata
If you omit the `from` tag, it will be populated with the SHA of the current commit in long format.
```yaml
name: Release
on:
  release:
    types: [published]
permissions:
  id-token: write
  contents: write
jobs:
  promote:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Docker image promote
        uses: cloudposse/github-action-docker-promote@main
        id: promote
        with:
          registry: registry.hub.docker.com
          organization: ${{ github.event.repository.owner.login }}
          repository: ${{ github.event.repository.name }}
          login: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
          ## `from` is long SHA
          to: ${{ github.event.release.tag_name }}
          use_metadata: true
    outputs:
      image: ${{ steps.promote.outputs.image }}
      tag: ${{ steps.promote.outputs.tag }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| from | Source tag | N/A | false |
| image\_name | Image name (excluding registry). Defaults to \{\{$organization/$repository\}\}. | | false |
| login | Docker login | | false |
| organization | Organization | N/A | true |
| password | Docker password | | false |
| promote-retry-max-attempts | Promote retry max attempts | 3 | false |
| promote-retry-timeout-seconds | Promote retry timeout seconds | 3000 | false |
| promote-retry-wait-seconds | Promote retry wait seconds | 30 | false |
| registry | Docker registry | N/A | true |
| repository | Repository | N/A | true |
| to | Target tags | N/A | false |
| use\_metadata | Extract target tags from Git reference and GitHub events | true | false |
## Outputs
| Name | Description |
|------|-------------|
| image | Docker image name |
| tag | Docker image tag |
---
## interface-environment
# GitHub Action: `interface-environment`
Get environment settings from a private settings action provider
## Introduction
Get environment settings from a private settings action provider.
## Usage
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  context:
    runs-on: ubuntu-latest
    steps:
      - name: Example action
        uses: cloudposse/example-github-action-composite@main
        id: environment
        with:
          implementation_repository: cloudposse/actions-private
          implementation_path: environments
          implementation_ref: main
          implementation_github_pat: ${{ secrets.GITHUB_REPO_ACCESS_TOKEN }}
          environment: dev
          namespace: dev
    outputs:
      name: "${{ steps.environment.outputs.name }}"
      region: "${{ steps.environment.outputs.region }}"
      role: "${{ steps.environment.outputs.role }}"
      cluster: "${{ steps.environment.outputs.cluster }}"
      namespace: "${{ steps.environment.outputs.namespace }}"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| application | Application name | N/A | false |
| attributes | Comma separated attributes | N/A | false |
| environment | Environment name | N/A | true |
| implementation\_file | Repository filename with Environment action implementation | action.yaml | true |
| implementation\_github\_pat | GitHub PAT allow fetch environment action implementation | N/A | true |
| implementation\_path | Repository path with Environment action implementation | | true |
| implementation\_ref | Ref of environment action implementation | main | true |
| implementation\_repository | Repository with Environment action implementation | N/A | true |
| namespace | Namespace name | N/A | true |
| repository | Repository name | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
| cluster | Cluster name |
| name | Environment name |
| namespace | Namespace |
| region | AWS region |
| role | IAM role to assume |
| s3-bucket | S3 Bucket for ECS taskdef mirroring |
| ssm-path | Path to SSM secrets |
---
## jq
# GitHub Action: `jq`
Process an input with a jq script and output the result as a step output
## Usage
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  example:
    runs-on: ubuntu-latest
    outputs:
      result: "${{ steps.current.outputs.output }}"
    steps:
      - uses: cloudposse/github-action-jq@main
        id: current
        with:
          compact: true
          input: '["test", "test2", "test3"]'
          script: |-
            map(select(. == "test"))
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| compact | Compact instead of pretty-printed output | false | false |
| input | JSON file or JSON formatted string | N/A | true |
| raw-output | Output raw strings, not JSON texts | false | false |
| remove-trailing-newline | Remove trailing newline | true | false |
| script | JQ query string | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
| output | Output from the jq command |
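For clarity, here is what the usage example above computes, sketched in Python. The action itself evaluates the real jq script; `run_filter` is a hypothetical stand-in that mirrors only this one filter.

```python
import json

def run_filter(input_json: str, compact: bool = True) -> str:
    """Python equivalent of the jq script `map(select(. == "test"))`."""
    data = json.loads(input_json)
    result = [item for item in data if item == "test"]
    # `compact: true` corresponds to jq's -c flag (no pretty-printing)
    return json.dumps(result, separators=(",", ":") if compact else None)

print(run_filter('["test", "test2", "test3"]'))  # → ["test"]
```

The step output `output` would hold that compact JSON string, which the job re-exports as `result`.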
---
## kubernetes-environment
# GitHub Action: `kubernetes-environment`
This repository wraps the environment information action, allowing it to be used as a replacement in support of various string functions and namespace standardization.
## Introduction
We often find when deploying with various environments that we need to standardize the namespace names. This repository wraps the environment information action, allowing it to be used as a replacement in support of various string functions and namespace standardization.
With this action, you can use pipe functions to standardize namespace names: for example, use `| toLower` to lowercase a name, or `| kebabcase` to convert it to kebab-case.
## Usage
To use this action, you'll want to create a workflow and an ArgoCD repository.
This action is intended to replace `cloudposse/github-action-yaml-config-query` by wrapping it with helper actions.
With this action, your `config` input supports several helper functions:
* `reformat` replaces the namespace with a flavor of your choice. This is a key added to an environment's configuration; see the snippet below for an example.
* `branch-name` will use the branch name as the namespace
* `pr-number` will use the PR number as the namespace
* `| functions`: you can now perform simple string operations on keys in your environment configuration. This can help prevent DNS-invalid characters from ending up in a namespace derived from the branch name.
* `| kebabcase` will convert the string to kebab-case (alternatively you can use `| toKebab` or `| kebab`)
* `| lowercase` will convert the string to lowercase (alternatively you can use `| toLower` or `| lower`)
* `| uppercase` will convert the string to uppercase (alternatively you can use `| toUpper` or `| upper`), though this is less useful since uppercase is valid in neither Kubernetes nor DNS names.
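The pipe functions behave roughly like the following Python sketch. This is illustrative only: the action's exact transformation rules (for example, how it handles camelCase input) may differ, and `apply_pipes` is a hypothetical helper, not the action's code.

```python
import re

def kebabcase(value: str) -> str:
    """Lowercase and replace runs of non-alphanumeric characters with dashes."""
    return re.sub(r"[^A-Za-z0-9]+", "-", value).strip("-").lower()

def apply_pipes(expression: str) -> str:
    """Apply pipe functions such as 'QA1/MY-APP | kebabcase' to a value."""
    value, *functions = [part.strip() for part in expression.split("|")]
    for fn in functions:
        if fn in ("kebabcase", "toKebab", "kebab"):
            value = kebabcase(value)
        elif fn in ("lowercase", "toLower", "lower"):
            value = value.lower()
        elif fn in ("uppercase", "toUpper", "upper"):
            value = value.upper()
    return value

print(apply_pipes("QA1/MY-APP | kebabcase"))  # → qa1-my-app
```

Note how `QA1/MY-APP | kebabcase` becomes `qa1-my-app`, a DNS-safe namespace.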
```yaml
- name: Environment info
  # We recommend pinning this action to a specific release or version range to ensure stability
  uses: cloudposse/github-action-kubernetes-environment@main
  id: result
  with:
    environment: ${{ inputs.environment }}
    namespace: ${{ inputs.namespace }}
    application: ${{ inputs.application }}
    config: |
      preview:
        cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-dev/apps
        cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
        namespace: ${{ inputs.namespace }}
        reformat: branch-name # reformats namespace to be branch name as kebabcase, alternatively use `pr-number` here for `pr-123` as your namespace
        ssm-path: platform/dev-cluster
      qa1:
        cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-staging/apps
        cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
        namespace: QA1/MY-APP | kebabcase
        # output namespace will become qa1-my-app
        ssm-path: platform/staging-cluster
```
To get custom key-value pairs, you can query the selected environment with a follow-up step:
```yaml
- name: Environment info
  uses: cloudposse/github-action-yaml-config-query@v1.0.0
  id: environment-info
  with:
    query: .
    config: ${{ steps.result.outputs.environment-config }}
```
Full example of a composite action that wraps this action:
```yaml
name: 'Environments - ArgoCD'
description: 'Get information about environment'
inputs:
  environment:
    description: "Environment name"
    required: true
  application:
    description: "The application name"
    required: false
  namespace:
    description: "Namespace name"
    required: true
outputs:
  name:
    description: "Environment name"
    value: ${{ inputs.environment }}
  region:
    description: "Default AWS Region"
    value: us-east-2
  role:
    description: "Environments that need to be deployed"
    value: ${{ steps.result.outputs.role }}
  cluster:
    description: "Environments that need to be destroyed"
    value: ${{ steps.result.outputs.cluster }}
  namespace:
    description: "Namespace"
    value: ${{ steps.result.outputs.namespace }}
  ssm-path:
    description: "Path to ssm secrets"
    value: ${{ steps.result.outputs.ssm-path }}
runs:
  using: "composite"
  steps:
    - name: Environment info
      # We recommend pinning this action to a specific release or version range to ensure stability
      uses: cloudposse/github-action-kubernetes-environment@main
      id: result
      with:
        environment: ${{ inputs.environment }}
        namespace: ${{ inputs.namespace }}
        application: ${{ inputs.application }}
        config: |
          preview:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-dev/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: ${{ inputs.namespace }}
            ssm-path: platform/dev-cluster
            reformat: branch-name
          qa1:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-staging/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: qa1
            ssm-path: platform/staging-cluster
          qa2:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-staging/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: qa2
            ssm-path: platform/staging-cluster
          qa3:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-staging/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: qa3
            ssm-path: platform/staging-cluster
          qa4:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-staging/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: qa4
            ssm-path: platform/staging-cluster
          production:
            cluster: https://github.com/athoteldev/argocd-deploy-prod/blob/main/plat/ue2-prod/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: production
            ssm-path: platform/prod-cluster
          staging:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-staging/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: staging
            ssm-path: platform/staging-cluster
          sandbox:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-sandbox/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: sandbox
            ssm-path: platform/sandbox-cluster
          dev:
            cluster: https://github.com/cloudposse/argocd-repo/blob/main/plat/ue2-dev/apps
            cluster-role: arn:aws:iam::123456789012:role/my-gha-cluster-role
            namespace: dev
            ssm-path: platform/dev-cluster
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| application | The application name | N/A | false |
| config | configuration | N/A | true |
| environment | Environment | N/A | true |
| namespace | Kubernetes namespace | N/A | true |
| namespace-deny-list | Kubernetes namespace deny list, generated names cannot contain this comma separated list. | kube-system,kube-public,default | false |
| namespace-prefix | Kubernetes namespace prefix | | false |
| namespace-suffix | Kubernetes namespace suffix | | false |
## Outputs
| Name | Description |
|------|-------------|
| cluster | Cluster from the selected environment configuration |
| environment-config | Environment configuration |
| name | Environment name |
| namespace | Namespace |
| role | IAM role (`cluster-role`) from the selected environment configuration |
| ssm-path | Path to SSM secrets |
---
## major-release-tagger
# GitHub Action: `major-release-tagger`
GitHub Action that automatically generates or updates `v` tags every time a new release is published.
## Introduction
This GitHub Action automatically generates or updates `v` tags every time a new release is published, making it effortless to keep track of your project's major versions.
Imagine your Git repository has the following tags:
```
1.0.0
1.1.0
2.0.0
2.0.1
2.1.0
3.0.0
```
By simply incorporating Major Release Tagger, your repo will be enriched with the corresponding v-tags:
```
1.0.0
1.1.0 v1
2.0.0
2.0.1 v2
2.1.0
3.0.0 v3
```
When you create a new release tagged `3.1.0`, the `v3` tag will automatically point to it:
```
1.0.0
1.1.0 v1
2.0.0
2.0.1 v2
2.1.0
3.0.0
3.1.0 v3
```
Stay organized and efficient with Major Release Tagger - the ultimate GitHub Action to streamline your versioning process.
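The retagging rule above can be stated compactly: each `vN` tag points at the highest release whose major version is N. A minimal Python sketch of that rule (illustrative only; `compute_v_tags` is a hypothetical helper, not the action's code):

```python
def compute_v_tags(tags):
    """Map each major version to the release tag its vN tag should point to."""
    def parse(tag):
        # "1.2.3" -> (1, 2, 3); a leading "v" is tolerated
        return tuple(int(part) for part in tag.lstrip("v").split("."))

    latest = {}
    for tag in tags:
        major = parse(tag)[0]
        # Keep the highest release seen so far for this major version
        if major not in latest or parse(tag) > parse(latest[major]):
            latest[major] = tag
    return {f"v{major}": tag for major, tag in latest.items()}

tags = ["1.0.0", "1.1.0", "2.0.0", "2.0.1", "2.1.0", "3.0.0", "3.1.0"]
print(compute_v_tags(tags))  # → {'v1': '1.1.0', 'v2': '2.1.0', 'v3': '3.1.0'}
```

Running this over the example tags reproduces the v-tag layout shown above, including `v3` moving to `3.1.0` after the new release.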
## Usage
```yaml
name: Major Release Tagger
on:
  release:
    types:
      - published
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse/github-action-major-release-tagger@v1
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| dry-run | Run action without pushing changes to upstream | false | false |
| git-user-email | Git user email that will be used for git config | actions-bot@users.noreply.github.com | false |
| git-user-name | Git user name that will be used for git config | actions-bot | false |
| log-level | Log level for this action. Available options: ['off', 'error', 'warn', 'info', 'debug']. Default 'info' | info | false |
| token | Standard GitHub token (e.g., secrets.GITHUB\_TOKEN) | $\{\{ github.token \}\} | false |
## Outputs
| Name | Description |
|------|-------------|
| response | Response in json format for example: \{"succeeded":true,"reason":"MAPPED\_TAGS","message":"Successfully created/update v-tags.","data":\{"v1": \{"state":"updated", "oldSHA": "d9b3a3034766ac20294fd1c36cacc017ae4a3898", "newSHA":"e5c6309b473934cfe3e556013781b8757c1e0422"\}, "v2": \{"state":"created", "oldSHA": "bbf9f924752c61dcef084757bcf4440e23f2e16b", "newSHA":"5ae37ee514b73cf8146fe389ad839469e7f3a6d2"\}\}\} |
---
## matrix-extended
# GitHub Action: `matrix-extended`
GitHub Action that when used together with reusable workflows makes it easier to workaround the limit of 256 jobs in a matrix.
## Introduction
GitHub Actions matrices have a [limit of 256 items](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs#using-a-matrix-strategy).
There is a workaround to extend this limit with [reusable workflows](https://github.com/orgs/community/discussions/38704).
This GitHub Action outputs a JSON structure for up to 3 levels of nested matrices.
In theory, you can run up to 256³ (i.e., 16,777,216) jobs per workflow run!
| Matrix max nested level | Total jobs count limit |
|-------------------------|--------------------------|
| 1 | 256 |
| 2 | 65 536 |
| 3 | 16 777 216 |
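The workaround amounts to splitting the flat job list into chunks of at most 256 items, where each chunk's items are serialized as a string so they can be passed through a reusable-workflow input. A minimal Python sketch of the chunking idea (illustrative only; `chunk_matrix` is a hypothetical helper, not the action's code):

```python
import json

def chunk_matrix(items, size=256):
    """Split flat matrix include-items into chunks of at most `size`.
    Each chunk's items are serialized as a JSON string, since reusable
    workflow inputs only accept scalars."""
    chunks = []
    for start in range(0, len(items), size):
        inner = {"include": items[start:start + size]}
        chunks.append({
            "name": f"chunk {start}-{min(start + size, len(items)) - 1}",
            "items": json.dumps(inner),  # serialized as string
        })
    return {"include": chunks}

# 600 flat jobs exceed the 256-item limit, but fit in 3 chunks at one nesting level
outer = chunk_matrix([{"id": i} for i in range(600)])
print(len(outer["include"]))  # → 3
```

The outer matrix fans out one job per chunk; each job passes its `items` string to a reusable workflow, which runs `fromJson(inputs.items)` as its own matrix.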
If the `nested-matrices-count` input is `1`, the output `matrix` will be a JSON-formatted string with the following structure:
```yaml
{
  "include": [matrix items]
}
```
If the `nested-matrices-count` input is `2`, the output `matrix` will be a JSON-formatted string with the following structure:
```yaml
{
  "include": [{
    "name": "group name",
    "items": {
      "include": [matrix items]
    } ## serialized as string
  }]
}
```
If the `nested-matrices-count` input is `3`, the output `matrix` will be a JSON-formatted string with the following structure:
```yaml
{
  "include": [{
    "name": "group name",
    "items": {
      "include": [{
        "name": "chunk 256 range name",
        "items": {
          "include": [matrix items]
        } ## serialized as string
      }]
    } ## serialized as string
  }]
}
```
:::warning
Make sure you [restrict the concurrency](https://docs.github.com/en/actions/using-jobs/using-concurrency) of your jobs to avoid DDOS'ing the GitHub Actions API, which might cause restrictions to be applied to your account.
| Matrix max nested level | First Matrix Concurrency | Second Matrix Concurrency | Third Matrix Concurrency |
|-------------------------|--------------------------|---------------------------|--------------------------|
| 1 | x | - | - |
| 2 | 1 | x | - |
| 3 | 1 | 1 | x |
:::
## Usage
The action has 3 modes, depending on how many levels of nesting you want.
This setting affects the number of reusable workflows required and the usage pattern.
## 1 Level of nested matrices
`.github/workflows/matrices-1.yml`
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  matrix-builder:
    runs-on: self-hosted
    name: Affected stacks
    outputs:
      matrix: ${{ steps.extend.outputs.matrix }}
    steps:
      - id: setup-matrix
        uses: druzsan/setup-matrix@v1
        with:
          matrix: |
            os: ubuntu-latest windows-latest macos-latest,
            python-version: 3.8 3.9 3.10
            arch: arm64 amd64
      - uses: cloudposse/github-action-matrix-extended@main
        id: extend
        with:
          matrix: ${{ steps.setup-matrix.outputs.matrix }}
          sort-by: '[.python-version, .os, .arch] | join("-")'
          group-by: '.arch'
          nested-matrices-count: '1'
  operation:
    if: ${{ needs.matrix-builder.outputs.matrix != '{"include":[]}' }}
    needs:
      - matrix-builder
    strategy:
      max-parallel: 10
      fail-fast: false # Don't fail fast to avoid locking TF State
      matrix: ${{ fromJson(needs.matrix-builder.outputs.matrix) }}
    name: Do (${{ matrix.arch }})
    runs-on: self-hosted
    steps:
      - shell: bash
        run: |
          echo "Do real work - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ matrix.python-version }}"
```
## 2 Levels of nested matrices
`.github/workflows/matrices-1.yml`
```yaml
name: Pull Request
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
  matrix-builder:
    runs-on: self-hosted
    name: Affected stacks
    outputs:
      matrix: ${{ steps.extend.outputs.matrix }}
    steps:
      - id: setup-matrix
        uses: druzsan/setup-matrix@v1
        with:
          matrix: |
            os: ubuntu-latest windows-latest macos-latest,
            python-version: 3.8 3.9 3.10
            arch: arm64 amd64
      - uses: cloudposse/github-action-matrix-extended@main
        id: extend
        with:
          sort-by: '[.python-version, .os, .arch] | join("-")'
          group-by: '.arch'
          nested-matrices-count: '2'
          matrix: ${{ steps.setup-matrix.outputs.matrix }}
  operation:
    if: ${{ needs.matrix-builder.outputs.matrix != '{"include":[]}' }}
    uses: ./.github/workflows/matrices-2.yml
    needs:
      - matrix-builder
    strategy:
      max-parallel: 1 # This is important to avoid ddos GHA API
      fail-fast: false # Don't fail fast to avoid locking TF State
      matrix: ${{ fromJson(needs.matrix-builder.outputs.matrix) }}
    name: Group (${{ matrix.name }})
    with:
      items: ${{ matrix.items }}
```
`.github/workflows/matrices-2.yml`
```yaml
name: Reusable workflow for 2 level of nested matrices
on:
  workflow_call:
    inputs:
      items:
        description: "Items"
        required: true
        type: string
jobs:
  operation:
    if: ${{ inputs.items != '{"include":[]}' }}
    strategy:
      max-parallel: 10
      fail-fast: false # Don't fail fast to avoid locking TF State
      matrix: ${{ fromJson(inputs.items) }}
    name: Do (${{ matrix.arch }})
    runs-on: self-hosted
    steps:
      - shell: bash
        run: |
          echo "Do real work - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ matrix.python-version }}"
```
## 3 Levels of nested matrices
`.github/workflows/matrices-1.yml`
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
matrix-builder:
runs-on: self-hosted
name: Affected stacks
outputs:
matrix: ${{ steps.extend.outputs.matrix }}
steps:
- id: setup-matrix
uses: druzsan/setup-matrix@v1
with:
matrix: |
os: ubuntu-latest windows-latest macos-latest,
python-version: 3.8 3.9 3.10
arch: arm64 amd64
- uses: cloudposse/github-action-matrix-extended@main
id: extend
with:
sort-by: '[.python-version, .os, .arch] | join("-")'
group-by: '.arch'
nested-matrices-count: '3'
matrix: ${{ steps.setup-matrix.outputs.matrix }}
operation:
if: ${{ needs.matrix-builder.outputs.matrix != '{"include":[]}' }}
uses: ./.github/workflows/matrices-2.yml
needs:
- matrix-builder
strategy:
max-parallel: 1 # This is important to avoid ddos GHA API
fail-fast: false # Don't fail fast to avoid locking TF State
matrix: ${{ fromJson(needs.matrix-builder.outputs.matrix) }}
name: Group (${{ matrix.name }})
with:
items: ${{ matrix.items }}
```
`.github/workflows/matrices-2.yml`
```yaml
name: Reusable workflow for 2 level of nested matrices
on:
workflow_call:
inputs:
items:
description: "Items"
required: true
type: string
jobs:
operation:
if: ${{ inputs.items != '{"include":[]}' }}
uses: ./.github/workflows/matrices-3.yml
strategy:
max-parallel: 1 # This is important to avoid ddos GHA API
fail-fast: false # Don't fail fast to avoid locking TF State
matrix: ${{ fromJson(inputs.items) }}
name: Group (${{ matrix.name }})
with:
items: ${{ matrix.items }}
```
`.github/workflows/matrices-3.yml`
```yaml
name: Reusable workflow for 3 level of nested matrices
on:
workflow_call:
inputs:
items:
description: "Items"
required: true
type: string
jobs:
operation:
if: ${{ inputs.items != '{"include":[]}' }}
strategy:
max-parallel: 10
fail-fast: false # Don't fail fast to avoid locking TF State
matrix: ${{ fromJson(inputs.items) }}
name: Do (${{ matrix.arch }})
runs-on: self-hosted
steps:
- shell: bash
run: |
echo "Do real work - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ matrix.python-version }}"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| group-by | Group by query | empty | false |
| matrix | Matrix inputs (JSON array or object which includes property passed as string or file path) | N/A | true |
| nested-matrices-count | Number of nested matrices that should be returned as the output (from 1 to 3) | 1 | false |
| sort-by | Sort by query | empty | false |
## Outputs
| Name | Description |
|------|-------------|
| matrix | A matrix suitable for extending matrix size workaround (see README) |
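The `sort-by` and `group-by` inputs are jq-style queries. To build intuition for what they produce, here is a rough Python sketch of the same idea (our illustration, not the action's implementation): entries are sorted, then bucketed into named groups whose items form a nested `include` matrix.

```python
# Illustrative sketch of sort-by / group-by over a matrix. The real action
# evaluates jq expressions; here plain Python callables stand in for them.
from itertools import groupby

def extend_matrix(entries, sort_key, group_key):
    """Sort matrix entries, then bucket them into named nested groups."""
    ordered = sorted(entries, key=sort_key)
    grouped = sorted(ordered, key=group_key)  # groupby needs contiguous keys
    result = []
    for name, items in groupby(grouped, key=group_key):
        result.append({"name": name, "items": {"include": list(items)}})
    return {"include": result}

matrix = [
    {"os": "ubuntu-latest", "python-version": "3.9", "arch": "amd64"},
    {"os": "ubuntu-latest", "python-version": "3.8", "arch": "arm64"},
]
nested = extend_matrix(
    matrix,
    sort_key=lambda e: "-".join([e["python-version"], e["os"], e["arch"]]),
    group_key=lambda e: e["arch"],
)
print([g["name"] for g in nested["include"]])  # group names, keyed by arch
```

Each group then drives one iteration of the outer job, and its `items` become the inner job's matrix.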
---
## matrix-outputs-read
# GitHub Action: `matrix-outputs-read`
[Workaround implementation](https://github.com/community/community/discussions/17245#discussioncomment-3814009) - Read matrix jobs outputs
## Introduction
GitHub Actions has an open issue: [Jobs need a way to reference all outputs of matrix jobs](https://github.com/community/community/discussions/17245).
If a job runs multiple times with `strategy.matrix`, only the latest iteration's output is available for reference in other jobs.
There is a [workaround](https://github.com/community/community/discussions/17245#discussioncomment-3814009) to address this limitation.
We implement the workaround with two GitHub Actions:
* [Matrix Outputs Write](https://github.com/cloudposse/github-action-matrix-outputs-write)
* [Matrix Outputs Read](https://github.com/cloudposse/github-action-matrix-outputs-read)
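As a mental model of the workaround, here is a toy sketch (ours; the real actions persist data via GitHub Artifacts, not an in-memory dict): each matrix iteration writes its outputs under its `matrix-key`, and the read step merges them into one JSON map keyed by output name and matrix key, which is what makes expressions like `fromJson(result).image.i386` work.

```python
# Toy model of the write/read workaround: per-iteration outputs are stored
# under their matrix key, then merged into {output_name: {matrix_key: value}}.
import json

store = {}  # stands in for the artifact storage the real actions use

def write_outputs(matrix_step_name, matrix_key, outputs):
    """One matrix iteration writes its outputs under its key."""
    store.setdefault(matrix_step_name, {})[matrix_key] = outputs

def read_outputs(matrix_step_name):
    """Merge every iteration's outputs into a single JSON map."""
    merged = {}
    for key, outputs in store.get(matrix_step_name, {}).items():
        for name, value in outputs.items():
            merged.setdefault(name, {})[key] = value
    return json.dumps(merged)

write_outputs("build", "i386", {"image": "repo/app:i386"})
write_outputs("build", "arm64v8", {"image": "repo/app:arm64v8"})
result = json.loads(read_outputs("build"))
print(result["image"]["i386"])
```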
## v1 - What's new
:::important
`cloudposse/github-action-matrix-outputs-read@v1`+ is not yet supported on GHES. If you are on GHES, you
must use [v0](https://github.com/cloudposse/github-action-matrix-outputs-read/releases/tag/0.1.2).
:::
The releases of `cloudposse/github-action-matrix-outputs-write@v1` and `cloudposse/github-action-matrix-outputs-read@v1`
introduce major changes to the backend architecture of Artifacts, with numerous performance and behavioral improvements.
For more information, see the [`@actions/artifact`](https://github.com/actions/toolkit/tree/main/packages/artifact) documentation.
### Breaking Changes
1. On self hosted runners, additional [firewall rules](https://github.com/actions/toolkit/tree/main/packages/artifact#breaking-changes) may be required.
2. `cloudposse/github-action-matrix-outputs-read@v1` cannot read outputs written by `cloudposse/github-action-matrix-outputs-write@v0`.
## Usage
Example of how you can use the workaround to reference matrix job outputs.
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
platform: ["i386", "arm64v8"]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Build
id: build
uses: cloudposse/github-action-docker-build-push@1.9.0
with:
registry: registry.hub.docker.com
organization: "${{ github.event.repository.owner.login }}"
repository: "${{ github.event.repository.name }}"
build-args: |-
PLATFORM=${{ matrix.platform }}
## Write for matrix outputs workaround
- uses: cloudposse/github-action-matrix-outputs-write@v1
id: out
with:
matrix-step-name: ${{ github.job }}
matrix-key: ${{ matrix.platform }}
outputs: |-
image: ${{ steps.build.outputs.image }}:${{ steps.build.outputs.tag }}
## Read matrix outputs
read:
runs-on: ubuntu-latest
needs: [build]
steps:
- uses: cloudposse/github-action-matrix-outputs-read@v1
id: read
with:
matrix-step-name: build
outputs:
result: "${{ steps.read.outputs.result }}"
## This is how you can reference a matrix output
assert:
runs-on: ubuntu-latest
needs: [read]
steps:
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:i386
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.i386 }}
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:arm64v8
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.arm64v8 }}
```
### Reusable workflow example
Reusable workflow that supports matrix outputs
`./.github/workflows/build-reusabled.yaml`
```yaml
name: Build - Reusable workflow
on:
workflow_call:
inputs:
registry:
required: true
type: string
organization:
required: true
type: string
repository:
required: true
type: string
platform:
required: true
type: string
matrix-step-name:
required: false
type: string
matrix-key:
required: false
type: string
outputs:
image:
description: "Image"
value: ${{ jobs.write.outputs.image }}
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Build
id: build
uses: cloudposse/github-action-docker-build-push@1.9.0
with:
registry: ${{ inputs.registry }}
organization: ${{ inputs.organization }}
repository: ${{ inputs.repository }}
build-args: |-
PLATFORM=${{ inputs.platform }}
outputs:
image: ${{ steps.build.outputs.image }}:${{ steps.build.outputs.tag }}
write:
runs-on: ubuntu-latest
needs: [build]
steps:
## Write for matrix outputs workaround
- uses: cloudposse/github-action-matrix-outputs-write@v1
id: out
with:
matrix-step-name: ${{ inputs.matrix-step-name }}
matrix-key: ${{ inputs.matrix-key }}
outputs: |-
image: ${{ needs.build.outputs.image }}
outputs:
image: ${{ fromJson(steps.out.outputs.result).image }}
```
Then you can use the workflow with a matrix
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
build:
uses: ./.github/workflows/build-reusabled.yaml
strategy:
matrix:
platform: ["i386", "arm64v8"]
with:
registry: registry.hub.docker.com
organization: "${{ github.event.repository.owner.login }}"
repository: "${{ github.event.repository.name }}"
platform: ${{ matrix.platform }}
matrix-step-name: ${{ github.job }}
matrix-key: ${{ matrix.platform }}
## Read matrix outputs
read:
runs-on: ubuntu-latest
needs: [build]
steps:
- uses: cloudposse/github-action-matrix-outputs-read@v1
id: read
with:
matrix-step-name: build
outputs:
result: "${{ steps.read.outputs.result }}"
## This is how you can reference a matrix output
assert:
runs-on: ubuntu-latest
needs: [read]
steps:
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:i386
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.i386 }}
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:arm64v8
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.arm64v8 }}
```
or as a simple job
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
build:
uses: ./.github/workflows/build-reusabled.yaml
with:
registry: registry.hub.docker.com
organization: "${{ github.event.repository.owner.login }}"
repository: "${{ github.event.repository.name }}"
platform: "i386"
## This is how you can reference a single job output
assert:
runs-on: ubuntu-latest
needs: [build]
steps:
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:i386
## This is how you can reference a single job output
actual: ${{ needs.build.outputs.image }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| matrix-step-name | Matrix step name | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
| result | Outputs result |
---
## matrix-outputs-write
# GitHub Action: `matrix-outputs-write`
[Workaround implementation](https://github.com/community/community/discussions/17245#discussioncomment-3814009) - Write matrix jobs outputs
## Introduction
GitHub Actions has an open issue: [Jobs need a way to reference all outputs of matrix jobs](https://github.com/community/community/discussions/17245).
If a job runs multiple times with `strategy.matrix`, only the latest iteration's output is available for reference in other jobs.
There is a [workaround](https://github.com/community/community/discussions/17245#discussioncomment-3814009) to address this limitation.
We implement the workaround with two GitHub Actions:
* [Matrix Outputs Write](https://github.com/cloudposse/github-action-matrix-outputs-write)
* [Matrix Outputs Read](https://github.com/cloudposse/github-action-matrix-outputs-read)
## v1 - What's new
:::important
`cloudposse/github-action-matrix-outputs-write@v1`+ is not yet supported on GHES. If you are on GHES, you
must use [v0](https://github.com/cloudposse/github-action-matrix-outputs-write/releases/tag/0.5.0).
:::
The releases of `cloudposse/github-action-matrix-outputs-write@v1` and `cloudposse/github-action-matrix-outputs-read@v1`
introduce major changes to the backend architecture of Artifacts, with numerous performance and behavioral improvements.
For more information, see the [`@actions/artifact`](https://github.com/actions/toolkit/tree/main/packages/artifact) documentation.
### Breaking Changes
1. On self hosted runners, additional [firewall rules](https://github.com/actions/toolkit/tree/main/packages/artifact#breaking-changes) may be required.
2. Outputs written with `cloudposse/github-action-matrix-outputs-write@v1` cannot be read by `cloudposse/github-action-matrix-outputs-read@v0` and earlier versions.
## Usage
Example of how you can use the workaround to reference matrix job outputs.
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
platform: ["i386", "arm64v8"]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Build
id: build
uses: cloudposse/github-action-docker-build-push@1.9.0
with:
registry: registry.hub.docker.com
organization: "${{ github.event.repository.owner.login }}"
repository: "${{ github.event.repository.name }}"
build-args: |-
PLATFORM=${{ matrix.platform }}
## Write for matrix outputs workaround
- uses: cloudposse/github-action-matrix-outputs-write@v1
id: out
with:
matrix-step-name: ${{ github.job }}
matrix-key: ${{ matrix.platform }}
outputs: |-
image: ${{ steps.build.outputs.image }}:${{ steps.build.outputs.tag }}
## Multiline string
tags: ${{ toJson(steps.build.outputs.image) }}
## Read matrix outputs
read:
runs-on: ubuntu-latest
needs: [build]
steps:
- uses: cloudposse/github-action-matrix-outputs-read@v1
id: read
with:
matrix-step-name: build
outputs:
result: "${{ steps.read.outputs.result }}"
## This is how you can reference a matrix output
assert:
runs-on: ubuntu-latest
needs: [read]
steps:
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:i386
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.i386 }}
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:arm64v8
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.arm64v8 }}
```
### Reusable workflow example
Reusable workflow that supports matrix outputs
`./.github/workflows/build-reusabled.yaml`
```yaml
name: Build - Reusable workflow
on:
workflow_call:
inputs:
registry:
required: true
type: string
organization:
required: true
type: string
repository:
required: true
type: string
platform:
required: true
type: string
matrix-step-name:
required: false
type: string
matrix-key:
required: false
type: string
outputs:
image:
description: "Image"
value: ${{ jobs.write.outputs.image }}
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Build
id: build
uses: cloudposse/github-action-docker-build-push@1.9.0
with:
registry: ${{ inputs.registry }}
organization: ${{ inputs.organization }}
repository: ${{ inputs.repository }}
build-args: |-
PLATFORM=${{ inputs.platform }}
outputs:
image: ${{ steps.build.outputs.image }}:${{ steps.build.outputs.tag }}
write:
runs-on: ubuntu-latest
needs: [build]
steps:
## Write for matrix outputs workaround
- uses: cloudposse/github-action-matrix-outputs-write@v1
id: out
with:
matrix-step-name: ${{ inputs.matrix-step-name }}
matrix-key: ${{ inputs.matrix-key }}
outputs: |-
image: ${{ needs.build.outputs.image }}
outputs:
image: ${{ fromJson(steps.out.outputs.result).image }}
image_alternative: ${{ steps.out.outputs.image }}
```
Then you can use the workflow with a matrix
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
build:
uses: ./.github/workflows/build-reusabled.yaml
strategy:
matrix:
platform: ["i386", "arm64v8"]
with:
registry: registry.hub.docker.com
organization: "${{ github.event.repository.owner.login }}"
repository: "${{ github.event.repository.name }}"
platform: ${{ matrix.platform }}
matrix-step-name: ${{ github.job }}
matrix-key: ${{ matrix.platform }}
## Read matrix outputs
read:
runs-on: ubuntu-latest
needs: [build]
steps:
- uses: cloudposse/github-action-matrix-outputs-read@v1
id: read
with:
matrix-step-name: build
outputs:
result: "${{ steps.read.outputs.result }}"
## This is how you can reference a matrix output
assert:
runs-on: ubuntu-latest
needs: [read]
steps:
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:i386
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.i386 }}
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:arm64v8
## This is how you can reference a matrix output
actual: ${{ fromJson(needs.read.outputs.result).image.arm64v8 }}
```
or as a simple job
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
build:
uses: ./.github/workflows/build-reusabled.yaml
with:
registry: registry.hub.docker.com
organization: "${{ github.event.repository.owner.login }}"
repository: "${{ github.event.repository.name }}"
platform: "i386"
## This is how you can reference a single job output
assert:
runs-on: ubuntu-latest
needs: [build]
steps:
- uses: nick-fields/assert-action@v1
with:
expected: registry.hub.docker.com/${{ github.event.repository.owner.login }}/${{ github.event.repository.name }}:i386
## This is how you can reference a single job output
actual: ${{ needs.build.outputs.image }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| matrix-key | Matrix key | N/A | false |
| matrix-step-name | Matrix step name | N/A | false |
| outputs | YAML structured map of outputs | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
| result | Outputs result (deprecated) |
---
## monorepo-random-controller
# GitHub Action: `monorepo-random-controller`
Monorepo random controller used for demo
## Introduction
The monorepo CI/CD pattern uses this action as a controller to detect the list of applications and which of them have changes.
For demo purposes, the GitHub Action treats each directory under the specified path as an application and randomly
separates the applications into changed and unchanged lists.
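Under stated assumptions (the application names below are illustrative), the controller's behavior can be sketched like this: enumerate the application directories, then split them at random into `changes` and `no-changes`:

```python
# Sketch of the demo controller: randomly partition applications into
# "changed" and "unchanged" lists, mirroring the action's three outputs.
import random

def split_apps(apps, seed=None):
    rng = random.Random(seed)
    changed = [app for app in apps if rng.random() < 0.5]
    no_changes = [app for app in apps if app not in changed]
    return {"apps": apps, "changes": changed, "no-changes": no_changes}

result = split_apps(["app1", "app2", "app3"], seed=42)
print(result)
```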
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
monorepo:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Monorepo controller
id: controller
uses: cloudposse/github-action-monorepo-random-controller@0.1.1
with:
dir: ./applications/
outputs:
applications: ${{ steps.controller.outputs.apps }}
changes: ${{ steps.controller.outputs.changes }}
no-changes: ${{ steps.controller.outputs.no-changes }}
ci:
runs-on: ubuntu-latest
needs: [monorepo]
if: ${{ needs.monorepo.outputs.applications != '[]' }}
strategy:
matrix:
application: ${{ fromJson(needs.monorepo.outputs.applications) }}
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Build
id: build
uses: cloudposse/github-action-docker-build-push@1.9.0
with:
registry: registry.hub.docker.com
organization: ${{ github.event.repository.owner.login }}
repository: ${{ github.event.repository.name }}/${{ matrix.application }}
workdir: ./applications/${{ matrix.application }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| dir | Applications dir | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
| apps | Applications list |
| changes | Applications that have changes |
| no-changes | Applications that have no changes |
---
## preview-environment-controller
# GitHub Action: `preview-environment-controller`
Action to manage deploying and purging preview environments depending on PR labels
## Introduction
Testing Pull Request changes usually leads to deploying them to a preview environment.
The environment can be ephemeral or pre-provisioned. In the latter case, there is a fixed number of preview environments.
This GitHub Action follows a pattern where the developer sets a PR label to specify which preview environment to deploy to.
`github-action-preview-environment-controller` allows you to define a map of `environment => label`.
Depending on the current PR labels, the action outputs lists of environments to deploy and to destroy.
It therefore performs a `controller` role and does not limit your deployment methods or tools.
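A minimal sketch of that controller logic (ours, not the action's source): given the `environment => label` map, the PR's current labels, and whether the PR is open, an environment goes to the deploy list when its label is present on an open PR, and to the destroy list otherwise.

```python
# Sketch of the preview-environment controller's decision logic.
def preview_controller(env_label, pr_labels, pr_open):
    deploy, destroy = [], []
    for env, label in env_label.items():
        if pr_open and label in pr_labels:
            deploy.append(env)   # labeled on an open PR -> deploy
        else:
            destroy.append(env)  # unlabeled or PR closed -> clean up
    return {"deploy_envs": deploy, "destroy_envs": destroy}

env_label = {"preview": "deploy", "qa1": "deploy/qa1", "qa2": "deploy/qa2"}
out = preview_controller(env_label, pr_labels=["deploy/qa1"], pr_open=True)
print(out)
```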
## Usage
Use `github-action-preview-environment-controller` in a Pull Request-triggered pipeline, and use its outputs to determine
which environments should be deployed and which should be cleaned up.
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Preview deployments controller
uses: cloudposse/github-action-preview-environment-controller@main
id: controller
with:
labels: ${{ toJSON(github.event.pull_request.labels.*.name) }}
open: ${{ github.event.pull_request.state == 'open' }}
env-label: |
preview: deploy
qa1: deploy/qa1
qa2: deploy/qa2
outputs:
labels_env: ${{ steps.controller.outputs.labels_env }}
deploy_envs: ${{ steps.controller.outputs.deploy_envs }}
destroy_envs: ${{ steps.controller.outputs.destroy_envs }}
deploy:
runs-on: ubuntu-latest
if: ${{ needs.context.outputs.deploy_envs != '[]' }}
strategy:
matrix:
env: ${{ fromJson(needs.context.outputs.deploy_envs) }}
environment:
name: ${{ matrix.env }}
needs: [ context ]
steps:
- name: Deploy
uses: example/deploy@main
id: deploy
with:
environment: ${{ matrix.env }}
operation: deploy
destroy:
runs-on: ubuntu-latest
if: ${{ needs.context.outputs.destroy_envs != '[]' }}
strategy:
matrix:
env: ${{ fromJson(needs.context.outputs.destroy_envs) }}
needs: [ context ]
steps:
- name: Destroy
uses: example/deploy@main
id: deploy
with:
environment: ${{ matrix.env }}
operation: destroy
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| env-label | YAML formatted \{environment\}: \{label\} map | preview: deploy | true |
| labels | Existing PR labels | [] | true |
| open | Is PR open? | true | true |
## Outputs
| Name | Description |
|------|-------------|
| deploy\_envs | Environments that need to be deployed |
| destroy\_envs | Environments that need to be destroyed |
| labels\_env | JSON formatted \{label\}: \{environment\} map |
---
## preview-labels-cleanup
# GitHub Action: `preview-labels-cleanup`
Remove labels used to control deployments with [github-action-preview-environment-controller](https://github.com/cloudposse/github-action-preview-environment-controller)
## Introduction
When a pull request is closed, we need to clean up all the labels that specify the preview environments where the PR was deployed.
This GitHub Action integrates with the [github-action-preview-environment-controller](https://github.com/cloudposse/github-action-preview-environment-controller) action.
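The cleanup decision itself is simple; here is a sketch of it (our code, not the action's): from the `labels_env` map produced by the controller, pick the labels that map to the environment being torn down, and remove those from the PR.

```python
# Sketch of the label-cleanup decision: which PR labels point at the
# environment we are tearing down?
def labels_to_remove(labels_env, env):
    return [label for label, mapped_env in labels_env.items() if mapped_env == env]

labels_env = {"deploy": "preview"}
print(labels_to_remove(labels_env, "preview"))
```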
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Preview deployments controller
uses: cloudposse/github-action-preview-environment-controller@v0.7.0
id: controller
with:
labels: ${{ toJSON(github.event.pull_request.labels.*.name) }}
open: ${{ github.event.pull_request.state == 'open' }}
env-label: |
preview: deploy
outputs:
labels_env: ${{ steps.controller.outputs.labels_env }}
destroy:
runs-on: ubuntu-latest
if: ${{ github.event.pull_request.state != 'open' }}
needs: [ context ]
steps:
- name: Cleanup label
uses: cloudposse/github-action-preview-labels-cleanup@main
with:
labels_env: ${{ needs.context.outputs.labels_env }}
env: preview
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| env | Environment | N/A | true |
| labels\_env | JSON formatted \{label\}: \{environment\} map | \{\} | true |
| pr\_number | The number of the pull request, which will default to extracting from the workflow event if not specified. | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
---
## release-branch-manager
# GitHub Action: `release-branch-manager`
GitHub Action for Managing Release Branches
## Introduction
This GitHub Action adopts a streamlined approach to managing release branches, drawing on a trunk-based branching strategy. In this model, the `DEFAULT_BRANCH` consistently represents the most recent release, while release branches are exclusively created for previous major releases, if applicable. This structure simplifies the process for contributors when submitting Pull Requests for bug fixes or backporting modifications to older releases, as it enables them to target a specific major release.
**How it works:** upon publishing a new major release `N`, a corresponding branch for the previous release `N-1` will be automatically generated.
Imagine you have tags like this in your repo:
```
0.1.0
0.2.0
1.0.0
1.1.0
1.2.0
1.2.1
2.0.0
2.1.0
2.2.0
3.0.0
3.1.0 main
```
Upon the first release published event, the "release branch manager" will generate new branches named `release/vN-1`, where N corresponds to the latest tag of each major release. In this case, several new branches will be created:
```
0.1.0
0.2.0 release/v0
1.0.0
1.1.0
1.2.0
1.2.1 release/v1
2.0.0
2.1.0
2.2.0 release/v2
3.0.0
3.1.0 main
```
Note that `3.1.0` is the latest tag, and the release branch manager wouldn't create a release branch for it because the latest major release is maintained in the `main` branch.
If you wish to make changes to `2.2.0`, you must create a pull request against the `release/v2` branch and generate a corresponding release/tag with a major version of `2`, for example, `2.3.0`.
This action requires GitHub releases to follow the [SemVer versioning](https://semver.org/) scheme.
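Using the example tags above, the branch-creation rule can be sketched in Python (our illustration; the real action also honors inputs like `minimal-version`): for every major version except the latest, take its highest SemVer tag and create a `release/vN` branch there.

```python
# Sketch of the release-branch rule: highest tag per major version,
# skipping the latest major (which stays on the default branch).
def release_branches(tags):
    majors = {}
    for tag in tags:
        parsed = tuple(int(part) for part in tag.split("."))
        major = parsed[0]
        if major not in majors or parsed > majors[major][0]:
            majors[major] = (parsed, tag)
    latest_major = max(majors)
    return {
        f"release/v{major}": tag
        for major, (_, tag) in majors.items()
        if major != latest_major  # latest major is maintained on main
    }

tags = ["0.1.0", "0.2.0", "1.0.0", "1.1.0", "1.2.0", "1.2.1",
        "2.0.0", "2.1.0", "2.2.0", "3.0.0", "3.1.0"]
print(release_branches(tags))
```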
## Usage
Example of a workflow that will create release branches for previous major versions. To use it, just add this workflow to your `.github/workflows` directory.
```yaml
name: Manager Release Branch
on:
release:
types:
- published
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: cloudposse/github-action-release-branch-manager@v1
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| dry-run | Run action without pushing changes to upstream | false | false |
| git-user-email | Git user email that will be used for git config | actions-bot@users.noreply.github.com | false |
| git-user-name | Git user name that will be used for git config | actions-bot | false |
| log-level | Log level for this action. Available options: ['off', 'error', 'warn', 'info', 'debug']. Default 'info' | info | false |
| minimal-version | Minimal 'major' version that release branch creation should start from | 0 | false |
| token | GitHub Token used to perform git and GitHub operations | $\{\{ github.token \}\} | false |
## Outputs
| Name | Description |
|------|-------------|
| response | Response in json format for example: \{"succeeded":true,"reason":"CREATED\_BRANCHES","message":"Successfully created release branches","data":\{"release/v3":"3.1.0","release/v2":"2.0.0","release/v1":"1.1.0"\}\} |
---
## release-label-validator
# GitHub Action: `release-label-validator`
This GitHub Action validates that the major label is only assigned to Pull Requests targeting the default branch, enhancing the management of significant changes.
## Introduction
This is a GitHub Action to validate that only Pull Requests targeting the default branch can have the `major` label set. This is useful in combination with the [`release-drafter`](https://github.com/release-drafter/release-drafter) and the Cloud Posse [`release-branch-manager`](https://github.com/cloudposse/github-action-release-branch-manager) GitHub Actions, to ensure the `major` label can only be assigned to Pull Requests created against the default branch, ensuring that significant changes are clearly identified and properly managed.
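The validation rule boils down to one check, sketched here (our code; the branch names and the `default_branch` default are assumptions): reject the PR when it carries the `major` label but does not target the default branch.

```python
# Sketch of the label-validation rule: the 'major' label is only allowed
# on PRs whose base branch is the repository's default branch.
def validate_labels(labels, base_branch, default_branch="main"):
    if "major" in labels and base_branch != default_branch:
        raise ValueError("'major' label is only allowed on PRs targeting the default branch")
    return True

print(validate_labels(["major", "enhancement"], base_branch="main"))
```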
## Usage
```yaml
name: validate-release-labels
on:
pull_request:
types:
- labeled
- unlabeled
- opened
- synchronize
- reopened
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: cloudposse/github-action-release-label-validator@v1
```
---
## run-ecspresso
# GitHub Action: `run-ecspresso`
Run ECS task with [Ecspresso](https://github.com/kayac/ecspresso)
## Introduction
This GitHub Action runs an ECS task with [ecspresso](https://github.com/kayac/ecspresso), a deployment tool for Amazon ECS.
## Usage
```yaml
name: Pull Request
on:
push:
branches: [ 'main' ]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Example action
uses: cloudposse/example-github-action-run-ecspresso@main
id: example
with:
image: 1111111111111.dkr.ecr.us-east-2.amazonaws.com/cloudposse/example-app-on-ecs
image-tag: latest
region: us-east-2
operation: deploy
cluster: acme-plat-ue2-sandbox
application: acme-plat-ue2-sandbox-example-app-on-ecs
taskdef-path: taskdef.json
overrides: |-
{
"containerOverrides":[
{
"name": "app",
"command": ["/db-migrate.sh"]
}
]
}
outputs:
result: ${{ steps.example.outputs.webapp-url }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| application | Application name | N/A | true |
| cluster | Cluster name | N/A | true |
| debug | Debug mode | false | false |
| ecspresso-version | Ecspresso version | v2.1.0 | false |
| image | Docker image | N/A | true |
| image-tag | Docker image tag | N/A | true |
| mirror\_to\_s3\_bucket | Mirror task definition to s3 bucket | N/A | false |
| overrides | A list of container overrides in JSON format that specify the name of a container in the specified task definition and the overrides it should receive. | \{\} | false |
| region | AWS Region | N/A | true |
| taskdef-path | Task definition path | N/A | true |
| timeout | Ecspresso timeout | 5m | false |
| use\_partial\_taskdefinition | NOTE: Experimental. Load templated task definition from S3 bucket, which is created by the `ecs-service` component. This is useful when you want to manage the task definition in the infrastructure repository and the application repository. The infrastructure repository manages things like Volumes and EFS mounts, and the Application repository manages the application code and environment variables. | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
| webapp-url | Web Application url |
---
## secret-outputs
# GitHub Action: `secret-outputs`
This GitHub Action implements a [workaround](https://nitratine.net/blog/post/how-to-pass-secrets-between-runners-in-github-actions/) for the problem
[`Combining job outputs with masking leads to empty output`](https://github.com/actions/runner/issues/1498).
The behavior is described in the
[GitHub Actions documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idoutputs):
`Outputs containing secrets are redacted on the runner and not sent to GitHub Actions`.
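The mechanics can be illustrated with a toy symmetric cipher (purely illustrative — the real action uses gpg, and this XOR scheme must not be used for real secrets): `encode` turns the value into a ciphertext that GitHub will not redact, so it survives the job boundary, and `decode` recovers it with the shared passphrase.

```python
# Toy stand-in for the encode/decode operations: a sha256-derived XOR
# keystream plus base64, so the value round-trips through job outputs.
import base64, hashlib

def _keystream(secret, length):
    key = b""
    counter = 0
    while len(key) < length:
        key += hashlib.sha256(secret.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return key[:length]

def encode(secret, value):
    data = value.encode()
    stream = _keystream(secret, len(data))
    return base64.b64encode(bytes(a ^ b for a, b in zip(data, stream))).decode()

def decode(secret, token):
    data = base64.b64decode(token)
    stream = _keystream(secret, len(data))
    return bytes(a ^ b for a, b in zip(data, stream)).decode()

role = "arn:aws:iam::123456789012:role/admin"
token = encode("passphrase", role)  # safe to pass through job outputs
print(decode("passphrase", token))
```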
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Step with the secret output
id: iam
run: |
echo "role=arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/admin" >> $GITHUB_OUTPUT
- uses: cloudposse/github-action-secret-outputs@main
id: role
with:
## PASSWORD is a gpg passphrase stored in Github Secrets.
secret: ${{ secrets.PASSWORD }}
op: encode
in: ${{ steps.iam.outputs.role }}
outputs:
role: ${{ steps.role.outputs.out }}
usage:
runs-on: ubuntu-latest
needs: [context]
steps:
- uses: cloudposse/github-action-secret-outputs@main
id: role
with:
## PASSWORD is a gpg passphrase stored in Github Secrets.
secret: ${{ secrets.PASSWORD }}
op: decode
in: ${{ needs.context.outputs.role }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: ${{ steps.role.outputs.out }}
aws-region: us-east-2
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| in | Input data | N/A | true |
| op | Operation to perform (encode or decode) | encode | true |
| secret | Secret to encrypt/decrypt data | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
| out | Result of encryption/decryption |
---
## seek-deployment
# GitHub Action: `seek-deployment`
Get GitHub deployment object by ref and environment name
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Seek deployment
uses: cloudposse/github-action-seek-deployment@main
id: deployment
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
environment: dev
ref: ${{ github.event.pull_request.head.ref }}
status: success
outputs:
id: "${{ steps.deployment.outputs.id }}"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| environment | Environment name | N/A | true |
| github-token | The GitHub token | $\{\{ github.token \}\} | true |
| ref | Branch or commit SHA | N/A | true |
| status | Deployment status | N/A | false |
## Outputs
| Name | Description |
|------|-------------|
| id | Top Deployment ID |
| ids | All Matching Deployment IDs |
---
## setup-atmos
# GitHub Action: `setup-atmos`
Install atmos for use in GitHub Actions
## Introduction
This repo contains a GitHub Action to setup [atmos](https://github.com/cloudposse/atmos) for use in GitHub Actions. It
installs the specified version of atmos and adds it to the `PATH` so it can be used in subsequent steps. In addition,
it optionally installs a wrapper script that will capture the `stdout`, `stderr`, and `exitcode` of the `atmos`
command and make them available to subsequent steps via outputs of the same name.
## Usage
```yaml
steps:
- uses: hashicorp/setup-terraform@v2
- name: Setup atmos
uses: cloudposse/github-action-setup-atmos@v2
```
To install a specific version of atmos, set the `atmos-version` input:
```yaml
steps:
- uses: hashicorp/setup-terraform@v2
- name: Setup atmos
uses: cloudposse/github-action-setup-atmos@v2
with:
atmos-version: 0.15.0
```
The wrapper script installation can be skipped by setting the `install-wrapper` input to `false`:
```yaml
steps:
- uses: hashicorp/setup-terraform@v2
- name: Setup atmos
uses: cloudposse/github-action-setup-atmos@v2
with:
install-wrapper: false
```
If the wrapper script was installed, subsequent steps of the GitHub Action can use it to capture the `stdout`, `stderr`, and
`exitcode`:
```yaml
steps:
- uses: hashicorp/setup-terraform@v2
- name: Setup atmos
uses: cloudposse/github-action-setup-atmos@v2
with:
install-wrapper: true
- name: Run atmos
id: atmos
run: atmos terraform plan
- run: echo ${{ steps.atmos.outputs.stdout }}
- run: echo ${{ steps.atmos.outputs.stderr }}
- run: echo ${{ steps.atmos.outputs.exitcode }}
```
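The captured outputs can also drive later workflow logic. A sketch (the step ids and the failure-reporting step are illustrative, not part of the action) that surfaces the error output only when the command failed:

```yaml
steps:
  - uses: hashicorp/setup-terraform@v2
  - name: Setup atmos
    uses: cloudposse/github-action-setup-atmos@v2
  - name: Run atmos
    id: atmos
    # Let the workflow continue so a later step can inspect the exit code
    continue-on-error: true
    run: atmos terraform plan
  - name: Report failure
    if: ${{ steps.atmos.outputs.exitcode != '0' }}
    run: |
      echo "atmos exited with ${{ steps.atmos.outputs.exitcode }}"
      echo "${{ steps.atmos.outputs.stderr }}"
```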
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| atmos-version | Version Spec of the version to use. Examples: 1.x, 10.15.1, >=10.15.0. | latest | false |
| install-wrapper | Flag to indicate if the wrapper script will be installed to wrap subsequent calls of the `atmos` binary and expose its STDOUT, STDERR, and exit code as outputs named `stdout`, `stderr`, and `exitcode` respectively. Defaults to `true`. | true | false |
| token | Used to pull atmos distributions from Cloud Posse's GitHub repository. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. | $\{\{ github.server\_url == 'https://github.com' && github.token \|\| '' \}\} | false |
## Outputs
| Name | Description |
|------|-------------|
| atmos-version | The installed atmos version. |
---
## spacelift-stack-deploy
# GitHub Action: `spacelift-stack-deploy`
Trigger a Spacelift stack run synchronously
## Introduction
[Spacelift](https://spacelift.io) is a sophisticated, continuous integration
and deployment (CI/CD) platform for infrastructure-as-code.
This GitHub Action triggers a Spacelift stack run to provision infrastructure.
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
infrastructure:
runs-on: ubuntu-latest
steps:
- uses: cloudposse/github-action-spacelift-stack-deploy@main
id: spacelift
with:
stack: eks-cluster
github_token: ${{ secrets.PUBLIC_REPO_ACCESS_TOKEN }}
organization: acme
api_key_id: ${{ secrets.SPACELIFT_API_KEY_ID }}
api_key_secret: ${{ secrets.SPACELIFT_API_KEY_SECRET }}
outputs:
outputs: ${{ steps.spacelift.outputs.outputs }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| api\_key\_id | API Key ID | N/A | true |
| api\_key\_secret | API Key Secret | N/A | true |
| autodeploy | If true, automatically deploy the stack without manual confirmation | false | false |
| github\_token | GitHub Token (Required to install Spacelift CLI) | N/A | true |
| organization | Organization name | N/A | true |
| stack | Stack name | N/A | true |
## Outputs
| Name | Description |
|------|-------------|
| outputs | Stack outputs |
---
## sync-docker-repos
# GitHub Action: `sync-docker-repos`
GitHub Action to sync two docker repositories.
## Introduction
GitHub Action to sync two docker repositories
## Usage
Below is an example workflow that uses the `github-action-sync-docker-repos` action to sync a Docker Hub repository
with an AWS ECR repository.
```yaml
jobs:
example:
runs-on: ubuntu-latest
steps:
- name: Configure AWS credentials
id: login-aws
uses: aws-actions/configure-aws-credentials@v2
with:
aws-region: us-east-1
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
- name: Login to Amazon ECR Private
id: login-ecr
uses: aws-actions/amazon-ecr-login@v1
with:
mask-password: "true"
- name: sync
uses: cloudposse/github-action-sync-docker-repos@main
with:
src: busybox
dest: 111111111111.dkr.ecr.us-east-1.amazonaws.com
dest-credentials: "${{ steps.login-ecr.outputs.docker_username_111111111111_dkr_ecr_us_east_1_amazonaws_com }}:${{ steps.login-ecr.outputs.docker_password_111111111111_dkr_ecr_us_east_1_amazonaws_com }}"
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| dest | The destination repository to sync to. | N/A | true |
| dest-credentials | The destination repository credentials. | N/A | false |
| override-arch | Override the architecture of the src image. | N/A | false |
| override-multi-arch | If one of the images in src refers to a list of images, instead of copying just the image which matches the current OS and architecture, attempt to copy all of the images in the list, and the list itself. | true | false |
| override-os | Override the operating system of the src image. | N/A | false |
| src | The source repository to sync from. | N/A | true |
| src-credentials | The source repository credentials. | N/A | false |
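For example, the override inputs can be combined to copy only a single platform's image rather than the full manifest list. This is a sketch based on the inputs above; the source and destination repositories are illustrative:

```yaml
- name: sync single-arch
  uses: cloudposse/github-action-sync-docker-repos@main
  with:
    src: busybox
    dest: 111111111111.dkr.ecr.us-east-1.amazonaws.com
    # Copy only the linux/arm64 variant instead of the whole manifest list
    override-multi-arch: false
    override-os: linux
    override-arch: arm64
```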
---
## terraform-auto-context
# GitHub Action: `terraform-auto-context`
This is a GitHub Action that automatically checks the `context.tf` file in the calling repo against the most recent version published by Cloud Posse. If the repo's version is found to be out of date, a pull request is opened to update it.
## Usage
Copy this repository's `.github/workflows/auto-context.yml` file into the `.github/workflows` folder of the repository to which you'd like to add Terraform Auto-context functionality.
This will cause Auto-context functionality to execute daily at the time specified by the `cron` option (all times are UTC).
If you'd like to modify the schedule of the Auto-context action, you can follow the standard [cron](https://en.wikipedia.org/wiki/Cron) syntax, as detailed below:
```
schedule:
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
# Update README.md nightly at 4am UTC
- cron: '0 4 * * *'
```
The default username and email address attached to the commits generated by the Auto-context action are `cloudpossebot` and `11232728+cloudpossebot@users.noreply.github.com`. If you would like to change these defaults, please set the `bot-name` and `bot-email` inputs in the workflow file:
```
with:
bot-name: [name]
bot-email: [email]
```
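Putting the schedule and bot identity together, a minimal `auto-context.yml` might look like the following. This is a sketch assembled from the inputs documented below; the job name and checkout step are illustrative, so consult this repository's own workflow file for the canonical version:

```yaml
name: auto-context
on:
  schedule:
    # Check for context.tf updates nightly at 4am UTC
    - cron: '0 4 * * *'
jobs:
  auto-context:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: cloudposse/github-action-auto-context@main
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          bot-name: cloudpossebot
          bot-email: 11232728+cloudpossebot@users.noreply.github.com
```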
## Quick Start
Here's how to get started...
1. Copy this repository's `.github/workflows/auto-context.yml` file into the `.github/workflows` folder of the repository to which you'd like to add Terraform Auto-context functionality.
2. (Optional) Update the `main` pin inside `auto-context.yml` to a fixed version. Consult https://github.com/cloudposse/github-action-auto-context/releases for a list of available versions.
## Examples
Here are some real world examples:
- [`github-action-auto-context`](https://github.com/cloudposse/github-action-auto-context/.github/workflows/auto-context.yml) - Cloud Posse's self-testing Auto-context GitHub Action
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| bot-email | Email to write commits under | N/A | false |
| bot-name | Username to write commits under | cloudpossebot | false |
| branch-name | Name of branch to commit updated context.tf to | auto-update/context.tf | false |
| token | Token for authenticating to GitHub API server. No special permissions needed. | N/A | true |
---
## terraform-plan-storage
# GitHub Action: `terraform-plan-storage`
A GitHub Action to securely store Terraform plan files in a cloud storage (S3 or Azure Blob Storage) with metadata storage in cloud document database (DynamoDB or CosmosDB).
## Introduction
A Github Action to securely store Terraform plan files in a cloud storage (S3 or Azure Blob Storage) with metadata
storage in cloud document database (DynamoDB or CosmosDB). This is useful in CI/CD pipelines where you want to store
the plan files when a feature branch is opened and applied when merged.
## Usage
## AWS (default)
Standard usage for this action is with AWS. In AWS, we store Terraform plan files in a S3 Bucket and store metadata in DynamoDB. Specify the DynamoDB table name and S3 bucket name with `tableName` and `bucketName` respectively.
The filepath in S3 and the attributes in DynamoDB will use the given `component` and `stack` values to update or create a unique target for each Terraform plan file.
The plan file itself is pulled from or written to a local file path. Set this with `planPath`.
Finally, choose whether to store the plan file or retrieve an existing plan file. To create or update a plan file, set `action` to `storePlan`. To pull an existing plan file, set `action` to `getPlan`.
```yaml
- name: Store Plan
uses: cloudposse/github-action-terraform-plan-storage@v1
id: store-plan
with:
action: storePlan
planPath: my-plan.tfplan
component: mycomponent
stack: core-mycomponent-use1
tableName: acme-terraform-plan-metadata
bucketName: acme-terraform-plans
- name: Get Plan
uses: cloudposse/github-action-terraform-plan-storage@v1
id: get-plan
with:
action: getPlan
planPath: my-plan.tfplan
component: mycomponent
stack: core-mycomponent-use1
tableName: acme-terraform-plan-metadata
bucketName: acme-terraform-plans
```
## Azure
This action also supports Azure. In Azure, we store Terraform plan files with Blob Storage and store metadata in Cosmos DB.
To use the Azure implementation rather than the default AWS implementation, specify `planRepositoryType` as `azureblob` and `metadataRepositoryType` as `cosmos`. Then pass the Blob Account and Container names with `blobAccountName` and `blobContainerName` and the Cosmos Container name, Database name, and Endpoint with `cosmosContainerName`, `cosmosDatabaseName`, and `cosmosEndpoint`.
Again set the `component`, `stack`, `planPath`, and `action` in the same manner as AWS above.
```yaml
- name: Store Plan
uses: cloudposse/github-action-terraform-plan-storage@v1
id: store-plan
with:
action: storePlan
planPath: my-plan.tfplan
component: mycomponent
stack: core-mycomponent-use1
planRepositoryType: azureblob
blobAccountName: tfplans
blobContainerName: plans
metadataRepositoryType: cosmos
cosmosContainerName: terraform-plan-storage
cosmosDatabaseName: terraform-plan-storage
cosmosEndpoint: "https://my-cosmo-account.documents.azure.com:443/"
- name: Get Plan
uses: cloudposse/github-action-terraform-plan-storage@v1
id: get-plan
with:
action: getPlan
planPath: my-plan.tfplan
component: mycomponent
stack: core-mycomponent-use1
planRepositoryType: azureblob
blobAccountName: tfplans
blobContainerName: plans
metadataRepositoryType: cosmos
cosmosContainerName: terraform-plan-storage
cosmosDatabaseName: terraform-plan-storage
cosmosEndpoint: "https://my-cosmo-account.documents.azure.com:443/"
```
## Google Cloud
This action supports Google Cloud Platform (GCP). In GCP, we store Terraform plan files in Google Cloud Storage and metadata in Firestore.
To use the GCP implementation, specify `planRepositoryType` as `gcs` and `metadataRepositoryType` as `firestore`, then provide the following GCP-specific settings: `googleProjectId` to specify the project for both GCS bucket and Firestore, `bucketName` for GCS storage, and `googleFirestoreDatabaseName`/`googleFirestoreCollectionName` for Firestore metadata.
The `component`, `stack`, `planPath`, and `action` parameters work the same way as in AWS and Azure examples.
```yaml
- name: Store Plan
uses: cloudposse/github-action-terraform-plan-storage@v2
id: store-plan
with:
action: storePlan
planPath: my-plan.tfplan
component: mycomponent
stack: core-mycomponent-use1
planRepositoryType: gcs
metadataRepositoryType: firestore
bucketName: my-terraform-plans
gcpProjectId: my-gcp-project
gcpFirestoreDatabaseName: terraform-plan-metadata
gcpFirestoreCollectionName: terraform-plan-storage
- name: Get Plan
uses: cloudposse/github-action-terraform-plan-storage@v2
id: get-plan
with:
action: getPlan
planPath: my-plan.tfplan
component: mycomponent
stack: core-mycomponent-use1
planRepositoryType: gcs
metadataRepositoryType: firestore
bucketName: my-terraform-plans
gcpProjectId: my-gcp-project
gcpFirestoreDatabaseName: terraform-plan-metadata
gcpFirestoreCollectionName: terraform-plan-storage
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| action | which action to perform. Valid values are: 'storePlan', 'getPlan', 'taintPlan' | storePlan | true |
| blobAccountName | the name of the Azure Blob Storage account to store the plan file | N/A | false |
| blobContainerName | the name of the Azure Blob Storage container to store the plan file | N/A | false |
| bucketName | the name of the S3 or GCS bucket to store the plan file | terraform-plan-storage | false |
| commitSHA | Commit SHA to use for fetching plan | | false |
| component | the name of the component corresponding to the plan file | N/A | false |
| cosmosConnectionString | the connection string to the CosmosDB account to store the metadata | N/A | false |
| cosmosContainerName | the name of the CosmosDB container to store the metadata | N/A | false |
| cosmosDatabaseName | the name of the CosmosDB database to store the metadata | N/A | false |
| cosmosEndpoint | the endpoint of the CosmosDB account to store the metadata | N/A | false |
| failOnMissingPlan | Fail if plan is missing | true | false |
| gcpFirestoreCollectionName | the name of the Firestore collection to store the metadata | terraform-plan-storage | false |
| gcpFirestoreDatabaseName | the name of the Firestore database to store the metadata | (default) | false |
| gcpProjectId | the Google Cloud project ID for GCP services (GCS, Firestore) | N/A | false |
| metadataRepositoryType | the type of repository where the metadata is stored. Valid values are: 'dynamo', 'cosmodb', 'firestore' | dynamo | false |
| planPath | path to the Terraform plan file. Required for 'storePlan' and 'getPlan' actions | N/A | false |
| planRepositoryType | the type of repository where the plan file is stored. Valid values are: 's3', 'azureblob', 'gcs' | s3 | false |
| stack | the name of the stack corresponding to the plan file | N/A | false |
| tableName | the name of the dynamodb table to store metadata | terraform-plan-storage | false |
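The inputs above also list a `taintPlan` action, which the examples do not demonstrate. Assuming it addresses a stored plan with the same inputs as `getPlan` (this is a guess from the input shape; confirm the exact semantics against the action's documentation), an invocation might look like:

```yaml
- name: Taint Plan
  uses: cloudposse/github-action-terraform-plan-storage@v1
  with:
    action: taintPlan
    component: mycomponent
    stack: core-mycomponent-use1
    tableName: acme-terraform-plan-metadata
    bucketName: acme-terraform-plans
```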
## Outputs
| Name | Description |
|------|-------------|
---
## terratest
# GitHub Action: `terratest`
A GitHub Action to run Terratest tests and post the results as a build artifact.
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Run Terratest
uses: cloudposse/github-action-terratest@main
with:
sourceDir: test/src
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| sourceDir | The directory containing the source code to test | . | true |
---
## validate-codeowners
# GitHub Action: `validate-codeowners`
This is a Github Action to validate the `CODEOWNERS` file by running a series of checks against the `CODEOWNERS` file to ensure that it's valid and well-linted.
Ensuring your repository's `CODEOWNERS` file is valid can be critical to the development process if, for instance, your project uses [branch protection](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches) conditions that rely on definitions in `CODEOWNERS`.
## Usage
Copy the `.github/workflows/validate-codeowners.yml` file from this repository into the `.github/workflows` folder of the repository to which you'd like to add Validate `CODEOWNERS` functionality, and ensure that you are using an appropriate token in the workflow file.
This will cause the validation functionality to execute whenever any event occurs on any pull request.
## Quick Start
Here's how to get started...
1. Copy the `.github/workflows/validate-codeowners.yml` file from this repository into the `.github/workflows` folder of the repository to which you'd like to add Validate CODEOWNERS functionality.
2. Replace `${{ secrets.CODEOWNERS_VALIDATOR_TOKEN_PUBLIC }}` with the name of a token whose permissions are in line with your target repo's requirements, according to the instructions [here](https://github.com/mszostok/codeowners-validator/blob/main/docs/gh-token.md).
3. (Optional) Update the `main` pin inside `validate-codeowners.yml` to a fixed version. Consult https://github.com/cloudposse/github-action-validate-codeowners/releases for a list of available versions.
## Examples
Here's a real world example:
- [`github-action-validate-codeowners`](https://github.com/cloudposse/github-action-validate-codeowners/.github/workflows/validate-codeowners.yml) - Cloud Posse's self-testing Validate CODEOWNERS GitHub Action
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| token | GitHub token (see: https://github.com/mszostok/codeowners-validator/blob/main/docs/gh-token.md) | N/A | false |
---
## wait-commit-status
# GitHub Action: `wait-commit-status`
Wait for commit status
## Introduction
Polls the GitHub API for a given commit and checks its commit status.
## Usage
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Wait commit status
uses: cloudposse/github-action-wait-commit-status@main
with:
repository: ${{ github.repository }}
sha: ${{ github.sha }}
status: continuous-delivery/example-app
lookup: "success"
token: ${{ github.token }}
check-timeout: 120
check-retry-count: 5
check-retry-interval: 20
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| check-retry-count | Check retry count | 5 | false |
| check-retry-interval | Check retry interval (in seconds) | 10 | false |
| expected\_state | Commit status state wait for. Valid values 'success', 'error', 'failure', 'pending' | success | false |
| repository | Repository | N/A | true |
| sha | Commit SHA | N/A | true |
| status | Commit status name | N/A | true |
| token | Github authentication token | $\{\{ github.token \}\} | false |
---
## yaml-config-query
# GitHub Action: `yaml-config-query`
Define YAML document, filter it with JSON query and get result as outputs
## Introduction
This utility action lets you declare a YAML-structured document as an input and retrieve parts of it as action outputs,
referenced using a JQ query.
It is useful for simplifying complex GitHub Actions workflows in a number of ways.
For examples, see the [usage](#usage) section.
## Migration `v0` to `v1`
There is an issue: [The query contains `true` or `false` fails with an error](https://github.com/alexxander/jq-tools/issues/4).
The workaround is to put quotes around `"true"` and `"false"` in a query.
To migrate from `v0` to `v1`, quote all `true`/`false` literals in your queries, including GitHub Actions substitutions that resolve to those values.
### Example
* `query: .true` replace with `query: ."true"`
* `query: .${{ inputs.from == '' }}` replace with `query: ."${{ inputs.from == '' }}"`
## Usage
### Define constants
```yaml
name: Pull Request
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
jobs:
demo:
runs-on: ubuntu-latest
steps:
- name: Context
id: context
uses: cloudposse/github-action-yaml-config-query@main
with:
config: |
image: acme/example
tag: sha-${{ github.sha }}
- run: |
docker run ${{ steps.context.outputs.image }}:${{ steps.context.outputs.tag }}
```
### Implement if/else
```yaml
name: Promote
on:
workflow_call:
inputs:
from:
required: false
type: string
jobs:
demo:
runs-on: ubuntu-latest
steps:
- name: Context
id: from
uses: cloudposse/github-action-yaml-config-query@main
with:
query: ."${{ inputs.from == '' }}"
config: |-
true:
tag: ${{ github.sha }}
false:
tag: ${{ inputs.from }}
- run: |
docker tag acme/example:${{ steps.context.outputs.tag }}
```
### Implement switch
```yaml
name: Build
on:
pull_request:
branches: [ 'main' ]
types: [opened, synchronize, reopened]
push:
branches: [ main ]
release:
types: [published]
jobs:
context:
runs-on: ubuntu-latest
steps:
- name: Context
id: controller
uses: cloudposse/github-action-yaml-config-query@main
with:
query: .${{ github.event_name }}
config: |-
pull_request:
build: true
promote: false
test: true
deploy: ["preview"]
push:
build: true
promote: false
test: true
deploy: ["dev"]
release:
build: false
promote: true
test: false
deploy: ["staging", "production"]
outputs:
build: ${{ steps.controller.outputs.build }}
promote: ${{ steps.controller.outputs.promote }}
test: ${{ steps.controller.outputs.test }}
deploy: ${{ steps.controller.outputs.deploy }}
build:
needs: [context]
if: ${{ needs.context.outputs.build }}
uses: ./.github/workflows/reusable-build.yaml
test:
needs: [context, build]
if: ${{ needs.context.outputs.test }}
uses: ./.github/workflows/reusable-test.yaml
promote:
needs: [context]
if: ${{ needs.context.outputs.promote }}
uses: ./.github/workflows/reusable-promote.yaml
deploy:
needs: [context]
if: ${{ needs.context.outputs.deploy != '[]' }}
strategy:
matrix:
environment: ${{ fromJson(needs.context.outputs.deploy) }}
uses: ./.github/workflows/reusable-deploy.yaml
with:
environment: ${{ matrix.environment }}
```
## Inputs
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| config | YAML config | N/A | true |
| query | JQ Query | . | true |
---
## GitHub Actions (Library)
import Intro from '@site/src/components/Intro'
import DocCardList from '@theme/DocCardList'
In this library you'll find all the GitHub Actions we've implemented to solve common CI/CD challenges.
---
## Reference Architecture Overview
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import DocCardList from '@theme/DocCardList';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
Cloud Posse's reference architecture is a commercial, enterprise-ready infrastructure solution backed by [professional support with SLA](/support). Built on open-source foundations with the industry's [largest library of Terraform modules](/reference) and a comprehensive framework using [Atmos](https://atmos.tools), it provides a proven path to production-grade AWS infrastructure without the risk of going it alone.
```mermaid
graph LR
A(Build Your Foundation) --> B(Set Up Your Platform)
B --> C(Deploy Your Apps)
C --> D(Monitor Everything)
D --> E(Upgrade &\n Maintain)
E --> E
```
Before you jump into the Cloud Posse reference architecture, let’s review what makes it tick. Cloud Posse has helped companies—from scrappy startups to massive enterprises—win big with Terraform. But here’s the key: we do things differently. Everything is based on open source that your team owns and operates. We’re not another enterprise platform with hidden lock-in. If your team depends on our work, we offer [support](/support) to help you move faster and build with confidence.
Our goal? To create a collaborative ecosystem where everyone, regardless of the company, can work together on infrastructure, so we stop reinventing the wheel.
How do we make this magic happen? First, we built the industry's largest library of Terraform modules for AWS, Datadog, and GitHub Actions. Then, we crafted reusable components to give you rock-solid ways to set up your infrastructure. Finally, we wrapped it all up in a neat framework using [Atmos](https://atmos.tools) that ties everything together, fully automated with GitHub Actions.
## Documentation Structure
This documentation site breaks down SweetOps into the following sections to help you get up and running:
### Learn
This is the section where you go to [**learn our framework**](/learn) for AWS and follow the structured guide to get it set up.
Each section of our "Learn" journey is designed to help you get up and running with SweetOps and the Cloud Posse Reference Architecture.
- **Build your foundation** (Organization, OUs, Accounts, Network & VPCs, IAM & Single Sign-On)
- **Set up your platform** (Kubernetes, ECS, EKS, etc.)
- **Deploy your apps** (CI/CD, GitHub Actions, GitOps)
- **Monitor everything** (Datadog, Prometheus, Grafana, etc., plus monitoring for Security & Compliance)
- **Upgrade & Maintain** (Day-2 operations including upgrades, backups, disaster recovery, etc.)
Inside of each of these sections, you'll find:
- [**Design Decisions**](/tags/design-decision): Up-to-date context on the decisions for implementing well-built infrastructure
- [**Setup**](/tags/): Shows how to set up a specific layer or resource in easy-to-follow steps.
- [**How-To Guides & Tutorials**](/tags/tutorial): Show how to solve specific problems with SweetOps via a series of easy-to-follow steps.
Then we include some of our [Best Practices](/best-practices) and [Architectural Design Decisions](/resources/adrs) to help you understand the reasoning behind our choices.
### Reference
The [reference documentation](/reference) is where you find the underlying building blocks such as Terraform modules & components, GitHub Actions, and more.
This is where you go when you need to look up how a particular component or module works. When you're trying to build something, it's also the first place to look to see if we already support it. It's the nuts and bolts of the SweetOps framework for AWS.
### Community
The [**Community**](/community) section is for those who want to engage with the SweetOps community and get support.
- Join our Slack community
- Attend our weekly office hours
- Contribute back to the project
## Who is this documentation for?
This documentation is written for DevOps or platform engineering teams that want an opinionated way to build software platforms in the cloud.
If the below sounds like you, then SweetOps is what you’re looking for:
1. **You’re on AWS** (the majority of our modules are for AWS)
2. **You’re using Terraform** as your primary IaC tool (and not Cloud Formation)
3. **Your platform needs to be secure** and potentially requires passing compliance audits (PCI, SOC2, HIPAA, HITRUST, FedRAMP, etc.)
4. You don’t want to reinvent the wheel
With SweetOps you can implement the following complex architectural patterns with ease:
1. An AWS multi-account Landing Zone built on strong, well-established principles including Separation of Concerns and Principle of Least Privilege (POLP).
2. Multi-region, globally available application environments with disaster recovery capabilities.
3. Foundational AWS-focused security practices that make complex compliance audits a breeze.
4. Microservice architectures that are ready for massive scale running on Docker and Kubernetes.
5. Reusable service catalogs and components to promote reuse across an organization and accelerate adoption
## What are the alternatives?
The reference architecture is comparable to various other solutions that bundle ready-to-go Terraform "templates" and offer subscription plans for access to their modules.
How does it differentiate from these solutions?
1. **It’s based 100% on Open Source**: SweetOps [is on GitHub](https://github.com/cloudposse) and is free to use with no strings attached under Apache 2.0.
2. **It’s comprehensive**: SweetOps is not only about Terraform. It provides a framework with conventions for building cloud-native platforms that are security-focused, Kubernetes or ECS-based, with comprehensive monitoring and incident management, and driven by continuous delivery.
3. **It’s community-focused**: SweetOps has [over 9500 users in Slack](https://sweetops.com/slack/), and well-attended weekly [office hours](https://cloudposse.com/office-hours/).
Now that you know what the reference architecture is about, you're ready to get started with your first project.
---
## Choose Your Path
import Intro from "@site/src/components/Intro";
import ActionCard from "@site/src/components/ActionCard";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PrimaryCTA from "@site/src/components/PrimaryCTA";
import SecondaryCTA from "@site/src/components/SecondaryCTA";
import Steps from "@site/src/components/Steps";
There are two easy ways to get started. You can dive right in with our Quickstart documentation if you're a hands-on learner. Alternatively, if you prefer Cloud Posse to do it for you, try our Jumpstart. Pick what works best for you! Plus, we provide [multiple support options](/support) if you get stuck.
All of the documentation on this site corresponds to our [AWS Reference Architecture](https://cloudposse.com/services).
We give away all the tools for free—[Terraform modules](https://github.com/cloudposse), [components](https://github.com/cloudposse-terraform-components), [Atmos](https://atmos.tools), and TONS more. But the system that ties it all together? That’s what we sell. Our reference architecture funds the Open Source that powers your business.
Our Quickstart provides an end-to-end configuration of our AWS reference architecture [customized to your needs](/quickstart/kickoff/#-review-design-decisions) and implemented by you at your own pace.
You can start today, by following along with our [Quickstart documentation](/quickstart) to get a sense of what's involved.
To get started, roll up your sleeves and follow the steps below to start building out your infrastructure!
- [Buy our "Quickstart"](https://cloudposse.com/pricing) to receive all the configurations.
- We’ll send you a form so you can share your [Design Decisions](/quickstart/kickoff/#-review-design-decisions) with us.
- Then we’ll schedule a [kick off call](/quickstart/kickoff/) to review them with you.
- Receive tailored configurations in 2-3 business days after the kick off call.
Every investment in Cloud Posse—whether through QuickStart, JumpStart, or Support—helps us keep building, maintaining, and improving the open source ecosystem for everyone.
When you invest in Cloud Posse, you’re not just helping your team—you’re strengthening the ecosystem your business depends on.
### Just need a little help?
If you just need a little help getting started, we offer multiple [support options](/support) to help you get unstuck.
Our [Jumpstart accelerator](https://cloudposse.com/services) provides an end-to-end implementation by Cloud Posse of this reference architecture in your AWS organization, customized to your needs, with a guaranteed outcome, fixed price, predictable timeline, and a money-back guarantee.
If you need assistance, our Jumpstart service provides an end-to-end implementation in your AWS organization. We swiftly implement the reference architecture, tailored to your design decisions, with a guaranteed outcome, fixed price, timeline, and a hassle-free money-back guarantee.
---
## Action Items
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
import Admonition from '@theme/Admonition';
import Note from '@site/src/components/Note';
Cloud Posse will need a few subscriptions set up by you in order to deploy your infrastructure. Some of these may not apply to all engagements, but please start setting up the relevant subscriptions now.
## Getting Started
Before we can get started, here's the minimum information we need from you.
### 1Password
Cloud Posse will use 1Password to share secrets with your team. You do not need to use 1Password internally, but Cloud Posse will need to use 1Password to transfer secrets.
You can either create your own 1Password Vault and add Cloud Posse as members or request that Cloud Posse create a temporary vault (free for you). However, if Cloud Posse creates that vault for you,
only three users can be added at a time.
**We cannot create AWS accounts until we have access to 1Password.**
### Slack
We should already be using Slack for a shared general channel between Cloud Posse and your team. However, we will need an additional channel for AWS notifications and to access AWS account setup emails. We'll also use this channel for AWS budget alerts.
- [ ] Create a new Slack channel for AWS notifications, for example `#aws-notifications`
- [ ] Invite Cloud Posse
- [ ] [Set Up AWS Email Notifications](/layers/accounts/tutorials/how-to-set-up-aws-email-notifications) with your chosen email address for each account. If you are using plus-addressing, you will only need to connect the primary email address.
- [ ] [Create a Slack Webhook for that same channel](https://api.slack.com/messaging/webhooks). This is required to enable Budget alerts in Slack. Please share the Webhook URL and the final name of the Slack channel with Cloud Posse.
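For reference, posting a message to a Slack incoming webhook is a single JSON `POST`. The sketch below sends a test message so you can confirm the webhook works before sharing it with Cloud Posse; the URL is a placeholder, since Slack generates the real one when you create the webhook.

```shell
# Placeholder — Slack issues the real URL when the webhook is created.
WEBHOOK_URL="${WEBHOOK_URL:-}"

# Slack incoming webhooks accept a JSON body with a "text" field.
payload='{"text":"AWS notifications webhook test"}'

if [ -n "$WEBHOOK_URL" ]; then
  # Send a test message to confirm the webhook works before sharing it.
  curl -sS -X POST -H 'Content-type: application/json' --data "$payload" "$WEBHOOK_URL"
else
  # No URL configured yet — just show the payload that would be sent.
  echo "$payload"
fi
```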
### Create New AWS Root Account (a.k.a. "Payer Account")
We will be launching a new AWS Organization from a single root account. Cloud Posse will be terraforming your entire organization, creating 12-plus accounts, and doing everything from the ground up. We're responsible for configuring SSO, fine-grained IAM roles, and more. We'll need a net-new Organization, so we cannot jeopardize any of your current operations.
Please create a new AWS root account and add the root credentials to 1Password. Cloud Posse will take it from there.
### Share GitHub Repository for Infrastructure as Code
Please create a new repository in your GitHub organization and grant the Cloud Posse team access. We will need GitHub access to create your Infrastructure as Code repository.
### AWS IAM Identity Center (AWS SSO)
In order to connect your chosen IdP to AWS IAM Identity Center (AWS SSO), we will need to configure your provider and create a metadata file. Please follow the steps for your Identity Provider in the relevant linked guide. All steps in AWS will be handled by Cloud Posse.
Please also provision a single test user in your IdP for Cloud Posse to use for testing and add those user credentials to 1Password.
- [Setup AWS Identity Center (SSO)](/layers/identity/aws-sso/)
- GSuite does not automatically sync Users and Groups with AWS Identity Center without additional configuration! If using GSuite as an IdP, consider deploying the [ssosync tool](https://github.com/awslabs/ssosync).
- The official AWS documentation for setting up JumpCloud with AWS IAM Identity Center is not accurate. Instead, please refer to the [JumpCloud official documentation](https://jumpcloud.com/support/integrate-with-aws-iam-identity-center).
### AWS SAML (Optional)
If deploying AWS SAML as an alternative to AWS SSO, we will need a separate configuration and metadata file. Again, please refer to the relevant linked guide.
Please see the following guide and follow the steps to export metadata for your Identity Provider integration. All steps in AWS will be handled by Cloud Posse.
- [Setup AWS SAML](/layers/identity/tutorials/aws-saml/)
## GitHub Self-Hosted Runners
We recommend [RunsOn](/layers/github-actions/runs-on/) for self-hosted GitHub runners. It provides zero infrastructure management, simple setup, and cost-effective pricing.
### Purchase a RunsOn License
RunsOn requires a license to operate. We recommend purchasing the **Commercial License** ($300/year) for production deployments. A free 15-day trial is available if you need to evaluate first.
Please [purchase a RunsOn license](/layers/github-actions/runs-on/#-acquire-a-runson-license) and share the license key with Cloud Posse via 1Password.
If you have an existing deployment using a legacy approach, see:
- [Actions Runner Controller (EKS)](/layers/github-actions/tutorials/eks-github-actions-controller/)
- [Philips Labs GitHub Runners](/layers/github-actions/tutorials/philips-labs-github-runners/)
## Atmos Component Updater Requirements
Cloud Posse will deploy a GitHub Action that will automatically suggest pull requests in your new repository.
To do so, we need to create and install a GitHub App and allow GitHub Actions to create and approve pull requests within your GitHub Organization.
For more on the Atmos Component Updater, see [atmos.tools](https://atmos.tools/integrations/github-actions/component-updater).
### Create and install a GitHub App for Atmos
1. Create a new GitHub App
2. Name this new app whatever you prefer. For example, `Atmos Component Updater`.
3. List a Homepage URL of your choosing. This is required by GitHub, but you can use any URL. For example, use our documentation page: `https://atmos.tools/integrations/github-actions/component-updater/`
4. (Optional) Add an icon for your new app (example provided below)
5. Assign only the following Repository permissions:
```diff
+ Contents: Read and write
+ Pull Requests: Read and write
+ Metadata: Read-only
```
6. Generate a new private key [following the GitHub documentation](https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/managing-private-keys-for-github-apps#generating-private-keys).
7. Share both the App ID and the new private key with Cloud Posse in 1Password.
Feel free to download and use our Atmos icon with your GitHub App!

### Allow GitHub Actions to create and approve pull requests
1. Go to `https://github.com/organizations/YOUR_ORG/settings/actions`
2. Check "Allow GitHub Actions to create and approve pull requests"
### Create `atmos` GitHub Environment
If you grant Cloud Posse `admin` in your new infrastructure repository, we will do this for you.
We recommend creating a new GitHub environment for Atmos. With environments, the Atmos Component Updater workflow will be required to follow any branch protection rules before running or accessing the environment's secrets. Plus, GitHub natively organizes these Deployments separately in the GitHub UI.
1. Open "Settings" for your repository
1. Navigate to "Environments"
1. Select "New environment"
1. Name the new environment, "atmos".
1. In the drop-down next to "Deployment branches and tags", select "Protected branches only"
1. In "Environment secrets", create the two required secrets for the App ID and the App private key created above (and stored in 1Password). We will pull these secrets into GitHub Actions as `secrets.ATMOS_APP_ID` and `secrets.ATMOS_PRIVATE_KEY` respectively.
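To illustrate how a workflow consumes these environment secrets, here is a minimal sketch of a job pinned to the `atmos` environment. The token-exchange step is illustrative only; the actual Component Updater workflow is documented at [atmos.tools](https://atmos.tools/integrations/github-actions/component-updater).

```yaml
jobs:
  atmos-component-updater:
    runs-on: ubuntu-latest
    environment: atmos  # gates the job behind the environment's branch protection rules
    steps:
      # Exchange the GitHub App credentials for a short-lived installation token
      - uses: actions/create-github-app-token@v1
        id: app-token
        with:
          app-id: ${{ secrets.ATMOS_APP_ID }}
          private-key: ${{ secrets.ATMOS_PRIVATE_KEY }}
      # ...the Component Updater steps follow, using steps.app-token.outputs.token
```

Because the job declares `environment: atmos`, GitHub only exposes these secrets after any branch protection rules on the environment are satisfied.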
### Requirements for Purchasing Domains
If we plan to use the `core-dns` account to register domains, we will need to add a credit card directly to that individual account. When the account is ready, please add a credit card to the `core-dns` account following the [AWS documentation](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cc.html#Add-cc).
## Additional Integrations
Confirm if you plan to deploy any of the following integrations. If so, we will need access to these services. If you haven't already signed up for them, please do so soon.
### Datadog Access
Sign up for Datadog following the [How to Sign Up for Datadog?](/layers/monitoring/datadog/tutorials/how-to-sign-up-for-datadog) documentation.
Cloud Posse will also need "admin" access in Datadog to complete the Datadog setup.
## Release Engineering
If your engagement with Cloud Posse includes Release Engineering, we will also need some more things.
### Sign up for GitHub Enterprise (Optional)
GitHub Enterprise is required to support native approval gates on deployments to environments.
Startups can get a discount for the first 20 users. Reach out to GitHub for details.
### Configure GitHub Settings
If we are deploying release engineering as part of the engagement, we will need a few additional items from your team.
- [ ] [Enable GitHub Actions for your GitHub Organization](https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization).
- [ ] [Allow access via fine-grained personal access tokens for your GitHub Organization](https://docs.github.com/en/organizations/managing-programmatic-access-to-your-organization/setting-a-personal-access-token-policy-for-your-organization#restricting-access-by-fine-grained-personal-access-tokens).
- [ ] Create an empty `example-app` private repository in your Organization. We'll deploy an example for release engineering here.
### PATs for ECS with `ecspresso`
- Create one fine-grained PAT with the following permissions.
Please see [Creating a fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-fine-grained-personal-access-token)
This PAT needs read access to your `infrastructure` repository:
```diff
Repository
+ Contents: Read-only
+ Metadata: Read-only
```
- Save the new fine-grained PAT as a GitHub environment secret in the new `example-app` private repository in your Organization.
### PATs for EKS with ArgoCD
ArgoCD requires a number of PATs. Please see [How to set up Authorization for ArgoCD with GitHub PATs](/layers/software-delivery/eks-argocd/tutorials/pats)
---
## FAQ
import Intro from '@site/src/components/Intro';
These are some of the most frequently asked questions by customers during our Kick Off calls.
### What is the difference between a Service Discovery Domain and a Vanity Domain?
This is an extremely common question. Please see [What is the difference between a Vanity and a Service Domain?](/layers/network/faq/#what-is-the-difference-between-a-vanity-and-a-service-domain)
### Do we have to use 1Password?
Yes, for Cloud Posse engagements we only use 1Password to share secrets. You do not need to use 1Password internally, but Cloud Posse will need to use 1Password to transfer secrets to your team. You can either create your own 1Password Vault and add Cloud Posse as members or request that Cloud Posse create a temporary vault (free for you).
### Do we have to create a new Organization?
Yes! We need this single root account to start a new AWS Organization. Cloud Posse will be terraforming your entire organization, creating 12-plus accounts, and doing everything from the ground up. We're responsible for configuring SSO, fine-grained IAM roles, and more. We'll need a net-new Organization, so we cannot jeopardize any of your current operations.
Once created, we will invite your team to join the new Organization.
### How many email addresses do we need to create?
Only one email with `+` addressing is required. This email will be used to create your AWS accounts. For example, `aws+%s@acme.com`.
### What is plus email addressing?
Plus email addressing, also known as plus addressing or subaddressing, is a feature offered by some email providers that allows users to create multiple variations of their email address by adding a "+" sign and a unique identifier after their username and before the "@" symbol.
For example, if the email address is "john.doe@example.com", a user can create variations such as "john.doe+newsletter@example.com" or "john.doe+work@example.com". Emails sent to these variations will still be delivered to the original email address, but the unique identifier can be used to filter or organize incoming emails.
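As a sketch, expanding the `%s` placeholder in the account email format into per-account addresses looks like this (the `acme.com` domain and the account names are placeholders, not your actual values):

```shell
# Base address uses "%s" as the placeholder for the account name,
# matching the format string from the design decisions (e.g. "aws+%s@acme.com").
base='aws+%s@acme.com'   # hypothetical domain — substitute your own

# Expand the placeholder for each AWS account to be created.
for account in root audit network prod staging dev; do
  # printf substitutes the account name into the "%s" slot
  printf "${base}\n" "$account"
done
```

Every generated address (e.g. `aws+prod@acme.com`) delivers to the same `aws@acme.com` mailbox, while still being unique per AWS account.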
### How can we track progress?
We send status updates on Fridays via Slack! Or feel free to reach out anytime for an update.
### Why are the initial Pull Requests so large?
These PRs are so large because we are generating the content for your entire infrastructure repository.
A complete infrastructure setup requires dozens of components, each with Terraform modules, configuration, account setup, and documentation.
We've organized these full infrastructure configurations into "layers", which generally reflect the topics of the handoff calls.
These layers are typically: baseline, accounts, identity, network, spacelift, eks, monitoring, and data, plus a few
miscellaneous additions for smaller addons.
In order to deploy any given layer, we must create all of its content. For example, eks adds 200+ files,
all of which are required to deploy EKS, so we cannot make that PR smaller. However, as the foundation is built out,
these PRs naturally become smaller, as additional layers have fewer requirements.
Regarding your team's internal review, we do not intend for your team to be required to review these massive PRs.
Cloud Posse internally reviews these PRs extensively to ensure that the final product works as intended. Once we're
confident that we've deployed a given layer entirely, then we schedule the handoff calls. A handoff call is intended
to explain a given topic and provide the opportunity for your team to review and provide feedback on any given layer,
as well as answer other questions.
### How can we customize our architecture?
Customizations are typically out of scope, but we can assess each one on a case-by-case basis.
You will learn your environment and gain the confidence to make customizations on your own.
Often we can deploy an example of the customization, but it's up to you to complete the full deployment.
### What if we need more help?
Cloud Posse offers multiple levels of support designed to fit your needs and budget. For more information, see [Support](/support).
For everything else, we provide fanatical support via our Professional Services, exclusive to reference architecture customers. We can help with anything from architecture reviews, security audits, custom development, migrations, and more.
Please [book a call](https://cloudposse.com/meet) to discuss your needs.
---
## Watch All Handoffs
import Slider, { Slide } from '@site/src/components/Slider'
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import PrimaryCTA from '@site/src/components/PrimaryCTA'
import ReactPlayer from 'react-player'
import Intro from '@site/src/components/Intro'
We've organized everything into "layers" that represent the different concerns of our infrastructure. Watch these short videos to get an overview of each layer, the problems we faced, and how we solved them.
Learn about the essential tools Cloud Posse uses to manage infrastructure as code. This guide covers the Geodesic Toolbox Container for standardizing development environments, the Atmos framework for implementing conventions and workflows, Terraform for managing cloud infrastructure, and GitHub Actions for CI/CD automation.
Review how Cloud Posse designs and manages AWS Account architectures using Atmos and Terraform, aligning with the AWS Well-Architected Framework.
Learn how Cloud Posse sets up fine-grained access control for an entire organization using IAM roles, AWS SAML, and AWS IAM Identity Center (SSO). We address the challenges we encountered with various login methods and tools, and introduce our solution of Teams and Team Roles to manage access across multiple AWS accounts.
Understand Cloud Posse’s approach to designing robust and scalable Network and DNS architectures on AWS, with a focus on symmetry, account-level isolation, security, and reusability. We cover essential topics such as account isolation, connecting multiple accounts together using Transit Gateways, deploying AWS Client VPN for remote network access by developers, and differentiating between DNS service discovery and branded vanity domains used by customers.
---
## Get a Jumpstart with Cloud Posse
import Intro from '@site/src/components/Intro';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import SecondaryCTA from '@site/src/components/SecondaryCTA';
import PillBox from '@site/src/components/PillBox';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
Done For You
Our Jumpstart accelerator provides an end-to-end implementation by Cloud Posse of this reference architecture in your AWS organization, customized to your needs, with a guaranteed outcome, fixed price, predictable timeline, and a money-back guarantee.
This documentation will guide you through the end-to-end configuration of our reference architecture for your AWS organization. While Cloud Posse will implement everything for you, we strongly encourage you to follow along by reading this documentation. Also, don't forget we offer [multiple support options](/support) if you get stuck.
## Start your Engagement with Cloud Posse
All of the documentation refers to the prebuilt configurations that we'll implement for you based on our discussions.
We'll assume from here on out that you've already started an engagement with Cloud Posse. If you haven't, please [request a quote](https://cloudposse.com/meet) to get started.
## Schedule your Kickoff Call with Cloud Posse
This is an opportunity to review your design decisions with Cloud Posse, and ask any questions before you get started.
Review Agenda
## Tackle all the Action Items
First, we'll need to collect some information from you before we can get started. Please review the action items and complete them as soon as possible.
Get Started
## Watch the Overview Videos
We've organized everything into "layers" that represent the different concerns of our infrastructure. Watch these short videos to get an overview of each layer, the problems we faced, and how we solved them.
Get Started
---
## Kick Off with Cloud Posse
import Link from "@docusaurus/Link";
import KeyPoints from "@site/src/components/KeyPoints";
import Steps from "@site/src/components/Steps";
import Step from "@site/src/components/Step";
import StepNumber from "@site/src/components/StepNumber";
import Intro from "@site/src/components/Intro";
import ActionCard from "@site/src/components/ActionCard";
import PrimaryCTA from "@site/src/components/PrimaryCTA";
import TaskList from "@site/src/components/TaskList";
import Admonition from "@theme/Admonition";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
The kickoff process for Jumpstart engagements with Cloud Posse ensures a
smooth start with clear communication for a successful project delivery.
During the call, we will confirm contract requirements and set project
expectations. We also cover how Cloud Posse will deploy your infrastructure in
layers using our reference architecture and introduce various [support options available](/support),
including Slack, customer workshops, and office hours.
- **Kickoff Process:** Establish roles, confirm requirements, and set project expectations for a smooth start
- **Implementation Phase:** Understand how we go about provisioning infrastructure in layers using the reference architecture, including comprehensive handoffs with documentation
- **Support and Communication:** Review the multiple [support options](/support), how we'll use Slack, office hours, and detailed documentation to ensure successful engagement
## Preparing for the Kickoff Meeting
This document outlines what to expect from your first call with Cloud Posse. In order to make the most of this meeting, please read through this document and come prepared with questions. In particular, please review the following:
1. Identify stakeholders and establish ownership of the engagement within your Organization.
2. Read through the [Design Decisions](#review-design-decisions) and prepare questions and decisions.
3. Review the list of [Actions Items](#action-items) following this call.
## Kickoff Meeting Agenda
### Introductions
Here we will review who is on the call, what their roles are, and identify our technical point of contact at Cloud Posse. We will also review the working timezones of the teams.
### Project Overview
Cloud Posse will begin deploying your infrastructure [starting with the foundation](/layers/project) based on your team's design decisions. The Reference Architecture is a collection of best practices for building a secure, scalable, and highly available infrastructure on AWS. The Reference Architecture is a living document that is constantly evolving as we learn from our customers and the community.
We will deploy your infrastructure in _layers_. These layers are designed to manage collections of deliverables and will be a mix of generated content from a private reference, vendored Terraform from open-source libraries, and any customization for your Organization. Because we are delivering an entire infrastructure repository, these initial PRs will be massive; a complete infrastructure setup requires dozens of components, each with Terraform modules, configuration, account setup, and documentation. You are absolutely welcome to follow along, but we do not intend for your team to be required to review these massive PRs. Cloud Posse internally reviews these PRs extensively to ensure that the final product works as intended.
Once we're confident that we've deployed a given layer entirely, we then schedule the [Hand-Off Calls](#handoff-calls). A handoff call is intended to explain a given topic, provide the opportunity for your team to review and give feedback on any given layer, and answer other questions. Before each Hand-Off Call, please review the fundamentals of the layer by watching the videos; teams can feel overwhelmed on the call if they haven't watched them beforehand. These calls can be a lecture on the material for any given layer, a demo from Cloud Posse, or an opportunity to practice with hands-on labs.
If you come prepared for Hand-Off calls, we can skip the lecture and spend more time answering questions or working through hands-on labs.
### Shared Customer Workshop
> **When:** Thursdays, 7:00-7:30A PT/ 9:00-9:30A CT/ 10:00-10:30A ET
> **Where:** Zoom
> **Who:** [Essential Support Customers Only](/support/essential)
> **When:** Wednesdays, 2:30-3:00P PT/ 4:30-5:00P CT/ 5:30-6:00P ET
> **Where:** Zoom
> **Who:** [Essential Support Customers Only](/support/essential)
This is a great opportunity to get your questions answered and to get help with your project.
### Community Office Hours (FREE)
> **When:** Wednesdays, 11:30a-12:30p PT/ 1:30p-2:30p CT/ 2:30p-3:30p ET
> **Where:** Zoom
> **Who:** Anyone
This is a good way to keep up with the latest developments and trends in the DevOps community. Office Hours are less focused on technical questions than [Customer Workshops](/support/essential), but you can ask anything you like.
Sign up at [cloudposse.com/office-hours](https://cloudposse.com/office-hours/)
### SweetOps Slack
If you are looking for a community of like-minded DevOps practitioners, please join the [SweetOps Slack](https://slack.sweetops.com/).
### Review Design Decisions
- [ ] [Decide on Terraform Version](/layers/project/design-decisions/decide-on-terraform-version)
- [ ] [Decide on Namespace Abbreviation](/layers/project/design-decisions/decide-on-namespace-abbreviation)
- [ ] [Decide on Infrastructure Repository Name](/layers/project/design-decisions/decide-on-infrastructure-repository-name)
- [ ] [Decide on Email Address Format for AWS Accounts](/layers/accounts/design-decisions/decide-on-email-address-format-for-aws-accounts)
- [ ] [Decide on IdP](/layers/identity/design-decisions/decide-on-idp)
- [ ] [Decide on IdP Integration Method](/layers/identity/design-decisions/decide-on-idp-integration)
- [ ] [Decide on Primary AWS Region and Secondary AWS Region](/layers/network/design-decisions/decide-on-primary-aws-region)
- [ ] [Decide on CIDR Allocation Strategy](/layers/network/design-decisions/decide-on-cidr-allocation)
- [ ] [Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain)
- [ ] [Decide on Vanity Domain](/layers/network/design-decisions/decide-on-vanity-branded-domain)
- [ ] [Decide on Release Engineering Strategy](/layers/software-delivery/design-decisions/decide-on-release-engineering-strategy)
These are the design decisions you can customize as part of the Jumpstart package. [All other decisions are pre-made](/tags/design-decision/) for you, but you're welcome to review them. If you'd like to make additional changes, [let us know—we're happy to provide a quote](https://cloudposse.com/meet).
## Review Handoff Calls
Generally, expect to schedule the following Handoff calls. These are subject to change and should be adaptable to fit your individual engagement.
- [Kick Off](/jumpstart/kickoff)
- [Introduction to Toolset](/layers/project)
- [Identity and Authentication](/layers/identity)
- [Component Development](/learn/component-development)
- [Account Management](/layers/accounts/)
- [Network and DNS](/layers/network/)
- [Automate Terraform](/layers/atmos-pro)
- [ECS](/layers/ecs) or [EKS](/layers/eks)
- [Monitoring](/layers/monitoring)
- [Release Engineering](/layers/software-delivery)
- Final Call (Sign-off)
## How to Succeed
Cloud Posse has noticed several patterns that lead to successful projects.
### Come Prepared
Review six-pagers and documentation before Hand-Off calls. This will help you know which questions to ask. Coming unprepared leads to a lot of questions and back-and-forth, which slows down the pace and leaves less time for new material.
### Take Initiative
The most successful customers take initiative to make customizations to their Reference Architecture. This is a great way to make the Reference Architecture your own. It also helps to build a deeper understanding of the Reference Architecture and how it works.
### Cameras On
We recommend that all participants have their cameras on. This helps to build trust and rapport. It also helps to keep everyone engaged and focused. This also lets us gauge how everyone is understanding the material. If you are having trouble understanding something, please ask questions.
### Ask Questions
We encourage you to ask questions. We want to make sure that everyone understands the material. We also want to make sure that we are providing the right level of detail. Our meetings are intended to be interactive and encourage conversation. Please feel free to interject at any time if you have a question or a comment to add to the discussion.
## Get Support
The Jumpstart accelerator does not include support. To get help, we offer multiple à la carte [support options](/support) to fit your needs and budget.
### Slack
If you need help scheduling meetings, please post in your team's Cloud Posse channel (e.g., `#acme-general`). This keeps discussions open and prevents duplicated or siloed information in direct messages (DMs). In general, please avoid DMs, as they make it harder to escalate requests or follow up. Feel free to @ a team member if you need assistance—we're here to help!
You can also reach out to our community with our [SweetOps Slack community](#sweetops-slack).
### Community Office Hours
[Community Office Hours](https://cloudposse.com/office-hours) are great opportunities to ask general questions and get help. If you need more technical help, please consider one of our excellent [support options](/support).
### Documentation
You can always find how-to guides, design decisions, and other helpful pages at [docs.cloudposse.com](/)
---
## Onboarding
import TaskList from '@site/src/components/TaskList';
After ensuring you’ve satisfied all the prerequisites, we recommend doing the following
- [ ] Join the Shared “Slack connect” channels with our team. These are usually named something like `#cloudposse-` on
the customer’s side.
- [ ] Ensure you’ve been added to any calls with the Cloud Posse team. Depending on the type of engagement, you may have
weekly cadence calls (Enterprise only). Cloud Posse PMs can help add
members to the calendar event.
- [ ] Ensure you’ve been added to the Cloud Posse Linear Project for your company’s engagement with our team.
Cloud Posse PMs can help add members. **Only applicable for Enterprise engagements**.
- [ ] Learn more about Cloud Posse because as a [DevOps Accelerator](https://cloudposse.com) we are very different from
typical professional services companies.
- [ ] Review the different ways you can get [Support](/support) to fit your needs and budget.
---
## How to Provision Shared Slack Channels
import Note from '@site/src/components/Note'
import Intro from '@site/src/components/Intro'
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
## Problem
Collaborating effectively between teams requires rapid back-and-forth communication. Email threads grow unwieldy, and adding members to threads so they get all the context is difficult. Email as a medium is also generally more susceptible to phishing attacks. Email is suited to long-form communication, not the quick exchanges we're accustomed to when we need fast answers.
## Solution
Slack supports sharing channels between Slack teams (organizations) using a feature called _Slack Connect_. Each organization manages its own side of the shared channel, including what to call it and which policies to enforce.
## Slack Connect Documentation
The official Slack Connect documentation: [https://slack.com/help/articles/115004151203-A-guide-to-Slack-Connect](https://slack.com/help/articles/115004151203-A-guide-to-Slack-Connect)
## Step by Step Instructions
1. Create a new Slack Channel as you would normally
2. Give it a name, usually that of the partner, customer, or vendor, followed by `-general`, since more channels may be shared down the road. After clicking the “share outside ...” checkbox, the title of the dialog changes to “Slack Connect”, which is what Slack calls shared channels.
Each organization can rename the channel on its own end without affecting what the other organizations call it, so you should name the channel following your organization’s conventions.
3. Proceed by clicking Next. Then you’re prompted to share the channel. There are (2) ways to do this: either by clicking the “copy share link” option or by entering in the email address of _anyone_ from the other organization. The other organization’s admins will then be prompted to accept the invitation.
You’ll then see a notice from the Slackbot.
4. After the invitation is accepted by the other organization, the inviting organization may need to re-confirm the connect request. This may appear as a notice from the `Slackbot` to the slack admins.
5. Once the connection is established, any members that exist in the channel can chat publicly or via direct message (DMs).
## Managing Slack Connections
You can accept/decline slack connections from the "Slack Connect" sidebar menu.
---
## Offboarding Cloud Posse
### Problem
- Your company is ready to take over all operations and needs to restrict Cloud Posse’s access to mission-critical environments for regulatory compliance (e.g. for HIPAA compliance).
- Your engagement with Cloud Posse is coming to an end and you need to entirely shut down Cloud Posse’s access to all environments, or you are pausing the engagement.
- Cloud Posse has access to multiple systems and you may want to restrict access accordingly.
## Solution
### AWS
#### **Option 1:** Restrict Federated IAM Access to Some Accounts
#### **Option 2:** Restrict Federated IAM Access to a Single Account
#### **Option 3:** Disable Federated IAM Access to All Accounts
:::info
After disabling all Federated IAM access, you have the option to issue Cloud Posse team members SSO access via your own IdP.
:::
#### **Option 4**: Issue IAM User Accounts
:::caution
We strongly discourage this approach as it’s generally an anti-pattern to bypass SSO and introduces new requirements for offboarding team members.
:::
### Customer managed IdP
For things like Okta, Workspaces, or Azure AD:
- Remove Cloud Posse team members from your IdP.
- Remove any test accounts that were used for evaluating teams/groups.
### GitHub
Typically customers provision a “Cloud Posse” team within their GitHub org.
#### Offboarding Github Access
- Option 1: Revoke All
Revoking this team’s access from repositories should be sufficient to remove all of our access. Also, ensure that any repositories do not have Cloud Posse usernames directly added as external contributors. This happens if repositories were created by our team in your organization.
- Option 2: Downgrade Access
Changing our team’s access to read-only will enable us to still participate in Code Reviews.
#### Offboarding GitHub Ownership
The `CODEOWNERS` file should be checked to make sure that no Cloud Posse usernames or groups are listed. This file is typically located in the root of the repository.
### Spacelift
Depending on how Spacelift was configured, make sure the `LOGIN` policy does not include any Cloud Posse users.
Go to `https://.app.spacelift.io/policies`
Then remove our team’s access or any hardcoded usernames.
Also, make sure to sign out any logged in sessions, by going to `https://.app.spacelift.io/settings/sessions`
### Slack
:::tip Leave Channels Open
We recommend keeping open channels of communication between our teams. That way we are able to help you out in a pinch.
:::
All customer channels are managed via Slack Connect. Some channels may be owned by your team, others by ours. If you want to close the connection, ask your Slack administrator to remove our organization from the Slack Connect channel.
See:
- [Removing Orgs from Slack Connect](https://slack.com/help/articles/360026489273-Remove-organizations-from-a-Slack-Connect-channel-)
- [Removing external members from channels](https://slack.com/help/articles/5682545991443-Slack-Connect--Manage-external-people-#remove-external-people)
### Datadog
Offboard any `@cloudposse.com` email addresses.
### Customer Jira & Confluence
Some customers have added our team directly to their Atlassian products. Make sure to offboard any `@cloudposse.com` email addresses.
### 1password
1Password vaults may be shared between our teams. Sometimes customers add Cloud Posse to their vaults, other times customers were added to vaults controlled by Cloud Posse.
At the end of an engagement, we recommend that you stop sharing vaults.
#### Customer Managed Vaults
If your company controls the vault, simply remove Cloud Posse's team access from the vault. We recommend rotating all credentials both as an exercise and as an extra precaution.
#### Cloud Posse Managed Vaults
When vaults are controlled by Cloud Posse, we require the customer to take over ownership by creating their own vault, and manually copying over the secrets.
- Create a new vault for your team
- Recreate all the credentials in the new vault. **We recommend rotating credentials.**
- Share the new vault with your team
- Request that Cloud Posse destroy its vault
---
## Tutorials
import Intro from '@site/src/components/Intro';
import DocCardList from '@theme/DocCardList';
These tutorials apply specifically to the Jumpstart engagements and provide guides typically related to onboarding and setting up systems that we will need to conduct this engagement.
---
## Account Management
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ReactPlayer from "react-player";
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import SecondaryCTA from '@site/src/components/SecondaryCTA';
import TaskList from '@site/src/components/TaskList';
import CategoryList from '@site/src/components/CategoryList';
This chapter presents how Cloud Posse designs and manages AWS Account architectures. We will explain how Cloud Posse provisions and manages AWS Accounts using Atmos and Terraform, the reasoning behind our decisions, and how this architecture will better align your organization with the [AWS Well-Architected Framework](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/userguide/wellarchitected-ug.pdf).
- Why we leverage multiple AWS accounts within an AWS Organization
- How we organize accounts into organizational units (OUs) to manage access and apply Service Control Policies (SCPs) to provide guard rails
- The set of components we use to provision, configure, and manage AWS accounts, including account-level settings, service control policies, and Terraform state backends, using native Terraform with Atmos
## The Problem
The [AWS Well-Architected Framework](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/userguide/wellarchitected-ug.pdf) defines AWS architectural best practices and presents a set of foundational questions to enable you to understand how a specific architecture aligns with cloud best practices.
The AWS Well-Architected Framework provides several foundational recommendations, one of which is to distribute workloads across multiple AWS accounts. However, the framework does not prescribe how this should be achieved. AWS offers resources such as Control Tower or Account Factory for provisioning accounts, but these resources have some limitations. The primary issue is that they cannot be managed sufficiently with Terraform, which means manual effort is required to use them.
## Our Solution
Cloud Posse has developed a set of components to provision, configure, and manage AWS Accounts and Organizations.
{/* TODO: Update diagram to remove core-identity account - identity is now managed in core-root */}
### Using an Organization
Leveraging multiple AWS accounts within an AWS Organization is the only way to satisfy these requirements. Guard rails
can be created to restrict what can happen in an account and by whom.
We then further organize the flat account structure into organizational units. Organizational units (OUs) can then
leverage things like Service Control Policies to restrict what can happen inside the accounts.
- `core` (OU): Responsible for management accounts, such as the organizational root account or a network hub. These accounts are singletons and will never need to be duplicated.
- `plat` (OU): Responsible for platform accounts, such as sandbox, dev, staging, and prod. These accounts are dynamic and can be specific to the needs of your organization.
### Account Boundaries
Constructs like VPCs only provide network-level isolation, but not IAM-level isolation. And within a single AWS account,
there’s no practical way to manage IAM-level boundaries between multiple stages like dev/staging/prod. For example, to
provision most Terraform modules, “administrative” level access is required because provisioning any IAM roles requires
admin privileges. That would mean that a developer needs to be an “admin” in order to iterate on a module.
Multiple AWS accounts should be used to provide a higher degree of isolation by segmenting/isolating workloads. There is
no additional cost for operating multiple AWS accounts, though it does add management overhead, since a standard set of
components is needed to manage each account. AWS Support only applies to one account, so it may need to be purchased for
each account unless the organization upgrades to Enterprise Support.
Multiple AWS accounts are all managed underneath an AWS Organization and organized into multiple organizational units
(OUs). Service Control Policies can restrict what runs in an account and place boundaries around an account that even
account-level administrators cannot bypass.
### Account Architecture
By convention, we prefix each account name with its organizational unit (OU) to distinguish it from other accounts of the same type. For example, if we have an OU called `plat` (short for platform) and an account called "production" (or `prod` for short), we would name the account `plat-prod`. In practice, there might be multiple production accounts, such as in a `data` OU, a `network` OU, and a `plat` OU. By prefixing each account with its OU, it is sufficiently disambiguated and follows a consistent convention.
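The convention above can be stated as a trivial rule: join the OU to the stage with a hyphen. A minimal sketch (the OU and stage values below are the examples from this chapter):

```shell
# Compose the conventional account name: <ou>-<stage>
account_name() {
  printf '%s-%s\n' "$1" "$2"
}

account_name plat prod   # plat-prod
account_name core dns    # core-dns
```

The same rule gives `data-prod` for a production account in a `data` OU, keeping every account name unambiguous across OUs.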
- `core-root`: The "root" (parent, billing) account creates all child accounts. The root account has special capabilities not found in any other account:
  - An administrator in the root account by default has the `OrganizationAccountAccessRole` to all other accounts (admin access)
  - Organizational CloudTrails can only be provisioned in this account
  - It’s the only account that can have member accounts associated with it
  - Service Control Policies can only be set in this account
  - It's the only account that can manage the AWS Organization
  - As the organization owner, this is where IAM Identity Center is deployed for centralized identity management
- `core-audit`: The "audit" account is where all logs end up.
- `core-security`: The "security" account is where to run automated security scanning software that might operate in a read-only fashion against the audit account.
- `core-network`: The "network" account is where the transit gateway and all inter-account routing are managed.
- `core-dns`: The "dns" account is the owner of all DNS zones. A legal team may have a role with `Route53Registrar.*` permissions that cannot touch zones or anything else. Includes billing.
- `core-auto`: The "automation" account is where any GitOps automation will live. Some automation (like Spacelift) has "god" mode in this account. The auto account will typically have transit gateway access to all other accounts, so we want to limit what is deployed in the automation account to only those services which need it.
- `core-artifacts`: The "artifacts" account is where we recommend centralizing and storing artifacts (e.g. ECR, assets, etc.) for CI/CD.
- `plat-prod`: The "production" account is where you run your most mission-critical applications.
- `plat-staging`: The "staging" account is where QA and integration tests will run for public consumption. This is production for QA engineers and partners doing integration tests. It must be stable for third parties to test. It runs a Kubernetes cluster.
- `plat-dev`: The "dev" account is where to run automated tests and load-test infrastructure code. This is where the entire engineering organization operates daily. It needs to be stable for developers; this environment is production for developers writing code.
- `plat-sandbox`: The "sandbox" account is where you let your developers have fun and break things. Developers get admin. This is where changes happen first, and it will be used by developers who need the bleeding edge. Only DevOps work here, or developers trying to get net-new applications added to tools like slice.
### Terraform State
We need someplace to store the terraform state. Multiple options exist (e.g. Vault, Terraform Enterprise, GitLab,
Spacelift), but the only one we’ll focus on right now is using S3. The terraform state may contain secrets, which is
unavoidable for certain kinds of resources (e.g. master credentials for RDS clusters). For this reason, it is advisable
for companies with security and compliance requirements to segment their state backends to make it easier to control
with IAM who has access to what.
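As an illustration of that kind of segmentation, an IAM policy statement could scope a team to its own state bucket. This is only a sketch; the bucket name is hypothetical:

```json
{
  "Sid": "PlatDevStateOnly",
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
  "Resource": [
    "arn:aws:s3:::acme-plat-dev-tfstate",
    "arn:aws:s3:::acme-plat-dev-tfstate/*"
  ]
}
```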
While adding multiple state backends is good from a security perspective, it also unnecessarily complicates the
architecture for companies that do not need the added layer of security.
We will use a single S3 bucket, as it is the least complicated to maintain. Anyone who should be able to run terraform
locally will need read/write access to this state bucket.
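For context, a backend configuration against that single S3 bucket looks something like the following. The bucket, key, and table names here are illustrative, not the exact values the `tfstate-backend` component produces:

```hcl
terraform {
  backend "s3" {
    bucket         = "acme-core-gbl-root-tfstate"      # hypothetical bucket name
    key            = "account/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                              # state may contain secrets
    dynamodb_table = "acme-core-gbl-root-tfstate-lock" # state locking
  }
}
```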
### Components
Cloud Posse manages this process with the following components.
- `account`: This component is responsible for provisioning the full account hierarchy along with Organizational Units (OUs). It includes the ability to associate Service Control Policies (SCPs) with the Organization, each Organizational Unit, and each account.
- `account-settings`: This component is responsible for provisioning account-level settings: IAM password policy, AWS Account Alias, EBS encryption, and Service Quotas. We can also leverage this component to enable account- or organization-level budgets.
- `tfstate-backend`: Provisions the Terraform state backends. This component already follows all standard best practices around private ACLs, encryption, versioning, locking, etc.
- `cloudtrail`: This component is responsible for provisioning CloudTrail auditing in an individual account. It's expected to be used alongside the `cloudtrail-bucket` component, as it utilizes that bucket via remote state.
- `cloudtrail-bucket`: This component is responsible for provisioning a bucket for storing CloudTrail logs for auditing purposes.
## Design Decisions
[Review Design Decisions](/layers/project/design-decisions) and record your decisions now. You will need the results of these decisions going forward.
Next, we'll prepare the organization to provision the Terraform State backend, followed by account provisioning.
If you're curious about the thought that went into this process, please review the design decisions documentation.
Next Step: Review Design Decisions
## References
- [Decide on AWS Organization Strategy](/layers/accounts/design-decisions/decide-on-aws-organization-strategy/)
- [Decide on AWS Account Flavors and Organizational Units](/layers/accounts/design-decisions/decide-on-aws-account-flavors-and-organizational-units/)
- [Decide on AWS Support](/layers/accounts/design-decisions/decide-on-aws-support/)
- [Decide on Email Address Format for AWS Accounts](/layers/accounts/design-decisions/decide-on-email-address-format-for-aws-accounts/)
- [Structure of Terraform S3 State Backend Bucket](/layers/accounts/tutorials/terraform-s3-state/)
---
## Deploying AWS Accounts
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Note from '@site/src/components/Note';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import TaskList from '@site/src/components/TaskList';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
This guide walks through deploying AWS accounts using `atmos` workflows. Before starting, ensure you have completed the [Prepare AWS Organization](/layers/accounts/prepare-aws-organization/) guide, which creates the AWS Organization, enables AWS RAM sharing, and raises account limits.
| Steps | Actions |
| ---------------------------- | --------------------------------------------------------------------------- |
| Validate prerequisites | Review account configuration |
| Import AWS Organization | `atmos terraform import account -s core-gbl-root ...` |
| Deploy accounts | `atmos workflow deploy/accounts -f quickstart/foundation/accounts` |
| Deploy accounts settings | `atmos workflow deploy/account-settings -f quickstart/foundation/accounts` |
| Finalize account setup | Click Ops (optional) |
## Validate Prerequisites for Account Deployment
Before deploying accounts, verify that you have completed all prerequisites from the [Prepare AWS Organization](/layers/accounts/prepare-aws-organization/) guide:
- AWS Organization has been created via ClickOps
- AWS RAM sharing with AWS Organization is enabled
- Account quota increase has been requested (and ideally approved)
- Terraform state backend has been initialized
Next, review the "account" configuration in the stack catalog. **This is the hardest part to change/fix once the accounts are provisioned**. If you aren't confident about the email configuration, account names, or anything else, now is the time to make changes or ask for help.
You should double-check the following:
- Check that `stacks/catalog/account.yaml` has the values you expect, especially account email format
- Run `atmos describe component account -s core-gbl-root` to inspect the final component configuration (e.g. _after_ all the mixins have been imported)
- Plan the run with `atmos terraform plan account -s core-gbl-root`
## Import the AWS Organization
The AWS Organization was created manually as part of the [Prepare AWS Organization](/layers/accounts/prepare-aws-organization/) guide. Now we need to import this existing organization into Terraform so it can be managed as infrastructure-as-code.
Import the existing AWS Organization into Terraform state using the following command. Replace `ORG_ID` with your AWS Organization ID (e.g., `o-abc123def4`):
```bash
atmos terraform import account -s core-gbl-root "aws_organizations_organization.this[0]" ORG_ID
```
:::info Finding Your Organization ID
You can find your Organization ID in the AWS Console under **AWS Organizations** → **Settings**, or by running:
```bash
aws organizations describe-organization --query 'Organization.Id' --output text
```
:::
This command runs `terraform import` to bring the existing AWS Organization under Terraform management. After this step, all organization-level changes will be managed through Atmos and Terraform.
:::tip Verify Import
After the import completes, verify the organization was imported successfully:
```bash
atmos terraform plan account -s core-gbl-root
```
The plan should show no changes for the organization resource, indicating it was imported correctly.
:::
## Deploy Accounts
Again review the "account" configuration in `stacks/catalog/account.yaml`. In particular, check the email address and account names. In the next step, we will create and configure all accounts in the AWS Organization using the configuration in that stack file.
Once confident, begin the accounts deployment:
This workflow creates all AWS member accounts in the AWS Organization using the configuration in your stack files.
## Update Account ID Placeholders
Now that accounts are created, you have real account IDs to work with. The reference architecture contains placeholder account IDs that need to be replaced with your actual values.
To get your account IDs, run:
```bash
atmos terraform output account -s core-gbl-root
## or if on the latest version with instanced components:
atmos terraform output aws-account/core-artifacts -s core-gbl-root
```
**Update the Static Account Map**
Update the static account map in your organization's defaults file (`stacks/orgs/acme/_defaults.yaml`). This configuration provides account ID lookups for components that need them:
```yaml
vars:
# Static account-map variable to replace the account-map component
# This provides account ID lookups for components that need them (e.g., cloudtrail)
# Set to false since we're using static mapping instead of the account-map component
account_map_enabled: false
account_map:
# Name of AWS partition
aws_partition: aws
# Name of the root account (used for organization management)
root_account_account_name: core-root
# Name of the audit account (used by components like cloudtrail)
audit_account_account_name: core-audit
# Identity account name (used by components like ecr)
identity_account_account_name: core-root
# Map of all account names (tenant-stage format) to their account IDs
# TODO: Automate population of this map (e.g., from account component outputs)
full_account_map:
core-artifacts: "__ARTIFACTS_ACCOUNT_NUMBER__"
core-audit: "__AUDIT_ACCOUNT_NUMBER__"
core-auto: "__AUTO_ACCOUNT_NUMBER__"
core-dns: "__DNS_ACCOUNT_NUMBER__"
core-network: "__NETWORK_ACCOUNT_NUMBER__"
core-root: "__ROOT_ACCOUNT_NUMBER__"
core-security: "__SECURITY_ACCOUNT_NUMBER__"
plat-dev: "__DEV_ACCOUNT_NUMBER__"
plat-prod: "__PROD_ACCOUNT_NUMBER__"
plat-sandbox: "__SANDBOX_ACCOUNT_NUMBER__"
plat-staging: "__STAGING_ACCOUNT_NUMBER__"
```
Replace each placeholder (e.g., `__ROOT_ACCOUNT_NUMBER__`) with the actual 12-digit AWS account ID from the output above.
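One way to gather these IDs without copying them by hand is to list the member accounts with the AWS CLI and reshape the output into the map entries above. This is only a sketch: the `aws organizations list-accounts` call assumes credentials with Organizations read access, and the sample data below stands in for its output.

```shell
# Real command (requires Organizations read access; shown as a comment only):
#   aws organizations list-accounts \
#     --query 'Accounts[].[Name,Id]' --output text
#
# Reshape "<name><TAB><id>" lines into full_account_map YAML entries.
to_map_entries() {
  awk -F'\t' '{ printf "    %s: \"%s\"\n", $1, $2 }'
}

# Sample data standing in for the AWS CLI output above:
printf 'core-audit\t111111111111\nplat-prod\t222222222222\n' | to_map_entries
# prints:
#     core-audit: "111111111111"
#     plat-prod: "222222222222"
```

Note that this only lines up if the account display names in AWS already follow the tenant-stage convention; otherwise you'll need to map the names manually.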
:::caution Root Account Naming Convention
The `root_account_account_name` variable should always be set to `core-root` in your stack configuration, even if your actual AWS account has a different display name. This value is used internally by components for account lookups and must match the key in `full_account_map`.
To verify which account is your organization's management (root) account:
1. Navigate to [AWS Organizations → AWS accounts](https://console.aws.amazon.com/organizations/v2/home/accounts)
1. Look for the account labeled "Management account"
1. Use this account's ID for the `core-root` entry in `full_account_map`
:::
As you continue through the setup process, keep an eye out for other placeholder values in your stack configurations and replace them with actual values as needed.
:::note Stopgap: Deploy the Identity Layer Before Continuing
Before proceeding with the remaining account steps, you need to deploy the Identity layer. The Identity layer provisions permission sets with AWS Identity Center that allow you to access each member account, which is required for deploying account settings, CloudTrail, and ECR. We're working on improving this documentation flow and the SuperAdmin profile, but for now, the Identity layer must be deployed at this point.
:::
Deploy the Identity layer to provision permission sets for accessing each member account. Return here to finish account settings, CloudTrail, and ECR after the Identity layer is deployed.
Deploy Identity Layer
## Deploy Accounts Settings
Once you've created the accounts, you'll need to provision the baseline configuration within the accounts themselves. Run the following:
The workflows will kick off several sequential Terraform runs to provision account settings for all member accounts in
the Organization.
## Unsubscribe from Marketing Emails (Optional)
For each new account, unsubscribe the account's email address from AWS marketing emails:
1. Go to [AWS Marketing Preferences](https://pages.awscloud.com/communication-preferences.html)
1. Click "Unsubscribe from Email"
1. Enter the account's email address
1. Check "Unsubscribe from all AWS marketing emails"
:::tip Root User Credentials
With [centralized root access](/layers/identity/centralized-root-access/) enabled, member accounts do not require individual root credentials. If you need per-account root credentials, see [Create Account Root Users](/layers/accounts/tutorials/create-account-root-users/).
:::
Now that all accounts are deployed and configured, you're ready to set up CloudTrail for audit logging across your organization.
Next Step: Set up CloudTrail
---
## Decide on AWS Account Flavors and Organizational Units
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
When setting up your AWS infrastructure, you need to decide how to organize
your workloads across multiple AWS accounts to ensure optimal isolation and
management. This involves deciding the appropriate account structure and
organizational units (OUs) that align with your operational needs and security
requirements.
## Context and Problem Statement
The [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/userguide/wellarchitected-ug.pdf) recommends splitting workloads across multiple AWS accounts.
When moving to an infrastructure-as-code (IaC) model of infrastructure provisioning, many of the same best practices
that apply to regular software development should apply to IaC. Part of that is not making changes to a production
environment that hasn't been tested in a staging environment. If the production and staging environments are in the same account, then there are insufficient assurances/guarantees/protections in place to prevent breaking production.
Constructs like VPCs only provide network-level isolation, but not IAM-level isolation. And within a single AWS account, there’s no practical way to manage IAM-level boundaries between multiple stages like dev/staging/prod. For example, to provision most terraform modules, “administrative” level access is required because provisioning any IAM roles requires admin privileges. That would mean that a developer needs to be an “admin” in order to iterate on a module.
Leveraging multiple AWS accounts within an AWS Organization is the _only way_ to satisfy these requirements. Guardrails can be put in place to restrict what can happen in an account and by whom.
We must decide how to organize the flat account structure into organizational units. Organizational units can then
leverage things like Service Control Policies to restrict what can happen inside the accounts.
Multiple AWS accounts should be used to provide a higher degree of isolation by segmenting/isolating workloads. There is no additional cost for operating multiple AWS accounts, though it does add management overhead, since a standard set of components is needed to manage each account. AWS Support only applies to one account, so it may need to be purchased for each account unless the organization upgrades to Enterprise Support.
Multiple AWS accounts are all managed underneath an AWS Organization and organized into multiple organizational units
(OUs). Service Control Policies can restrict what runs in an account and place boundaries around an account that even
account-level administrators cannot bypass.
## Considered Options
### AWS Well-Architected Account Designations
Here are some common account designations. Not all are required.
:::tip
This is our recommended approach.
:::
:::note
It is advised to keep the names of accounts as short as possible because of resources with low max character limits
[AWS Resources Limitations](/resources/legacy/aws-feature-requests-and-limitations)
:::
- `core-root`: The "root" (parent, billing) account creates all child accounts. The root account has special capabilities not found in any other account:
  - An administrator in the root account by default has the `OrganizationAccountAccessRole` to all other accounts (admin access)
  - Organizational CloudTrails can only be provisioned in this account
  - It’s the only account that can have member accounts associated with it
  - Service Control Policies can only be set in this account
  - It’s the only account that can manage the AWS Organization
- `plat-prod`: The "production" account is where you run your most mission-critical applications.
- `plat-staging`: The "staging" account is where QA and integration tests will run for public consumption. This is production for QA engineers and partners doing integration tests. It must be stable for third parties to test. It runs a Kubernetes cluster.
- `plat-sandbox`: The "sandbox" account is where you let your developers have fun and break things. Developers get admin. This is where changes happen first, and it will be used by developers who need the bleeding edge. Only DevOps work here, or developers trying to get net-new applications added to tools like slice.
- `plat-dev`: The "dev" account is where to run automated tests and load-test infrastructure code. This is where the entire engineering organization operates daily. It needs to be stable for developers; this environment is production for developers writing code.
- `plat-uat`, `plat-qa`, etc.: Additional or alternative platform accounts.
- `core-audit`: The "audit" account is where all logs end up.
- `core-corp`: The "corp" account is where you run the shared platform services for the company. Google calls it “corp”.
- `core-security`: The "security" account is where to run automated security scanning software that might operate in a read-only fashion against the audit account.
- `core-identity`: The "identity" account is where to add users and delegate access to the other accounts, and is where users log in.
- `core-network`: The "network" account is where the transit gateway and all inter-account routing are managed.
- `core-dns`: The "dns" account is the owner of all DNS zones. A legal team may have a role with `Route53Registrar.*` permissions that cannot touch zones or anything else. Includes billing. Example use-case: the legal team needs to manage DNS, and it’s easier to give them access to an account specific to DNS than to multiple sets of resources.
- `core-automation`: The "automation" account is where any GitOps automation will live. Some automation (like Spacelift) has "god" mode in this account. The automation account will typically have transit gateway access to all other accounts, therefore we want to limit what is deployed in it to only those services which need it.
- `core-artifacts`: The "artifacts" account is where we recommend centralizing and storing artifacts (e.g. ECR, assets, etc.) for CI/CD.
- `core-public`: For public S3 buckets, public ECRs, public AMIs, anything public. This will be the only account that doesn’t have an SCP that blocks public S3 buckets. Use-case: all S3 buckets are private by default using an SCP in every account except the public account.
- `data-prod`, `data-staging`, `data-dev`: The "data" accounts are where the quants live =) They run systems like Airflow, JupyterHub, batch processing, and Redshift.
- `$tenant`: The "$tenant" account is a symbolic account representing a dedicated account environment. Its architecture will likely resemble prod. This relates to this link.
### Multi-Account (Production, Staging, Dev)
:::caution
Not recommended because there’s not enough isolation.
:::
- Strict, enforceable boundaries between multiple environments (aka stages) at the IAM layer
- Ability to create a release process whereby we stage changes in one account before applying them to the next account
- Ability to grant developers administrative access to sandbox account (dev) so that they can develop/iterate on IAM
policies. These policies then are committed as code and submitted as part of a Pull Request, where they get code
reviewed.
- API limits are scoped to an account. A bug in staging can't take out production.
### Single-account Strategy (Production=Staging=Dev) - NOT RECOMMENDED
- Editing live IAM permissions in the mono account is the equivalent of "cowboy coding" in production; we don't do this
  with our software, so we should not do this with our infrastructure
- No strict separation between stages; copying and pasting infrastructure could accidentally lead to catastrophic
outcomes
- Very difficult to write/manage complex IAM policies (especially without a staging organization!)
- No way to grant someone IAM permissions to create/manage policies while also restricting access to other production
resources using IAM policies. This makes it very slow/tedious for developers to work on AWS and puts all the burden to
develop IAM policies on a select few individuals, which often leads to a bottleneck
- VPCs only provide network-level isolation. We need IAM level isolation.
- AWS API limits are at the account level. A bug in staging/dev can directly DoS production services.
## Related Components
- [account](/components/library/aws/account/)
## References
Here are some great videos for context
- Re:invent (2016) [https://www.youtube.com/watch?v=pqq39mZKQXU](https://www.youtube.com/watch?v=pqq39mZKQXU)
- Re:invent (2017) [https://www.youtube.com/watch?v=71fD8Oenwxc](https://www.youtube.com/watch?v=71fD8Oenwxc)
---
## Decide on AWS Organization Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
When establishing your AWS infrastructure strategy, you need to decide whether
to create a new AWS Organization or reuse an existing one. This decision
involves evaluating the limitations and capabilities of AWS Control Tower, the
special roles within the root account, and the ability to manage
organizational configurations. Cloud Posse recommends starting with a new
organization to ensure a clear separation between historical and next-gen
infrastructure, leveraging transit gateways or VPC peering for integration.
:::tip
Cloud Posse recommends starting with a **Net-New Organization**
:::
## Problem
- Only one AWS Control Tower can exist in an organization.
- AWS Control Tower only recently became manageable with Terraform, and full support is not available.
Depending on the Scope of Work, Cloud Posse is usually responsible for provisioning accounts with Terraform, which requires all the same access as Control Tower.
- Member accounts can only be provisioned from the top-level root “organization” account
The “root” account can assume the special `OrganizationAccountAccessRole` in each member account, which can be used to administer member accounts
- With only one root organization, a business has no way to model/test/experiment with organizational-level
  configuration, which is a massive liability when onboarding new staff engineers who are responsible for training
  others and managing the organization
## Solution
Here are some considerations for how we can work around the problems.
### Use Net-New Organization (Recommended)
This process involves
[How to Register Pristine AWS Root Account](/layers/accounts/tutorials/how-to-register-pristine-aws-root-account) which
will serve as a net-new AWS organization (e.g. top-level payer account). Use transit gateway or VPC peering between
heritage accounts and new accounts.
This process ensures that there’s a clear delineation between the historical infrastructure and next-gen. We’ll treat
the historical infrastructure as _tainted_, as in we do not know what was done with IaC; we’ll treat the next-gen
accounts as pristine, hermetic environments where all changes are performed using IaC.
:::info
Companies with an AWS Enterprise Agreement (EA) can arrange for consolidated invoicing by speaking with their AWS
Account Representative.
:::
:::caution
Reserved Instances and AWS Savings Plans cannot be shared across organizations.
:::
### Use Existing Organization
You will need to grant Cloud Posse administrative permissions in the root account in order to perform Terraform
automation of organizational infrastructure (e.g. accounts, SCPs, CloudTrail, etc.).
:::danger
Cloud Posse does not prefer to work with **Existing Organizations** due to the liability. Cloud Posse does not know your
organization as well as you do, and in order for us to manage an organization with Terraform we need to be
Organizational Admins.
:::
:::caution
Cloud Posse recommends using the **Model Organization** pattern if you wish to use an **Existing Organization**.
:::
### Use Model Organization
This pattern assumes we’ll provision a Net-New Organization that will be used for model purposes. It will not be used in
a real production setting, but instead will be used as part of the engagement to enable Cloud Posse to set up all
scaffolding.
This pattern builds on the **Net-New Organization** but anticipates the customer re-implementing the entire process in
their own existing organization. This process takes longer, but ensures your team gets the maximum onboarding experience
and validates the documentation end-to-end by running your team through the process.
:::caution
Cloud Posse cannot estimate the time it will take your team to follow the documentation and implement it in your
existing organization.
:::
:::caution
Reserved Instances and AWS Savings Plans cannot be shared across organizations.
:::
:::danger
We do not recommend this pattern if Release Engineering is in scope with your engagement with Cloud Posse.
:::
---
## Decide on AWS Support
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
When setting up AWS Support, you need to decide which accounts require paid
support plans. If you don’t have an AWS Enterprise Agreement, it’s recommended
to enable Business-level support in the root account to expedite requests and
manage organizational limits effectively.
AWS Support is always enabled on a per-account basis. With an AWS Enterprise Agreement, AWS support is already included
from a billing perspective for all accounts, but it still needs to be enabled on a per-account basis.
:::caution
Cross-account support is not provided by AWS. AWS Support will not address support questions that affect one account,
from another account’s support subscription.
See
[https://aws.amazon.com/premiumsupport/faqs/#Cross-account_support](https://aws.amazon.com/premiumsupport/faqs/#Cross-account_support)
:::
If you have no Enterprise Agreement, then at a minimum we recommend enabling Business-level support in the root account, which
should cost roughly $100/mo (since nothing else runs in the root account). This enables us to expedite requests so that
organizational limits may be raised (e.g. member accounts). Without paid support, requests may take several days and are
more likely to be denied.
For the latest pricing, go to [https://aws.amazon.com/premiumsupport/plans/](https://aws.amazon.com/premiumsupport/plans/)
## Sample Pricing
---
## Decide on Email Address Format for AWS Accounts
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import Note from "@site/src/components/Note";
When creating AWS accounts, you need to decide on the email address format.
Each AWS account requires a unique email address that cannot be reused across
multiple accounts. The chosen format should align with your organization’s
email management strategy and ensure proper delivery and handling of AWS
notifications.
Every AWS account needs a unique email address; an email address cannot be reused across multiple AWS accounts.
Note that we are referring to AWS accounts that contain resources, not individual user
accounts.
### Use Plus Addressing
We'll use `+` addressing for each account (e.g. `ops+prod@ourcompany.com`)
:::info
Office 365 has finally added support for
[plus addressing](https://docs.microsoft.com/en-us/exchange/recipients-in-exchange-online/plus-addressing-in-exchange-online).
:::
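To make the format concrete, a single base mailbox yields a unique address per account with plus addressing. The account names and the `ops@ourcompany.com` base address below are illustrative:

```shell
# Generate a unique plus-address for each AWS account from one base mailbox.
# Account names and the base address are illustrative assumptions.
base_local="ops"
domain="ourcompany.com"
for acct in root audit identity network dns automation artifacts public; do
  echo "${base_local}+${acct}@${domain}"
done
```

All of these addresses deliver to the single `ops@ourcompany.com` mailbox, yet each satisfies AWS's uniqueness requirement.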
### Use Slack Email Gateway
- Create email group/alias for AWS accounts e.g. `ops@ourcompany.com`
- Ideally set up to forward to a shared slack channel like `#aws-notifications`
Follow [this guide to set up slack forwarding](/layers/accounts/tutorials/how-to-set-up-aws-email-notifications/).
### Use Mailgun
Mailgun supports plus addressing and complex forwarding rules. It’s free for 5,000 emails.
### Use Google Group - Recommended
Google Groups are probably the most common solution we see. It
[works very well with plus addressing](https://support.google.com/a/users/answer/9308648?hl=en).
### Use AWS SES with Lambda Forwarder (catch-22)
Provisioning AWS SES is nice, but we need an email address even for the root account, so it doesn’t solve the cold-start
problem.
[https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder](https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder)
---
## Decide on MFA Solution for AWS Root Accounts
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
When setting up MFA for AWS root accounts, you need to decide on the most
suitable solution to ensure security and manageability. The two most common
options are TOTP (Time-Based One-Time Password) and U2F (Universal 2nd
Factor). Cloud Posse recommends using 1Password for Teams or 1Password for
Business to securely share TOTP tokens among stakeholders, ensuring both
accessibility and protection.
We need an MFA solution for protecting the master AWS accounts. The two most common options are TOTP (time-based
one-time passwords from an authenticator app) and U2F (hardware security keys).
:::tip
Cloud Posse recommends **1Password for Teams** or **1Password for Business**
:::
### 1Password for Teams, 1Password for Business (TOTP) - Recommended
TOTP tokens can be stored in a shared authenticator app like 1Password. This allows sharing of the secret amongst
designated stakeholders. Additionally, using MFA with 1Password (like Duo) protects access to 1Password.
For this reason, Cloud Posse recommends **1Password for Teams** or
[1Password for Business.](https://1password.com/teams/pricing/)
### Yubikey (U2F)
This is by far the most secure option, but it comes with a significant liability if you do not register at least two physical devices. A physical device can be lost, broken, or damaged, and in distributed teams where a key cannot easily be passed around, it is difficult to maintain continuity when team members are out of the office. Getting locked out of an AWS root account is not fun.
For these reasons, we do not recommend it from a practical security perspective.
### Slack Bots
One option is to hook up a Slack bot to a restricted channel. Using ChatOps, admins can request a token. The nice part
about this is there's a clear audit trail of who is logging in. Also, we recommend a buddy system where each time a code
is requested, a "Buddy" confirms this request to ensure it was merited. For this to be more secure, MFA must be enabled
on Slack.
### Authy
:::danger
Does not support shared TOTP credentials
:::
Authy is the original cloud-based authenticator solution. The downside is it doesn't support shared TOTP secrets, so a
shared login must instead be used. This is not recommended.
### LastPass
:::danger
Does not support shared TOTP credentials
:::
LastPass is **not** an option. It does not support shared TOTP secrets. Do not confuse this with the ability to
authenticate with **LastPass** using TOTP/Authenticator apps. That's not what we need here.
## Related Tasks
- [REFARCH-62 - Setup Root Account Root Credentials MFA](/layers/accounts/prepare-aws-organization/#set-up-mfa-on-root-account)
---
## Decide on Terraform State Backend Architecture
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
When organizing your Terraform state, you need to decide on the backend
architecture. Using S3, you can either opt for a single bucket, which
simplifies management but grants broad access, or multiple buckets, which
enhance security by segmenting access but add complexity. Consider your
company’s security and compliance needs when making this decision.
## Context and Problem Statement
We need someplace to store the terraform state. Multiple options exist (e.g. Vault, Terraform Enterprise, GitLab,
Spacelift), but the only one we’ll focus on right now is using S3. The terraform state may contain secrets, which is
unavoidable for certain kinds of resources (e.g. master credentials for RDS clusters). For this reason, it is advisable
for companies with security and compliance requirements to segment their state backends to make it easier to control
with IAM who has access to what.
While adding multiple state backends is good from a security perspective, it unnecessarily complicates the
architecture for companies that do not need the added layer of security.
## Considered Options
We’ll use the
[https://github.com/cloudposse/terraform-aws-tfstate-backend](https://github.com/cloudposse/terraform-aws-tfstate-backend)
module to provision the state backends. This module already follows all standard best practices around private ACLs,
encryption, versioning, locking, etc. Now we need to consider the options for how many buckets to manage.
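For reference, a minimal invocation of that module might look like the following. The naming inputs (`acme`, `prod`) and the version pin are assumptions; consult the module's README for the authoritative interface:

```hcl
# Hedged sketch: minimal invocation of cloudposse/terraform-aws-tfstate-backend.
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "~> 1.4" # assumption; pin to the release you have vetted

  namespace  = "acme"
  stage      = "prod"
  name       = "terraform"
  attributes = ["state"]

  # Have the module render the backend.tf consumed by `terraform init`
  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}
```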
This decision is reversible but very tedious to change down the road. Therefore, we recommend doing what suits the
long-term objectives of your company.
Anyone who should be able to run `terraform` locally will need read/write access to a state bucket.
### Option 1: Single Bucket (Recommended for Companies without Compliance Requirements)
:::tip
Our Recommendation is to use Option 1 because it’s the least complicated to maintain. Additionally, if you have a small
team, there won’t be a distinction between those who have or do not have access to the bucket.
:::
#### Pros
- Single bucket to manage and protect
#### Cons
- Anyone doing terraform will need access to all state and can modify that state
### Option 2: Hierarchical State Buckets
In this model, there will be one primary bucket that manages the state of all the other state buckets. But based on the
number of segments you need, there will be multiple buckets that maintain the state for all the resources therein.
As part of this decision, we’ll need to decide on what those segments are (e.g. `admin`, `restricted`, `unrestricted`,
`security`; or one state bucket per account) for your use-case.
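To make the trade-off concrete, here is a hypothetical Atmos backend override pointing one segment (`security`) at its own bucket. The bucket, table, account ID, and role ARN are all placeholders:

```yaml
# Hypothetical per-segment backend configuration in an Atmos stack.
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: acme-core-gbl-security-tfstate
      dynamodb_table: acme-core-gbl-security-tfstate-lock
      role_arn: arn:aws:iam::111111111111:role/acme-core-gbl-security-terraform
      region: us-east-1
      encrypt: true
```

Remote state lookups against this segment must then assume the matching role, which is the added bookkeeping the cons below refer to.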
#### Pros
- It’s easier to secure who can access a state bucket when there are more buckets
#### Cons
- With more buckets, there is more to oversee
- Remote state lookups need to be aware of which bucket, account and IAM role is required to access the bucket
---
## Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
These are some of the design decisions you should be aware of when
provisioning a new AWS organization.
---
## FAQ (Accounts)
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
Frequently asked questions about managing AWS accounts with Cloud Posse's reference architecture.
### Why not use Control Tower?
AWS Control Tower cannot be managed with Terraform. Depending on the Scope of Work, Cloud Posse is usually responsible
for provisioning accounts with terraform which requires all the same access as Control Tower.
### Why are there so many accounts?
Leveraging multiple AWS accounts within an AWS Organization is the only way to achieve IAM-level isolation. Each account
has a very specific purpose, so that all associated resources are isolated within that account.
### How can we set budgets?
Create budgets with the `account-settings` component. For more, see
[the `account-settings` component documentation](/components/library/aws/account-settings/)
:::info
Budgets created for the `root` account apply to the AWS Organization as a whole
:::
### How do you add or remove Service Control Policies?
Service Control Policies are managed with the `account` component variable, `service_control_policies_config_paths`. For
more, see [the `account` component documentation](/components/library/aws/account/)
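As an illustration, the variable takes a list of paths to policy configuration files. The stack snippet below is a hypothetical sketch (the catalog path is made up); the component documentation linked above is authoritative:

```yaml
# Hypothetical stack configuration attaching SCPs via the account component.
components:
  terraform:
    account:
      vars:
        service_control_policies_config_paths:
          # Illustrative path; point this at your actual policy catalog
          - "catalog/service-control-policies/deny-leaving-organization.yaml"
```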
:::caution
This component manages the state of all AWS accounts, so apply with extreme caution!
:::
### How can you create an Account?
[Follow the documentation for creating and setting up AWS Accounts](/layers/accounts/tutorials/how-to-create-and-setup-aws-accounts/)
### How do you delete an Account?
[Follow the documentation for deleting AWS Accounts](/layers/accounts/tutorials/how-to-delete-aws-accounts/)
### How can you create a Tenant?
[Follow the documentation for creating a new Organizational Unit](/layers/accounts/tutorials/how-to-add-a-new-organizational-unit/)
### How do I use mixins and imports with Atmos?
As infrastructure grows, we end up with hundreds or thousands of settings for components and stack configurations. If we
copy and paste these settings everywhere, it's error-prone and not DRY. What we really want to do is to define a sane
set of defaults and override those defaults when we need them to change.
We accomplish this with mixins. Mixins are imported into all stacks, and each follows a set of rules. We use the
`mixins/region` and `mixins/account` configurations to define common **variables** for all stacks. For example,
`mixins/region/us-east-1.yaml` will define the variable `region: us-east-1`.
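To illustrate, a region mixin and a stack that imports it might look like the following. The `environment` abbreviation and the stack paths are illustrative assumptions:

```yaml
# mixins/region/us-east-1.yaml
# Defines only common variables for every stack in this region.
vars:
  region: us-east-1
  environment: use1 # illustrative region abbreviation

# A stack file then imports the mixin (paths are illustrative):
# orgs/acme/core/network/us-east-1.yaml
# import:
#   - mixins/region/us-east-1
#   - mixins/account/core-network
```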
**Note.** Do not import components into the account or region mixins. These are imported multiple times to define common
variables, so any component imports would be duplicated and cause an Atmos error such as this:
```
Executing command:
/usr/bin/atmos terraform deploy account-settings -s core-gbl-artifacts
Found duplicate config for the component 'account-settings' for the stack 'core-gbl-artifacts' in the files: orgs/cch/core/artifacts/global-region/baseline, orgs/cch/core/artifacts/global-region/monitoring, orgs/cch/core/artifacts/global-region/identity.
Check that all context variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the files and not duplicated.
Check that all imports are valid.
exit status 1
```
### How do I access root credentials for member accounts?
With [centralized root access](/layers/identity/centralized-root-access/) enabled, you don't need to maintain individual root credentials for each member account. The management account can perform privileged root operations on any member account using the `RootAccess` permission set.
If you need per-account root credentials for compliance or recovery purposes, see [Create Account Root Users](/layers/accounts/tutorials/create-account-root-users/).
---
## Initializing the Terraform State S3 Backend
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
Follow these steps to configure and initialize the Terraform state backend using Atmos, ensuring proper setup of the infrastructure components and state management.
| Steps | Actions |
| ------------------------- | ----------------------------------------- |
| Configure Terraform state | `atmos workflow init/tfstate -f quickstart/foundation/accounts` |
## Setting up the Terraform State Backend
This is where we configure and run Atmos. Atmos is a workflow automation tool that we will use to call Terraform which
will provision all the accounts and resources you need to create and manage infrastructure. The Atmos configuration can
be found in the `atmos.yaml`.
If you're unfamiliar with atmos, you can read more about it [here](https://atmos.tools).
If you look at `components/terraform/`, you'll see a bunch of directories. These contain Terraform "root modules" that are provisioned with Atmos. At first they'll only have their vendor files, such as `components/terraform/tfstate-backend/component.yaml`.
## Vendor the Terraform State Backend component
Vendor the Terraform State Backend component by running the following command.
:::tip What is Vendoring?
Vendoring downloads the upstream component files from a central repository at a specified version. In this case, we are pulling the baseline components, which include all account components, the Terraform State component, and other necessary files for setting up the account foundation.
This step only downloads the files to your local project - it does not deploy or make any changes to your infrastructure.
[Read more about vendoring with Atmos](https://atmos.tools/core-concepts/vendor/)
:::
## Why Do We Use Wildcard Patterns with IAM?
The `tfstate-backend` component creates IAM roles with trust policies that control which principals can assume them.
Understanding how these policies work is important for security.
### The Character Limit Problem
IAM role trust policies have a **hard limit of 4096 characters** (after requesting a quota increase from the default
2048). For organizations with multiple accounts, listing every role and permission set by explicit ARN would easily
exceed this limit—even with the maximum quota.
Instead, the reference architecture uses wildcard ARN patterns like:
- `arn:aws:iam::*:role/acme-*-gbl-*-terraform` for Terraform execution roles
- `arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_Terraform*Access_*` for SSO permission sets
### The Two-Way Security Handshake
Using wildcards in trust policies is secure because access requires a **two-way handshake**:
1. **Trust Policy (this side):** The tfstate role's trust policy allows principals matching the pattern to attempt
assumption, but only if they're within your AWS Organization (`aws:PrincipalOrgID` condition).
2. **Principal's Policy (other side):** The principal (e.g., a Terraform role or SSO permission set) must also have
an IAM policy granting `sts:AssumeRole` on the specific tfstate role ARN.
A role matching the wildcard pattern cannot access Terraform state unless it also has explicit permission to assume
the tfstate role. This defense-in-depth approach maintains security while staying within IAM limits.
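Sketched as a trust policy, the first half of the handshake might look like this. The `acme` namespace and the organization ID are placeholders, and this is a simplified sketch rather than the component's exact policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOrgPrincipalsMatchingPattern",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/acme-*-gbl-*-terraform"
        },
        "StringEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}
```

Even with this policy in place, a matching principal still needs its own `sts:AssumeRole` grant on the tfstate role's ARN, which completes the second half of the handshake.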
### Requesting a Quota Increase (If Needed)
If you customize the trust policies and approach the 2048 character default limit, you can request an increase up to
the maximum of 4096 characters. Requests within this limit are auto-approved instantly:
```bash
atmos auth exec --identity core-root/terraform -- \
aws service-quotas request-service-quota-increase \
--service-code iam \
--quota-code L-C07B4B0D \
--desired-value 4096 \
--region us-east-1
```
:::note
This is only needed if you customize trust policies beyond the defaults. The reference architecture's wildcard
patterns fit comfortably within the default 2048 character limit.
:::
## Initialize the Terraform State Backend
Run the following command to initialize the Terraform State Backend. This workflow has two steps:
- Create the backend using a local Terraform state
- Once the backend bucket exists, migrate the state file into the newly created S3 bucket
## Migrate all workspaces to S3
When prompted, type `yes` to migrate all workspaces to S3.
```shell
Initializing the backend...
Do you want to migrate all workspaces to "s3"?
```
:::info Granting SuperAdmin Access to Terraform State
The IAM User for SuperAdmin will be granted access to Terraform State by principal ARN. This ARN is passed to the
`tfstate-backend` stack catalog under `allowed_principal_arns`. Verify that this ARN is correct now. You may need to
update the root account ID.
:::
## References
- Review the [Structure of Terraform S3 State Backend Bucket](/layers/accounts/tutorials/terraform-s3-state/)
Now that the Terraform state backend is initialized, you're ready to deploy all AWS accounts in your organization.
Deploy Accounts
---
## Preparing Your AWS Organization
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import TaskList from '@site/src/components/TaskList';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
The Cold Start involves more manual steps than other layers. Read through the following steps carefully.
:::tip Cold Start
The set up process for the "baseline" or "account" layer is commonly referred to as the Cold Start.
:::
:::info About Placeholder Values
The reference architecture includes placeholder values that you'll need to replace with your actual configuration. Common placeholders include:
- **Account IDs** like `111111111111`, `123456789012`, or `000000000000` — Replace with your actual AWS account IDs after creating accounts
- **Underscored values** like `_example_` or `__REPLACE_ME__` — These indicate values that require your input. Search for `_ACCOUNT_NUMBER__` to find items to replace like `__DEV_ACCOUNT_NUMBER__`
- **Example domains** like `example.com` or `acme.com` — Replace with your actual domain names
- **Sample ARNs** — Update with ARNs from your environment
You'll update these values at different points during setup. Each guide will call out when specific replacements are needed.
:::
## Before Running Terraform (ClickOps)
First, you'll need to perform some ClickOps to ensure things are set up before we use Terraform to manage AWS accounts.
From the root account:
1. ### Get Business Class Support
Enable [business support](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_support.html) in
the `root` account (in order to expedite requests to raise the AWS member account limits)
1. ### Set up MFA on Root Account
Set up the Virtual MFA device on the root account following [the AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root).
1. Navigate to the root account's security credentials
1. Set up a Virtual MFA device
1. Save the MFA TOTP key in 1Password using 1Password's TOTP field and built-in screen scanner to scan the QR code
1. ### Create the `SuperAdmin` IAM User
[Create a `SuperAdmin` IAM User](/layers/accounts/tutorials/how-to-create-superadmin-user/). This break-glass user is used during cold start bootstrapping before IAM Identity Center is provisioned. After cold start, it should only be used if your IdP is completely unavailable—otherwise, use Permission Sets.
1. Create the IAM user (do not enable "console login")
1. Set up MFA for the user
1. Create a single Access Key
1. Store credentials in 1Password: Access Key ID, Secret Access Key, Assigned MFA device ARN, and TOTP key
1. ### Configure Atmos Auth for SuperAdmin
Configure the `superadmin` profile to authenticate via Atmos during cold start. This allows you to run Atmos commands to deploy the foundation.
1. Set the `ATMOS_PROFILE` environment variable to select the superadmin profile:
```bash
export ATMOS_PROFILE=superadmin
```
1. Configure your user credentials by running the following command. You'll be prompted to enter your Access Key ID, Secret Access Key, and MFA ARN from 1Password:
```bash
atmos auth user configure
```
1. Start an authenticated session. You'll be prompted to enter a one-time MFA token:
```bash
atmos auth login -i core-root/terraform
```
1. Verify you can access the root account:
```bash
atmos auth exec -i core-root/terraform -- aws sts get-caller-identity
```
:::note Daily Usage
Once the profile is set and user credentials are configured, you only need to run `atmos auth login -i core-root/terraform` each day to start a new authenticated session.
:::
:::tip Atmos Profile Persistence
Add `export ATMOS_PROFILE=superadmin` to your shell configuration (`~/.zshrc` or `~/.bashrc`) to persist the setting across terminal sessions during cold start.
After cold start is complete and Identity Center is configured, you'll switch to a different profile (e.g., `devops` or `managers`) as described in [Configure Atmos Auth](/layers/identity/atmos-auth/).
:::
1. ### Enable IAM Access for Billing
By default, only the root user can view billing information. To allow IAM users and SSO roles (e.g., `BillingAdmin` permission set) to access billing, you must activate IAM billing access. This setting can only be changed by the root user.
:::warning Root User Sign-In Required
You must sign in using the **root user** of the management account (the email and password for the AWS account itself). IAM users and SSO permission sets **cannot** change this setting.
To sign in as the root user:
1. Go to [https://console.aws.amazon.com/](https://console.aws.amazon.com/)
1. Select **Root user**, enter the management account's **root email address**, and sign in with the root password
:::
1. Sign in to the AWS Console as the **root user** of the management account
1. Open [Account Settings](https://us-east-1.console.aws.amazon.com/billing/home?region=us-east-1#/Account)
1. Scroll down to **"IAM user and role access to Billing information"**
1. Click **Edit**, then select **Activate IAM Access**
1. Click **Update**
1. ### Enable Centralized Root Access
Enable centralized root access management to eliminate the need for per-account root credentials. This allows the management account to perform privileged root operations on member accounts without maintaining separate root passwords or MFA devices.
1. Navigate to [IAM → Root access management](https://console.aws.amazon.com/iam/home#/root-access-management)
1. Enable **Root credentials management**
1. Enable **Privileged root actions**
For more details, see [Centralized Root Access](/layers/identity/centralized-root-access/).
1. ### Enable Regions (Optional)
The 17 original AWS regions are enabled by default. If you are using a region that is not enabled by default (such as Middle East/Bahrain), you need to [enable it in your AWS account settings](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html).
1. ### Create the AWS Organization
Create the AWS Organization from the existing root account. This will convert your standalone account into an organization management account.
1. Navigate to [AWS Organizations](https://console.aws.amazon.com/organizations/v2/home)
1. Click **Create an organization**
1. Select **Create an organization with all features enabled** (this enables AWS RAM for Organizations)
1. Confirm the organization creation
:::tip Verify Organization Creation
After creating the organization, verify it was created successfully:
```bash
aws organizations describe-organization
```
The `FeatureSet` should return `ALL` if all features are enabled.
:::
1. ### Enable AWS RAM Sharing with AWS Organization
Enable AWS Resource Access Manager (RAM) sharing for your organization. This is required for sharing resources like Transit Gateway, VPC subnets, and other resources across accounts.
1. Navigate to [AWS RAM Settings](https://console.aws.amazon.com/ram/home#Settings:)
1. Enable **Enable sharing with AWS Organizations**
1. Confirm the setting is enabled
**Alternative: Use AWS CLI**
```bash
aws ram enable-sharing-with-aws-organization
```
Verify the setting:
```bash
aws organizations describe-organization --query 'Organization.FeatureSet'
```
This should return `ALL`.
1. ### Raise Account Limits
Request an increase of the Account Quota from AWS support. This request can take a few days to process, so it's important to submit it early to avoid blockers during account deployment.
From the `root` account (not `SuperAdmin`), increase the [account quota to 20+](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/organizations/quotas) for the Cloud Posse reference architecture, or more depending on your business use-case.
**Alternative: Use AWS CLI**
```bash
aws service-quotas request-service-quota-increase \
--service-code organizations \
--quota-code L-E619E033 \
--desired-value 20
```
Where `L-E619E033` is the quota code for "Default maximum number of accounts" under "AWS Organizations" in "us-east-1".
:::caution Processing Time
Account quota increases can take several days to be approved. Plan accordingly and submit this request as early as possible.
:::
Now that your AWS Organization is prepared with MFA, SuperAdmin credentials, billing access configured,
and the organization created with proper account limits, you're ready to initialize the Terraform state backend that will store all infrastructure state.
Initialize Terraform Backend
---
## Setup Organizational CloudTrail
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
Now that all the accounts have been deployed, we need to deploy CloudTrail for audit logging. This foundational component provides visibility into API activity across your AWS Organization.
| Steps | Actions |
| --------------- | ----------------------------------------------------------- |
| Deploy CloudTrail | `atmos workflow deploy/cloudtrail -f quickstart/foundation/accounts` |
## Deploy CloudTrail
Deploy CloudTrail and the CloudTrail bucket to enable audit logging across your organization:
This workflow deploys:
- **CloudTrail bucket** in `core-audit` — Centralized S3 bucket for storing CloudTrail logs
- **Organization CloudTrail** in `core-root` — Organization-wide trail that captures API activity from all accounts
## Auditing CloudTrail Logs
{/* TODO: Add content covering:
- How to query CloudTrail logs with Amazon Athena
- Setting up Athena tables for CloudTrail
- Example queries for common audit scenarios
- Link to cloudtrail-athena component if available
*/}
## What's Next?
With CloudTrail deployed, you have completed the accounts layer. The next step is to configure identity and access management.
Now that your accounts are deployed with audit logging enabled, you're ready to set up identity management with IAM Identity Center.
Setup Identity
---
## Create Account Root Users
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Note from '@site/src/components/Note';
This tutorial explains how to create and configure root user credentials for individual AWS accounts. With centralized root access enabled, this is typically not required for member accounts.
## Do You Need Per-Account Root Credentials?
With [centralized root access](/layers/identity/centralized-root-access/) enabled, member accounts do not require individual root credentials. The management account can perform privileged root operations on any member account using the `RootAccess` permission set.
**You only need per-account root credentials if:**
1. You have not enabled centralized root access
1. You need to perform recovery operations that cannot be done through centralized root access
1. Compliance requirements mandate individual root credentials for each account
:::tip Recommended Approach
For most organizations, we recommend using [centralized root access](/layers/identity/centralized-root-access/) instead of managing individual root credentials for each account. This reduces operational overhead and improves security.
:::
## Creating Root User Credentials
If you need to create root user credentials for an account, follow these steps for each account:
## Reset the Root User Password
1. Attempt to log in to the AWS console as a "root user" using the account's email address
1. Click the "Forgot password?" link
1. You will receive a password reset link via email (forwarded to your shared Slack channel if configured)
1. Click the link and enter a new password
   - Use 1Password to generate a password 26-38 characters long, including at least 3 characters from each class: lowercase, uppercase, digits, and symbols
1. Save the email address and generated password as web login credentials in 1Password
1. Record the account number in a separate field of the 1Password item (optional but recommended)
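If 1Password isn't handy, a password matching the same guidance can be generated in the shell. This is only a sketch (the symbol set and the 32-character length are assumptions; adjust to your policy), and the result should still be stored in 1Password:

```shell
#!/usr/bin/env bash
# Sketch: generate a 32-character password with at least 3 characters
# from each class (lowercase, uppercase, digit, symbol), per the guidance above.
set -eu

gen_password() {
  local len=32 pw=""
  while :; do
    pw=""
    # Collect allowed characters from the kernel's CSPRNG
    while [ "${#pw}" -lt "$len" ]; do
      pw="$pw$(head -c 256 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9!@#%^*_+-')"
    done
    pw="${pw:0:len}"
    # Accept only if every class appears at least 3 times; otherwise retry
    [ "$(printf '%s' "$pw" | tr -cd 'a-z' | wc -c)" -ge 3 ] || continue
    [ "$(printf '%s' "$pw" | tr -cd 'A-Z' | wc -c)" -ge 3 ] || continue
    [ "$(printf '%s' "$pw" | tr -cd '0-9' | wc -c)" -ge 3 ] || continue
    [ "$(printf '%s' "$pw" | tr -cd '!@#%^*_+-' | wc -c)" -ge 3 ] || continue
    printf '%s\n' "$pw"
    return 0
  done
}

gen_password
```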
## Configure MFA
1. Log in to the AWS console using the new password
1. Choose "My Security Credentials" from the account dropdown menu
1. Set up Multi-Factor Authentication (MFA) to use a Virtual MFA device
1. Save the MFA TOTP key in 1Password using 1Password's "One-Time Password" field
1. Enter the generated MFA codes in AWS to verify the MFA device
1. Save the Virtual MFA ARN in the same 1Password entry
## References
- [Centralized Root Access](/layers/identity/centralized-root-access/)
---
## Add a new Organizational Unit
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
Learn how to add a new Organizational Unit (OU) to an existing AWS Organization set up to Cloud Posse standards. The process involves updating the `account` catalog with the new OU details and reapplying the `account` component using the `SuperAdmin` user, ensuring all planned changes by Terraform are carefully reviewed before applying.
## Problem
We want to create a new Organizational Unit within an existing AWS Organization set up to Cloud Posse standards.
## Solution
Update the `account` catalog
Add the new OU to the `account` catalog and reapply the component.
:::info
The `account` component must be applied with the SuperAdmin user, which is typically found in 1Password. For more on
SuperAdmin, see [How to Create SuperAdmin user](/layers/accounts/tutorials/how-to-create-superadmin-user).
:::
For example, to add a new Organizational Unit called `example` with one account called `foo`, add the following to
`stacks/catalog/account.yaml`:
```yaml
components:
terraform:
account:
vars:
organizational_units:
- name: example
accounts:
- name: example-foo
tenant: example
stage: foo
tags:
eks: false
```
Then reapply the `account` component:
:::caution
The `account` component is potentially dangerous! Double-check all changes planned by Terraform
:::
```bash
assume-role SuperAdmin atmos terraform plan account -s core-gbl-root
assume-role SuperAdmin atmos terraform apply account -s core-gbl-root
```
---
## How to add or mirror a new region
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
**DRAFT**
## Problem
## Solution
The current primary region is `us-west-2` and the new desired region is `us-east-2`
### Additional region
1. Create root stacks for the new region, e.g. `ue2*.yaml`
2. Update the VPC CIDR documentation
3. Create minimal components in the YAML, such as `vpc`, `transit-gateway`, and perhaps `compliance` (or others) if
   applicable
4. Deploy the minimal components
5. Optionally deploy `dns-delegated` if a new Hosted Zone is required per region
   - This is no longer used going forward, as we can use a single Hosted Zone for `.example.com` and create multi-domain
     records within it, such as `postgres-example.ue2`, without having to create a `ue2..example.com` Hosted Zone
6. Optionally deploy the `transit-gateway-cross-region` component to peer both regions
```
TBD
```
7. Optionally deploy new GitHub runners (if applicable)
   1. Retrieve the new GitHub runner IAM role ARN
   1. Update `iam-primary-roles` to include the new IAM role and deploy it to update the `identity-cicd` role
8. Optionally deploy a new `spacelift-worker-pool` (if applicable)
   1. Set a worker pool ID map in the `spacelift` component
   1. Set a `worker_pool_name` global variable in the new region
   1. Update `iam-primary-roles` to include the new IAM role and deploy it to update the `identity-ops` role
### If new region needs to be a mirror of the primary region
1. Same steps as above, except instead of minimal components, we want to copy and paste all of the primary region into
the new desired region. We will not reprovision anything from `gbl*`.
2. Mirror the SSM parameters by exporting them from the primary region and importing them into the new region
```bash
NAMESPACE=eg # set to your organization's namespace
stage=sandbox
CURRENT_REGION=us-west-2
NEW_REGION=us-east-2
# get services
services=$(AWS_PROFILE=$NAMESPACE-gbl-$stage-admin AWS_REGION=$CURRENT_REGION aws ssm describe-parameters --query 'Parameters[].Name' | grep / | cut -d'/' -f2 | sort | uniq | tr '\n' ' ')
# export
AWS_PROFILE=$NAMESPACE-gbl-$stage-admin AWS_REGION=$CURRENT_REGION chamber export -o chamber-$stage.json $services
# import
for service in $services; do
AWS_PROFILE=$NAMESPACE-gbl-$stage-admin AWS_REGION=$NEW_REGION chamber import $service chamber-$stage.json;
done
```
3. Ensure all hostnames use the correct regional endpoints (either by Hosted Zone or by record)
4. Optionally (not recommended), if the tfstate bucket needs to be migrated:
   1. Make sure everything in Spacelift is confirmed/discarded/failed so nothing is left in an unconfirmed state
   2. Schedule a date with the customer so no applies go through
   3. Set the desired count on the Spacelift worker pool to 0, with a max and min count of 0
   4. Manually copy from the old tfstate bucket to the new tfstate bucket
   5. Open a PR to change all the `backend.tf.json` files over to the new bucket and set the new bucket in the global vars
   6. Check locally that the new bucket is used and stacks show no changes
   7. Merge the PR
   8. Revert the Spacelift worker pool
   9. Ensure everything is working in Spacelift
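The `chamber export` output from the SSM mirroring step above can be sanity-checked before importing into the new region. A dependency-free sketch (the file name and keys below are hypothetical) that lists the keys in an export file so the source and target regions can be compared:

```shell
# Hypothetical chamber JSON export (flat key/value object) for illustration
cat > chamber-sandbox.json <<'EOF'
{
  "db_host": "db.example.internal",
  "db_user": "app"
}
EOF

# List the keys so the two regions can be diffed after import
list_keys() {
  # crude but dependency-free: extract the quoted key before each colon
  sed -n 's/^[[:space:]]*"\([^"]*\)"[[:space:]]*:.*/\1/p' "$1" | sort
}

list_keys chamber-sandbox.json
```

Running `list_keys` against exports from both regions and diffing the results confirms nothing was dropped during the import.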
### If an old region needs to be destroyed
The following can be destroyed in Spacelift using a run task with `terraform destroy -auto-approve`
The following should be destroyed locally with `atmos`
---
## How to Adopt/Import Legacy AWS Accounts for Management with Atmos
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Sometimes provisioning a new account and migrating resources isn’t an option. Adopting a legacy account into your new organization allows you to manage new resources using Atmos and Terraform while leaving old ones unmanaged. This guide provides step-by-step instructions for importing legacy AWS accounts into the Infrastructure Monorepo, updating account configurations, and integrating with Atmos and Spacelift for automation.
## Problem
Legacy AWS accounts may be owned by an organization (the company, not the AWS organization) but not part of the AWS
Organization provisioned by the Infrastructure Monorepo via Atmos. This process is meant to “adopt” or “import” the
accounts in question for use with the Infrastructure Monorepo, so that their infrastructure may be automated via Atmos and Spacelift.
## Solution
:::caution
When you remove a member account from an Organization, the member account’s access to AWS services that are integrated
with the Organization are lost. In some cases, resources in the member account might be deleted. For example, when an
account leaves the Organization, the AWS CloudFormation stacks created using StackSets are removed from the management
of StackSets. You can choose to either delete or retain the resources managed by the stack. For a list of AWS services
that can be integrated with Organizations, see [AWS services that you can use with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services_list.html). ([source](https://aws.amazon.com/premiumsupport/knowledge-center/organizations-move-accounts/))
:::
:::tip
Legacy AWS Accounts must be invited to the AWS Organization managed by the Infrastructure Monorepo, and imported into
the `account` component. From there, the Geodesic image and the Atmos stacks must be updated to reflect the presence of
the new accounts.
:::
### Invite the AWS Account(s) Into the Organization Managed by the Infrastructure Monorepo
Before the AWS account(s) can be managed by Terraform + Atmos in the Infrastructure Monorepo, they need to be invited to
the AWS Organization managed by the Infrastructure Monorepo.
From the management account (usually named `root` or `mgmt-root`), navigate to `AWS Organizations` → `AWS Accounts` →
`Add an AWS Account` → `Invite an existing AWS account`. Enter the ID of the AWS account that is to be invited, then
`Send Invitation`.
This step does **not** have to be done (and in terms of best-practices **should not** be done) via the root user. It can
be done via an assumed role provisioned by `iam-delegated-roles` that has the `organizations:DescribeOrganization` and
`organizations:InviteAccountToOrganization` permissions — i.e. the `admin` delegated role.
The email address of the management account must be verified, if it has not been already. Otherwise, the invitation
cannot be sent.
An email will be sent to the email address(es) associated with the AWS account(s) in question. The link leads to the AWS
console where the invitation can be accepted or denied. Once again, this step does **not** have to be done (and in terms
of best-practices **should not** be done) via the root user. However, since this is a legacy account, it does not have
the roles deployed by `iam-delegated-roles`. Thus, ensure that the assumed role used to accept the invitation has the
`organizations:ListHandshakesForAccount`, `organizations:AcceptHandshake` and `iam:CreateServiceLinkedRole` permissions,
for example via the built-in `AdministratorAccess` policy.
### Add the Account(s) to the `account` Component Configuration
:::info
The OU that you are adding the legacy AWS accounts to needs to be adjusted to match your business use case. Some
organizations may choose to employ a “legacy” OU. Others, as in the example below, have an OU for each “tenant” within
the organization, and legacy AWS accounts will live alongside current AWS accounts within a single OU.
:::
Your existing `account` component configuration should look something like this:
```yaml
components:
terraform:
account:
backend:
s3:
workspace_key_prefix: account
role_arn: null
vars:
enabled: true
account_email_format: aws+%s@acme.com
account_iam_user_access_to_billing: DENY
organization_enabled: true
aws_service_access_principals:
- cloudtrail.amazonaws.com
- ram.amazonaws.com
- sso.amazonaws.com
enabled_policy_types:
- SERVICE_CONTROL_POLICY
- TAG_POLICY
organization_config:
root_account:
name: mgmt-root
stage: root
tenant: mgmt
tags:
eks: false
accounts: []
organization:
service_control_policies: []
organizational_units:
- name: mgmt
accounts:
- name: mgmt-artifacts
stage: artifacts
tenant: mgmt
tags:
eks: false
- name: mgmt-audit
stage: audit
tenant: mgmt
tags:
eks: false
- name: mgmt-automation
stage: automation
tenant: mgmt
tags:
eks: true
- name: mgmt-corp
stage: corp
tenant: mgmt
tags:
eks: true
- name: mgmt-dns
stage: dns
tenant: mgmt
tags:
eks: false
- name: mgmt-identity
stage: identity
tenant: mgmt
tags:
eks: false
- name: mgmt-network
stage: network
tenant: mgmt
tags:
eks: false
- name: mgmt-sandbox
stage: sandbox
tenant: mgmt
tags:
eks: true
- name: mgmt-security
stage: security
tenant: mgmt
tags:
eks: false
service_control_policies:
- DenyLeavingOrganization
- name: core
accounts:
- name: core-dev
stage: dev
tenant: core
tags:
eks: true
- name: core-staging
stage: staging
tenant: core
tags:
eks: true
- name: core-prod
stage: prod
tenant: core
tags:
eks: true
service_control_policies:
- DenyLeavingOrganization
service_control_policies_config_paths:
- "https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/0.9.1/catalog/cloudwatch-logs-policies.yaml"
- "https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/0.9.1/catalog/deny-all-policies.yaml"
- "https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/0.9.1/catalog/iam-policies.yaml"
- "https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/0.9.1/catalog/kms-policies.yaml"
- "https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/0.9.1/catalog/organization-policies.yaml"
- "https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/0.9.1/catalog/route53-policies.yaml"
- "https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/0.9.1/catalog/s3-policies.yaml"
```
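Note the `account_email_format` above: the `%s` placeholder is substituted with the account name, producing a plus-addressed email per account (as seen in the Terraform plan later in this guide). A quick way to preview the addresses before applying:

```shell
# Preview the account emails produced by account_email_format (value from the config above)
account_email_format="aws+%s@acme.com"

for name in core-legacydev core-legacystaging core-legacyprod; do
  # printf substitutes each account name into the %s placeholder
  printf "${account_email_format}\n" "$name"
done
```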
Entries corresponding to the legacy AWS accounts need to be added before the accounts can be imported:
```yaml
...
- name: core-legacydev
stage: dev
tenant: core
tags:
eks: true
- name: core-legacystaging
stage: staging
tenant: core
tags:
eks: true
- name: core-legacyprod
stage: prod
tenant: core
tags:
eks: true
...
```
Once entries are created, a Terraform plan can be run against the `account` component when assuming the `admin`
delegated role in `mgmt-root`:
:::caution
If the Terraform plan attempts to destroy the newly-added legacy account, do not apply it! See section on working around
destructive Terraform plans for newly-added legacy accounts.
:::
```
√ : [eg-mgmt-gbl-root-admin] (HOST) infrastructure ⨠ atmos terraform plan account -s mgmt-gbl-root
```
```
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_organizations_account.organizational_units_accounts["core-legacydev"] will be created
+ resource "aws_organizations_account" "organizational_units_accounts" {
+ arn = (known after apply)
+ email = "aws+core-legacydev@acme.com"
+ iam_user_access_to_billing = "DENY"
+ id = (known after apply)
+ joined_method = (known after apply)
+ joined_timestamp = (known after apply)
+ name = "core-legacydev"
+ parent_id = "ou-[REDACTED]"
+ status = (known after apply)
+ tags = {
+ "Environment" = "gbl"
+ "Name" = "core-legacydev"
+ "Namespace" = "eg"
+ "Stage" = "root"
+ "Tenant" = "mgmt"
+ "eks" = "true"
}
+ tags_all = {
+ "Environment" = "gbl"
+ "Name" = "core-legacydev"
+ "Namespace" = "eg"
+ "Stage" = "root"
+ "Tenant" = "mgmt"
+ "eks" = "true"
}
}
# aws_organizations_account.organizational_units_accounts["core-legacyprod"] will be created
+ resource "aws_organizations_account" "organizational_units_accounts" {
+ arn = (known after apply)
+ email = "aws+core-legacyprod@acme.com"
+ iam_user_access_to_billing = "DENY"
+ id = (known after apply)
+ joined_method = (known after apply)
+ joined_timestamp = (known after apply)
+ name = "core-legacyprod"
+ parent_id = "ou-[REDACTED]"
+ status = (known after apply)
+ tags = {
+ "Environment" = "gbl"
+ "Name" = "core-legacyprod"
+ "Namespace" = "eg"
+ "Stage" = "root"
+ "Tenant" = "mgmt"
+ "eks" = "true"
}
+ tags_all = {
+ "Environment" = "gbl"
+ "Name" = "core-legacyprod"
+ "Namespace" = "eg"
+ "Stage" = "root"
+ "Tenant" = "mgmt"
+ "eks" = "true"
}
}
# aws_organizations_account.organizational_units_accounts["core-legacystaging"] will be created
+ resource "aws_organizations_account" "organizational_units_accounts" {
+ arn = (known after apply)
+ email = "aws+core-legacystaging@acme.com"
+ iam_user_access_to_billing = "DENY"
+ id = (known after apply)
+ joined_method = (known after apply)
+ joined_timestamp = (known after apply)
+ name = "core-legacystaging"
+ parent_id = "ou-[REDACTED]"
+ status = (known after apply)
+ tags = {
+ "Environment" = "gbl"
+ "Name" = "core-legacystaging"
+ "Namespace" = "eg"
+ "Stage" = "root"
+ "Tenant" = "mgmt"
+ "eks" = "true"
}
+ tags_all = {
+ "Environment" = "gbl"
+ "Name" = "core-legacystaging"
+ "Namespace" = "eg"
+ "Stage" = "root"
+ "Tenant" = "mgmt"
+ "eks" = "true"
}
}
Plan: 3 to add, 0 to change, 0 to destroy.
Changes to Outputs:
...
```
##### Workaround for Destructive Terraform Plans for Legacy Accounts
Terraform will sometimes try to destroy a legacy account because the `iam_user_access_to_billing` attribute is not
modifiable via Terraform (or the AWS CLI, for that matter):
[https://github.com/hashicorp/terraform-provider-aws/issues/12959](https://github.com/hashicorp/terraform-provider-aws/issues/12959)
[https://github.com/hashicorp/terraform-provider-aws/issues/12585](https://github.com/hashicorp/terraform-provider-aws/issues/12585)
[https://github.com/aws/aws-cli/issues/6252](https://github.com/aws/aws-cli/issues/6252)
The workaround for this is to edit the `iam_user_access_to_billing` attribute manually in the Terraform state:
1. Download the Terraform state locally in Geodesic via
`assume-role aws s3 cp s3://eg-mgmt-ue1-root-admin-tfstate/account/mgmt-root/terraform.tfstate ./terraform.tfstate`
2. Find the `iam_user_access_to_billing` attribute for the account in question and change it to either `ENABLE` or
`DENY`, depending on whether or not IAM access to billing is enabled for the account in question. Ensure that the
value entered into the state manually reflects the actual state of the AWS account.
3. Overwrite the Terraform state via
`assume-role aws s3 cp --sse AES256 ./terraform.tfstate s3://eg-mgmt-ue1-root-admin-tfstate/account/mgmt-root/terraform.tfstate`
4. Run the same Terraform plan and copy the expected MD5 digest for the Terraform state.
5. In DynamoDB, find the MD5 digest item for the Terraform state for the workspace in question (e.g. `mgmt-root` in
`eg-mgmt-ue1-root`) and overwrite the current digest value with the expected digest value.
### Import Legacy AWS Account(s) Into the `account` Component Workspace and Update Legacy Account Attributes
Using the resources address from the Terraform plan, the AWS account(s) can now be imported:
```bash
$ cd components/terraform/account
$ terraform import 'aws_organizations_account.organizational_units_accounts["core-legacydev"]' 123456789024
$ terraform import 'aws_organizations_account.organizational_units_accounts["core-legacystaging"]' 123456789025
$ terraform import 'aws_organizations_account.organizational_units_accounts["core-legacyprod"]' 123456789026
```
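When several accounts are involved, the import commands can be generated from a name-to-ID map rather than typed by hand. A bash sketch (account IDs are the placeholders used above; substitute your real IDs, and review the printed commands before running them):

```shell
#!/usr/bin/env bash
# Map legacy account names to AWS account IDs (placeholder IDs from this guide)
declare -A legacy_accounts=(
  [core-legacydev]="123456789024"
  [core-legacystaging]="123456789025"
  [core-legacyprod]="123456789026"
)

# Print (rather than execute) each import command so it can be reviewed first
for name in "${!legacy_accounts[@]}"; do
  printf "terraform import 'aws_organizations_account.organizational_units_accounts[\"%s\"]' %s\n" \
    "$name" "${legacy_accounts[$name]}"
done
```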
Once the legacy accounts have been imported to the `account` component workspace, their attributes can be updated by
running a Terraform apply.
```bash
$ cd ../../../ # go back to the root of the infrastructure monorepo
$ atmos terraform plan account -s mgmt-gbl-root
... # verify that there are no destructive changes, and that all of the intended attributes are correct, especially the email address associated with each account
$ atmos terraform apply account -s mgmt-gbl-root
```
### Update Geodesic Image
The Geodesic container image contains an `aws-accounts` script that is responsible for creating the AWS CLI profile used
within the image both by users and Spacelift. It needs to be updated to reflect the newly-added AWS accounts.
The existing `rootfs/usr/local/bin/aws-accounts` script should have a section with an associative array representing the
account names and their IDs, and a list representing the order of profiles corresponding to each AWS account:
```bash
...
declare -A accounts
# root account intentionally omitted
accounts=(
[mgmt-artifacts]="123456789012"
[mgmt-audit]="123456789013"
[mgmt-automation]="123456789014"
[mgmt-corp]="123456789015"
[mgmt-dns]="123456789016"
[mgmt-identity]="123456789017"
[mgmt-network]="123456789018"
[mgmt-sandbox]="123456789019"
[mgmt-security]="123456789020"
[core-dev]="123456789021"
[core-prod]="123456789022"
[core-staging]="123456789023"
)
# When choosing a profile, the users will be presented with a
# list of profiles in this order
readonly profile_order=(
mgmt-artifacts
mgmt-audit
mgmt-automation
mgmt-corp
mgmt-dns
mgmt-identity
mgmt-network
mgmt-sandbox
mgmt-security
mgmt-root
core-dev
core-prod
core-staging
)
...
```
Both the associative array and the list need to be updated to reflect the newly added legacy AWS accounts:
```bash
...
declare -A accounts
# root account intentionally omitted
accounts=(
...
[core-legacydev]="123456789024"
[core-legacyprod]="123456789025"
[core-legacystaging]="123456789026"
)
# When choosing a profile, the users will be presented with a
# list of profiles in this order
readonly profile_order=(
...
core-legacydev
core-legacyprod
core-legacystaging
)
...
```
The script then needs to be re-run to update both the local AWS CLI config and the versioned CI/CD AWS CLI config:
```bash
$ aws-accounts config-saml >> ~/.aws/config
$ aws-accounts config-cicd > rootfs/etc/aws-config/aws-config-cicd # (and commit this change upstream)
```
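As a rough sketch of what such a script emits, this hypothetical snippet generates one AWS CLI profile section per account from the same associative array (the profile naming and role ARN template are assumptions for illustration, not the actual `aws-accounts` output):

```shell
#!/usr/bin/env bash
# Hypothetical subset of the accounts map from the aws-accounts script
declare -A accounts=(
  [core-legacydev]="123456789024"
  [core-legacyprod]="123456789025"
)

# Emit one AWS CLI config profile per account; the role name is hypothetical
for name in core-legacydev core-legacyprod; do
  cat <<EOF
[profile ${name}-admin]
role_arn = arn:aws:iam::${accounts[$name]}:role/admin
EOF
done
```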
### Add and Deploy Atmos Stacks for Newly-added Legacy AWS Accounts
Once the Geodesic image has been updated, global and regional stacks for the newly-added legacy accounts need to be
added. These stacks should also contain core components for the newly-added accounts.
```
...
core-gbl-legacydev.yaml
core-gbl-legacyprod.yaml
core-gbl-legacystaging.yaml
core-uw2-legacydev.yaml
core-uw2-legacyprod.yaml
core-uw2-legacystaging.yaml
...
```
The global stacks should contain the `iam-delegated-roles` and `dns-delegated` components:
```yaml
import:
- core-gbl-globals
- catalog/iam-delegated-roles
- catalog/dns-delegated
vars:
stage: legacydev
terraform:
vars: {}
components:
terraform:
dns-delegated:
vars:
zone_config:
- subdomain: legacydev
zone_name: core.acme-infra.net
```
The regional stacks should contain `dns-delegated` and `vpc` components:
```yaml
import:
- core-uw2-globals
- catalog/dns-delegated
- catalog/vpc
vars:
stage: legacydev
terraform:
vars: {}
helmfile:
vars: {}
components:
terraform:
dns-delegated:
vars:
zone_config:
- subdomain: uw2.legacydev
zone_name: core.acme-infra.net
vpc:
vars:
cidr_block: 172.2.0.0/18 # change this to the VPC CIDR block in the legacy account
```
The `vpc` component will need to have the existing VPC in the legacy account imported to the component’s workspace:
```bash
$ atmos terraform plan vpc -s core-uw2-legacydev
... # copy resource ID of VPC
$ cd components/terraform/vpc
$ terraform import ... # import VPC and subnets
```
Once the stacks are added, deploy the following components:
:::info
If AWS SSO is being used via the `aws-sso` component, the configuration of the aforementioned component needs to be
updated in order to configure permission sets for the newly added accounts. Then, the component needs to be redeployed.
:::
- `account-map` (in `mgmt-gbl-root`)
- `iam-primary-roles` (in `mgmt-gbl-identity`)
- `iam-delegated-roles` (in each new global stack)
- `dns-delegated` (in each new global and regional stack)
- `vpc` (in each new regional stack)
- `compliance` (if being used)
### Validate Access
:::info
Existing AWS accounts invited to an AWS Organization lack the `OrganizationAccountAccessRole` IAM role, which is created
automatically when an account is provisioned through the AWS Organizations service. That role grants delegated IAM users
in the management account admin permissions over the member account.
Thus the
[https://github.com/cloudposse/terraform-aws-organization-access-role](https://github.com/cloudposse/terraform-aws-organization-access-role)
module should be leveraged as part of the `account-settings` component in order to deploy this IAM role.
:::
Finally, validate access to the newly-added legacy accounts via the delegated IAM roles by signing into the AWS console
and also running `assume-role eg-core-gbl-legacydev-admin aws sts get-caller-identity` (for each newly added account)
within Geodesic.
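To repeat that check for every newly-added account, a small loop can print the validation command for each stage (profile names follow the `eg-core-gbl-<stage>-admin` convention used above; run the printed commands inside Geodesic):

```shell
# Print the validation command for each newly-added legacy account
for stage in legacydev legacystaging legacyprod; do
  echo "assume-role eg-core-gbl-${stage}-admin aws sts get-caller-identity"
done
```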
### References
- [https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html)
- [account](/components/library/aws/account/)
- [Deploy Accounts](/layers/accounts/deploy-accounts/)
---
## How to Create and Setup AWS Accounts
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Note from '@site/src/components/Note';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
This guide covers the process of creating and setting up new AWS accounts. It includes detailed instructions for updating account catalogs, configuring stack settings, deploying necessary components, and managing AWS account profiles.
## Problem
Setting up new AWS accounts can be complex and error-prone without proper guidance and tooling.
## Solution
This guide provides a step-by-step approach to create and configure new AWS accounts using Cloud Posse's reference architecture components and conventions.
### Update Account Catalog
First, add the new account to your account catalog configuration. This defines the account structure within your AWS Organization.
Navigate to your account catalog file (typically `stacks/catalog/account.yaml`) and add the new account under the appropriate Organizational Unit (OU).
**Example for a new account named `foobar`:**
```yaml
organizational_units:
- name: core
accounts:
# ... existing accounts ...
- name: foobar
stage: foobar
tags:
eks: false
```
Choose the appropriate OU based on your organization's account strategy. Common OUs include `core` for core infrastructure and `plat` for platform accounts.
### Create Stack Configuration Files
Create the corresponding stack configuration files for your new account in the `stacks/orgs/<namespace>/<tenant>/<stage>/` folder (or follow the convention your organization uses). See the following example where the namespace is `acme`, the tenant is `core`, and the stage is `foobar`.
**Example Directory Structure:**
For a new account `foobar` in the `core` tenant under namespace `acme`:
```
stacks/orgs/acme/core/foobar/
├── _defaults.yaml
├── global-region/
│ ├── baseline.yaml # Account settings
│ └── identity.yaml # AWS Team Roles
└── us-east-1/
├── baseline.yaml # Regional baseline
└── app.yaml # Regional application platform
```
Start with a basic global-region stack for account settings and create placeholder regional stacks for future expansion. You can add more regional stacks and components as your account requirements grow.
### Define Stage Mixin
By convention, we treat every account as an operating stage. Stages are configured as mixins, so that each stack operating in that stage can import that mixin to have a consistent stage name.
**For example:** `stacks/mixins/stage/foobar.yaml`
Create a mixin file for your stage to keep configuration DRY. This file contains only variables.
```yaml
# This file is for vars only; do not import components here
# For more information, see "Mixins and Imports with Atmos" in the baseline documentation
vars:
stage: foobar
```
### Configure Account Defaults
**For example:** `stacks/orgs/acme/core/foobar/_defaults.yaml`
This file is necessary for keeping configuration DRY and establishing common settings to be imported in all subsequent files.
```yaml
import:
- orgs/acme/core/_defaults
- mixins/stage/foobar
```
### Configure Account Settings
**For example:** `stacks/orgs/acme/core/foobar/global-region/baseline.yaml`
This file configures account-level settings and policies.
```yaml
import:
- orgs/acme/core/foobar/_defaults
- mixins/region/global-region
- catalog/account-settings
components:
terraform:
account-settings:
vars:
# Allow creating public S3 buckets in this account
# Public buckets are used, for example, to deploy documentation websites into preview environments and serve them via Lambda@Edge
block_public_acls: false
ignore_public_acls: false
block_public_policy: false
restrict_public_buckets: false
```
### Configure AWS Team Roles
**For example:** `stacks/orgs/acme/core/foobar/global-region/identity.yaml`
This file configures cross-account IAM roles and team access permissions.
```yaml
import:
  - orgs/acme/core/foobar/_defaults
  - mixins/region/global-region
  - catalog/aws-team-roles

components:
  terraform:
    aws-team-roles:
      vars:
        roles:
          reader:
            trusted_teams:
              - devops
              - developers
              - managers
          poweruser:
            trusted_teams:
              - devops
              - developers
              - managers
          terraform:
            trusted_teams:
              - devops
              - developers
              - managers
              - gitops
```
### Regional Stack
**For example:** `stacks/orgs/acme/core/foobar/us-east-1/baseline.yaml`
Create regional stacks based on your specific needs. Start with a basic placeholder that you can expand later.
```yaml
import:
  - orgs/acme/core/foobar/_defaults
  - mixins/region/us-east-1

components:
  terraform: {}
```
### Submit and Merge Configuration PR
Create a pull request with your configuration changes. This is a critical step: review the changes carefully before merging, because account creation is difficult to reverse and affects your AWS Organization structure.
Once the PR is reviewed, approved, and merged, proceed to the next step.
### Deploy Account Infrastructure
Deploy the necessary components to create and configure your new account. Use `plan` and `apply` commands without `-auto-approve` for safety.
**Prerequisites:**
- Ensure you have `SuperAdmin`, `managers`, or appropriate elevated permissions
**Deploy Components:**
```bash
# Create the new account
atmos terraform apply account --stack core-gbl-root
# Update account map to include the new account
atmos terraform apply account-map --stack core-gbl-root
# Configure account settings for the new account
atmos terraform apply account-settings --stack core-gbl-foobar
# Deploy AWS Team Roles to enable cross-account access
atmos terraform apply aws-team-roles --stack core-gbl-foobar
```
The order of deployment is important. The `account` component creates the account, `account-map` updates the account mapping required to map accounts to IAM roles, `account-settings` configures the account, and `aws-team-roles` enables cross-account role assumption.
### Complete Account Setup via ClickOps
After deploying the account infrastructure, you need to perform some manual configurations to finalize the account setup.
**Recommended Steps:**
- Reset the root user password and set up MFA
- Enable any necessary optional AWS regions
- Unsubscribe the account's email address from marketing emails
Save the root credentials, MFA TOTP key, and account number in 1Password. Use a highly restricted vault, and only share access on a strict need-to-know basis.
For detailed step-by-step instructions, see [Complete Account Setup via ClickOps](/layers/accounts/deploy-accounts/#step-number-clickops-to-complete-account-setup).
### Update AWS Configuration
After deploying the infrastructure components, update your AWS configuration files to include the new account.
**Generate AWS Configuration Files:**
**Commit and Push Changes!**
### Verify Account Setup
Verify that your new account has been properly configured and is accessible.
**Check Account Creation:**
- Verify the account appears in your AWS Organization console
- Confirm the account has the correct email address and tags
- Ensure the account is in the correct Organizational Unit
**Test Cross-Account Access:**
- Verify you can assume roles in the new account from your identity account
- Test Terraform operations in the new account
- Confirm AWS Team Roles are properly configured
If you encounter issues, check the Terraform outputs from each component deployment for error messages and configuration details.
## References
- [How to Delete AWS Accounts](/layers/accounts/tutorials/how-to-delete-aws-accounts) (in case a mistake was made)
- [Access Control Evolution](/layers/identity/tutorials/access-control-evolution/)
- [Account Management](/layers/accounts)
- [Identity and Access Management](/layers/identity)
---
## How to Create `SuperAdmin` user
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
The `SuperAdmin` user is an IAM user created for cold start bootstrapping and emergency break-glass access only. This guide outlines the steps to create a secure `SuperAdmin` user in AWS, including setting permissions, enabling MFA, and storing credentials safely in 1Password.
## About SuperAdmin
The `SuperAdmin` user is a break-glass IAM user intended for two scenarios:
1. **Cold start bootstrapping** — Initial account setup before IAM Identity Center is configured
1. **Emergency IdP failure** — When your Identity Provider is completely unavailable and you cannot authenticate via IAM Identity Center
After cold start is complete, prefer these alternatives:
| Scenario | Recommended Approach |
|----------|---------------------|
| Administrative tasks in a specific account | Use `AdministratorAccess` Permission Set for that account |
| Organization-level or emergency administrative tasks | Use [Centralized Root Access](/layers/identity/centralized-root-access/) via the `RootAccess` Permission Set |
| Emergency access (IdP down) | `SuperAdmin` (this is the only valid use case post-cold-start) |
Because `SuperAdmin` is an IAM user with static credentials, there is no audit trail for *who* used the credentials—all access appears as the same user. Permission Sets provide per-user audit trails via CloudTrail, making them the preferred option whenever your IdP is available.
[Follow the prerequisites steps in the How-to Get Started guide](/layers/project/#0-prerequisites)
### Create the SuperAdmin User
First, create the SuperAdmin IAM user in the AWS web console.
1. Login to the AWS `root` account using the root credentials.
1. In the IAM console, select "Users" on the sidebar.
1. Click "Add users" button
1. Enter "SuperAdmin" for "User name" and check "Programmatic access" and leave "AWS Management Console access" unchecked. Click "Next: Permissions"
1. Under "Set permissions", select "Attach existing policies directly". A list should appear, from which you should check "AdministratorAccess". Click "Next: Tags"
1. Skip the tags, Click "Next: Review"
1. Review and click "Create user"
1. The Success page should show you the "Access key ID" and hidden "Secret access key" which can be revealed by clicking "Show". Copy these to your secure credentials storage as you will need them shortly
1. Click "Close" to return to the IAM console. Select "Users" on the sidebar if it is not already selected. You should see a list of users. Click the user name "SuperAdmin" (which should be a hyperlink) to take you to the Users -> SuperAdmin "Summary" page
1. On the "Users -> SuperAdmin" "Summary" page, click on the "Security credentials" tab
1. In the "Sign-in credentials" section, find: "Assigned MFA device: Not assigned | Manage" and click "Manage"
1. Choose "Virtual MFA device" and click "Continue"
1. Press the "Show secret key" button
1. Copy the key into 1Password as an AWS Credential using the "MFA" field
1. Use the MFA codes from 1Password to complete the MFA setup process (you will input 2 consecutive codes)
1. You should be taken back to the "Security Credentials" tab, but now the "Assigned MFA device" field should have an ARN like `arn:aws:iam::[AWS ACCOUNT ID]:mfa/SuperAdmin`
1. Copy the ARN and keep it with the Access Key in 1Password
1. Now we need to create an Access Key for CLI access. Click on the "Create Access Key" under "Access Keys"
1. Select "Command Line Interface" and click the "I understand..." checkbox then click 'Next'
1. Enter a description if you like, such as 'SuperAdmin CLI Access' and click 'Create'
### Store SuperAdmin Credentials in 1Password
The `SuperAdmin` credentials should be stored in 1Password. The most appropriate item type for them is `login`, even though these are programmatic credentials rather than an actual login with an endpoint from which a website favicon can be retrieved. Leave the password field empty. For convenience in retrieving the TOTP code when using Leapp, save `com.leapp.app` as a website URL.
1. Set the username to `SuperAdmin`
1. Create fields for the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and the TOTP (the "One-Time Password" field type in 1Password), using the AWS virtual MFA device's secret
1. Add a note in the following format:
```
This account's Access Key should be made inactive when not needed.
CURRENT STATUS: ACTIVE
Use this account for API/command line access to administrative functions that IAM roles cannot do, such as provision IAM roles.
This account should not be allowed to log in to the AWS console, and therefore does not have a password.
Root account ID: [AWS ACCOUNT ID]
User ARN arn:aws:iam::[AWS ACCOUNT ID]:user/SuperAdmin
MFA Device ARN arn:aws:iam::[AWS ACCOUNT ID]:mfa/SuperAdmin
```
The resulting entry in 1Password should contain the username, the access key fields, the one-time password, and the note described above.
1. Hit save once you are done. If the SuperAdmin credentials are later made inactive, do not forget to update the note in this item
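For illustration, the `[AWS ACCOUNT ID]` placeholders in the note above can be filled in with a simple substitution (the account ID below is fake):

```shell
# Fake account ID, purely for illustration
account_id="111111111111"
template='User ARN arn:aws:iam::[AWS ACCOUNT ID]:user/SuperAdmin'

# Substitute the placeholder with the real account ID
filled=$(echo "$template" | sed "s/\[AWS ACCOUNT ID\]/$account_id/")
echo "$filled"  # → User ARN arn:aws:iam::111111111111:user/SuperAdmin
```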
## References
[REFARCH-73 - Provision SuperAdmin User for Root Level IAM Management](/layers/accounts/prepare-aws-organization/#create-the-superadmin-iam-user)
---
## How to Delete AWS Accounts
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Note from '@site/src/components/Note';
import Admonition from '@theme/Admonition';
Step-by-step instructions for deleting AWS accounts that are no longer needed or were provisioned by accident. We cover renaming email addresses to avoid future conflicts, using cloud-nuke to delete all resources, and manually closing the account through the AWS Console. We also include recommendations for renaming or repurposing accounts to avoid the overhead of deletion.
## Problem
Sometimes accounts are provisioned that are no longer needed or were provisioned by accident. AWS provides no easy programmatic way to destroy accounts. The following ClickOps method is required to destroy the account.
Email addresses associated with AWS accounts are globally unique and cannot be reused even after account deletion. If you ever intend to use the email address with AWS again on another account, we strongly recommend renaming the email address on record before deleting the account.
## Solution
We recommend renaming or repurposing accounts rather than deleting them due to the overhead and complexity of the deletion process.
### Delete All Account Resources
Closing an account might not automatically terminate all active resources. You may continue to incur charges for active resources even after closing the account. To avoid tedious manual steps, leverage [cloud-nuke](https://github.com/gruntwork-io/cloud-nuke) to delete all resources.
**Install cloud-nuke:**
```bash
brew install cloud-nuke
```
**Dry run to see what will be deleted:**
```bash
cloud-nuke aws --dry-run
```
**Delete all resources (WARNING: This will delete ALL resources in the account):**
```bash
cloud-nuke aws
```
**Export required AWS config for running cloud-nuke:**
Save the following in `.envrc` (or wherever you keep your environment configuration):
```bash
export AWS_CONFIG_FILE=rootfs/etc/aws-config/aws-config-teams
# This is necessary for cloud-nuke
export AWS_SDK_LOAD_CONFIG=true
```
Then run cloud-nuke with the desired profile, for example:
```bash
AWS_PROFILE=core-gbl-root-admin cloud-nuke aws --dry-run
```
Instead of using the AWS profile, you can also use the SuperAdmin user credentials directly. This is often simpler for one-off operations like account deletion.
**Create a shell script to automate cloud-nuke across accounts:**
Create `nuke-echo.sh`:
```bash
#!/usr/bin/env bash
# Echo (rather than run) a cloud-nuke command for every admin profile in the config
grep '\[profile' rootfs/etc/aws-config/aws-config-teams \
  | cut -d' ' -f2 \
  | cut -d']' -f1 \
  | grep admin \
  | while read -r profile; do
      echo AWS_PROFILE="$profile" cloud-nuke aws "$@"
    done
```
**Run with specific regions and exclusions:**
```bash
./nuke-echo.sh --region us-east-2 --region us-west-2 --region eu-west-3 --exclude-resource-type s3 --exclude-resource-type guardduty --exclude-resource-type transit-gateway
```
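To preview what the `grep`/`cut` pipeline in `nuke-echo.sh` selects, you can run the same chain against a sample config snippet (the profile names below are illustrative):

```shell
# Illustrative excerpt of an aws-config-teams file
config='[profile core-gbl-root-admin]
region = us-east-2
[profile core-gbl-audit-reader]
region = us-east-2
[profile core-gbl-security-admin]
region = us-east-2'

# Same pipeline as nuke-echo.sh: extract profile names, keep only the admin ones
profiles=$(printf '%s\n' "$config" | grep '\[profile' | cut -d' ' -f2 | cut -d']' -f1 | grep admin)
echo "$profiles"
```

Only the `*-admin` profiles survive; `core-gbl-audit-reader` is filtered out by the final `grep admin`.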
Delete resources in the following order for best results:
- Security accounts
- Audit accounts
- Platform accounts (dev, staging, qa, prod, perf, sandbox)
- Corp accounts
- Auto accounts
- Network accounts (due to transit gateway)
Consider skipping:
- DNS accounts
- Identity accounts
- Root accounts
Skip the following resources until the very end:
- `iam` - due to IAM roles used to initiate cloud-nuke
- `s3` - due to the time it takes to delete S3 objects
- `guardduty` - controlled by security account across all accounts
- `asg` - can fail to destroy EKS ASGs
- `transit-gateway` - controlled by network account
### Handle Manual Deletions
Some resources are not covered by cloud-nuke and require manual deletion:
**GuardDuty:**
- Navigate to the security account in AWS console
- Go to GuardDuty settings, disassociate all members, then suspend and disable
- Go to root account and remove security from being the GuardDuty delegate
**Per-region resources:**
- AWS Backup
- MWAA (Managed Workflows for Apache Airflow)
- MSK (Managed Streaming for Apache Kafka)
- ElastiCache
- EFS (Elastic File System)
- EC2 Client VPN
- Transit Gateway attachments in network account
- Any other resources that use EC2 Network Interfaces (ENI)
### Prepare Account for Deletion
**Remove Organization Restrictions:**
- In the root account, remove the `DenyLeavingOrganization` SCP from the account to be deleted
**Update Account Configuration:**
- Comment out the account line in `stacks/catalog/account.yaml`
- Apply changes to the account component:
```bash
atmos terraform apply account -s core-gbl-root
```
**Access the Account:**
- Look up the account's root user email address and password in 1Password
- If no password exists, request a password reset
- Open the email (accessible via `#aws-notifications` Slack channel) and click the reset link
- Set a new password and save it in 1Password
**Rename Email Address (Recommended):**
- Change the email address to something disposable by appending "deleted" and the date
- Use a "plus address" if not already using one
- Example: `aws+prod@cpco.co` → `aws+prod-deleted-2024-01-01@cpco.co`
- Navigate to Account Settings > Edit > Edit Email in the AWS console
- Validate the email address from the account dashboard
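The rename convention above can be sketched as a small shell helper (the function name and example address are hypothetical):

```shell
# Append "-deleted-<date>" to the local part of the account email
rename_aws_email() {
  email="$1"
  date="$2"
  local_part="${email%%@*}"   # e.g. aws+prod
  domain="${email##*@}"       # e.g. cpco.co
  echo "${local_part}-deleted-${date}@${domain}"
}

rename_aws_email "aws+prod@cpco.co" "2024-01-01"  # → aws+prod-deleted-2024-01-01@cpco.co
```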
**Complete Account Setup:**
- Set account contact information (if not inherited from Organizations)
- Accept the AWS Customer Agreement (if not inherited from Organizations)
### Delete the Account
You can only close up to 10% of your organization's accounts per month. A valid payment method is no longer required to close an account, but at the time of writing this is not supported via the API. Each account must be closed manually.
**Close Account in AWS Console:**
- Open the AWS Console and go to the root account for the Organization
- Navigate to Organizations > AWS accounts > select account > Close
- Check all boxes and enter Account ID > Close account
**Clean Up Infrastructure Configuration:**
- Remove the account from Terraform state for the account component
- Remove the account from `stacks/catalog/account.yaml`
- Follow standard process for opening a pull request and updating the codebase
- Apply changes to the account component:
```bash
atmos terraform apply account -s core-gbl-root
```
**Update AWS Configuration:**
- Regenerate AWS configuration files to remove the deleted account:
```bash
aws-config teams > rootfs/etc/aws-config/aws-config-teams
aws-config switch-roles > rootfs/etc/aws-config/aws-extend-switch-roles
```
- Commit and push the updated configuration files
## References
- [AWS Account Closure Guide](https://aws.amazon.com/premiumsupport/knowledge-center/close-aws-account/)
- [Terminate Resources Before Account Closure](https://aws.amazon.com/premiumsupport/knowledge-center/terminate-resources-account-closure/)
- [AWS Organizations Account Management](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html)
- [Automating AWS Account Deletion](https://onecloudplease.com/blog/automating-aws-account-deletion#deleting-an-aws-account) (including for comedic value) 🤣
- [How to Create and Setup AWS Accounts](/layers/accounts/tutorials/how-to-create-and-setup-aws-accounts)
- [cloud-nuke GitHub Repository](https://github.com/gruntwork-io/cloud-nuke)
---
## How to manage Account Settings
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
Manage and update AWS account settings and budgets by modifying and reapplying the account-settings component.
## Problem
We want to update Account Settings for a given AWS Account
## Solution
Update the `account-settings` component
Account Settings are managed by the `account-settings` component and deployed for each account. Update the
`account-settings` catalog and reapply the component.
For example to add password requirements, add the following to `stacks/catalog/account-settings.yaml`:
```yaml
components:
  terraform:
    account-settings:
      backend:
        s3:
          role_arn: null
      vars:
        enabled: true
        minimum_password_length: 20
        maximum_password_age: 120
```
Then reapply the `account-settings` component for the given account. The `example` tenant and `foo` stage are used in this example:
```bash
atmos terraform apply account-settings -s example-gbl-foo
```
### How to set Budgets
Budgets are also managed with the `account-settings` component. To create budgets, enable them in the `account-settings` component:
:::info
Budgets were added to the `account-settings` component in early 2022. Make sure the component contains `budgets.tf`. If
not, pull the latest from
[the upstream modules](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/account-settings).
:::
```yaml
components:
  terraform:
    account-settings:
      vars:
        enabled: true
        budgets_enabled: true
        budgets_notifications_enabled: true
        budgets_slack_webhook_url: https://url.slack.com/abcd/1234
        budgets_slack_username: AWS Budgets
        budgets_slack_channel: aws-budgets-notifications
        budgets:
          - name: 1000-total-monthly
            budget_type: COST
            limit_amount: "1000"
            limit_unit: USD
            time_unit: MONTHLY
          - name: s3-3GB-limit-monthly
            budget_type: USAGE
            limit_amount: "3"
            limit_unit: GB
            time_unit: MONTHLY
```
Then reapply the `account-settings` component for all accounts. This example only applies to one account; repeat this step for each account:
```bash
atmos terraform apply account-settings -s example-gbl-foo
```
---
## How to Register Pristine AWS Root Account
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Step-by-step instructions for setting up a new AWS root account, including prerequisites such as deciding on an email address format, provisioning a shared 1Password vault, and gathering necessary contact and billing information.
[REFARCH-60 - Register Pristine AWS Root Account](/layers/accounts/tutorials/how-to-register-pristine-aws-root-account/)
### Prerequisites
1. [REFARCH-51 - Decide on Email Address Format for AWS Accounts](/layers/accounts/design-decisions/decide-on-email-address-format-for-aws-accounts/)
2. [REFARCH-31 - Provision 1Password with Shared Vault](/layers/project/design-decisions/decide-on-1password-strategy/)
3. [REFARCH-471 - Decide on AWS Organization Strategy](/layers/accounts/design-decisions/decide-on-aws-organization-strategy/)
4. Company primary contact information
5. Company credit card and billing information
6. Company business mobile phone number you have access to use for SMS
7. Email address that supports [plus addressing](https://en.wikipedia.org/wiki/Email_address#Sub-addressing) (e.g.
aws+root@example.com)
### Instructions
:::info
See the official AWS Documentation for the most up-to-date instructions.
[https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/)
:::
1. Navigate to this link to create an AWS account:
[https://portal.aws.amazon.com/billing/signup#/start](https://portal.aws.amazon.com/billing/signup#/start)
2. Specify the email address defined in the design decision and append `+root` before the `@` sign (e.g. `ops+root@ourcompany.com`)
3. In AWS, every AWS account needs a unique email address. We use `+` addressing for each account for disambiguation.
Since this is the `root` account, we’re appending `+root`
4. `+` addressing is not universally supported. E.g. Gsuite supports it but Microsoft Exchange does not.
5. Generate a strong password and add it to the appropriate 1Password vault. Make sure it’s the vault you’ve shared with
Cloud Posse.
6. AWS account name will be `root`
7. Click `Continue (step 1 of 5)`
8. Select “How do you plan to use AWS?” radio button: `Business - for your work, school or organization`
9. Add the primary contact’s full name
10. Enter your company’s name as it appears on legal documentation
11. Enter the primary contact’s business phone number
12. Enter the company’s legal address
13. Click the link provided to read the terms `AWS Customer Agreement` and check the box
14. Click `Continue (step 2 of 5)`
15. Enter billing information
16. Click `Verify and Continue (step 3 of 5)`
17. Select “How should we send you the verification code?” radio button: `Text message (SMS)`
18. Enter a business mobile phone number that you have access to use. Ideally, this is a number that can forward text messages to your team (e.g. Google Voice or Twilio).
19. Complete the Security check
20. Click `Send SMS (step 4 of 5)`
21. Enter the verification code that was sent as an SMS message to the mobile phone number provided in step 16
22. Click `Continue (step 4 of 5)`
23. Select `Business support - From $100/month`
24. We recommend this support plan so we can use it to expedite account limit increases for the organization. This will
be useful throughout the engagement.
25. Click “Complete sign up”
26. Click on the button `Go to the AWS Management Console`
27. Select the radio button `Root user`
28. Enter the Root user email address and click `Next`
29. This is the same address we set up in step 2 (`ops+root@ourcompany.com`)
30. Complete the Security check and click `Submit`
31. Enter the password that was stored in 1Password for this account in step 3 and click `Sign in`
:::tip
Congratulations! You are now able to proceed with the rest of the cold start process.
:::
### Related articles
- [Decide on AWS Organization Strategy](/layers/accounts/design-decisions/decide-on-aws-organization-strategy)
- [AWS Documentation: How do I create and activate a new AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/)
| Related issues | |
| -------------- | --- |
---
## Set Up AWS Email Notifications
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Admonition from '@theme/Admonition';
Learn how to set up AWS account email notifications by routing emails from AWS accounts to a dedicated Slack channel. We cover using plus addressing for efficient email management and provide detailed steps for creating the Slack channel, generating its email address, and configuring email forwarding.
:::caution
Make sure to use the email address corresponding to the
[Decide on Email Address Format for AWS Accounts](/layers/accounts/design-decisions/decide-on-email-address-format-for-aws-accounts)
ADR.
:::
There should be an `[organization]-aws-notifications` Slack channel dedicated to emails addressed to the dedicated AWS address, e.g. `aws@[domain]`. Cloud Posse recommends using `+` addressing, such that every AWS account has its account email set to `aws+[account name]@[domain]`. Once emails addressed to `aws@[domain]` are routed to this Slack channel's email address, all emails addressed to any of the AWS accounts' emails will appear in this channel, thanks to the use of `+` addressing.
If the use of `+` addressing is not possible, a dedicated email address such as `aws.[account name]@[domain]` can be set
up for each AWS account, and a routing rule for each of these addresses to the AWS Notifications Slack channel's email
address can be created.
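As a quick sketch of how `+` addressing derives every account email from a single base address (the domain and account names below are placeholders):

```shell
base="aws"
domain="example.com"

# One unique email per AWS account, all delivered to the same mailbox
emails=$(for account in root audit security dev prod; do
  echo "${base}+${account}@${domain}"
done)
echo "$emails"
```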
The following is an example of how to set up this channel and configure email routing to the dedicated Slack channel's
email address:
1. ## Create the shared Slack channel (under Slack Connect):
(see also: [How to Provision Shared Slack Channels](/jumpstart/tutorials/how-to-provision-shared-slack-channels) )
2. ## Generate the Slack channel's email address (at the top of the newly-created Slack channel):
3. ## Set up email forwarding (example: G Suite / Google Workspace)
in **`Google Workspace -> Settings for Gmail -> Routing -> Recipient address map`:**
4. ## Manage Incoming Emails
Depending on your current Slack Workspace permissions, you may need to [manage incoming emails for your Slack workspace or organization](https://slack.com/help/articles/360053335433-Manage-incoming-emails-for-your-workspace-or-organization) and allow incoming email
---
## Legacy Account Map
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
The `account-map` component has been deprecated. The reference architecture now uses [Atmos Auth](https://atmos.tools/cli/auth) for authentication and [Atmos Functions](https://atmos.tools/core-concepts/stacks/templates/functions) for dynamic values, eliminating the need for `account-map` entirely.
## Why Deprecate Account Map?
The `account-map` component was originally designed to store AWS account metadata in Terraform state and provide dynamic lookups for account IDs, IAM roles, and other configuration. While functional, this approach had several limitations:
1. **Tight coupling** — Components depended on `account-map` remote state, creating deployment dependencies
1. **Greenfield only** — The pattern assumed Cloud Posse deployed all accounts, making brownfield adoption difficult
1. **Slower operations** — Every Terraform run required remote state lookups
1. **Complex bootstrapping** — Cold start required careful ordering of component deployments
## The New Approach
The refactored architecture replaces `account-map` with:
1. **Atmos stack variables** — Account IDs and configuration stored directly in stack configuration (no remote state)
1. **Atmos Auth** — Authentication handled before Terraform runs via `atmos auth login`
1. **Atmos Functions** — Dynamic values resolved at plan time using `!terraform.output` and other functions
1. **Simplified components** — Components work in both greenfield and brownfield environments
This approach enables all Cloud Posse components to work in brownfield environments where accounts already exist.
## Current Documentation
The accounts layer documentation has been updated for the new approach:
1. [Prepare AWS Organization](/layers/accounts/prepare-aws-organization/) — ClickOps setup before Terraform
1. [Initialize Terraform Backend](/layers/accounts/initialize-tfstate/) — Set up the S3 state backend
1. [Deploy Accounts](/layers/accounts/deploy-accounts/) — Create AWS accounts and configure settings
1. [Setup CloudTrail](/layers/accounts/setup-cloudtrail/) — Enable organization-wide audit logging
## Migrating from Account Map
If you have an existing deployment using the `account-map` component, see the migration guide:
- [Migrate from Account Map](/layers/project/tutorials/migrate-from-account-map/)
## See Also
1. [Atmos Auth](https://atmos.tools/cli/auth) — Authentication commands
1. [Atmos Functions](https://atmos.tools/core-concepts/stacks/templates/functions) — Dynamic value resolution
1. [Identity Layer](/layers/identity/) — IAM Identity Center and access management
---
## Structure of Terraform S3 State Backend Bucket
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
This guide explains the structure of a Terraform S3 state backend bucket, including the use of workspaces, key prefixes, and buckets. It details how the `backend.tf.json` file is used to configure the S3 backend for storing Terraform state, and how S3 native state locking provides consistency checking. The document provides examples and best practices for managing and accessing the Terraform state backend.
Understand the anatomy of a Terraform S3 state backend bucket and how workspaces, key prefixes and buckets are used.
### S3 Native State Locking
As of Terraform 1.10 and OpenTofu 1.8, the S3 backend supports native state locking using S3's conditional writes feature. This eliminates the need for a separate DynamoDB table for state locking.
To enable S3 native state locking, set `use_lockfile = true` in your backend configuration. This uses S3 conditional writes to create a `.tflock` file alongside your state file, providing the same consistency guarantees as DynamoDB locking without the additional infrastructure.
:::tip
S3 native state locking is the recommended approach for new deployments. It simplifies your infrastructure by removing the DynamoDB dependency while maintaining the same locking guarantees.
:::
### The `backend.tf.json` File
This file is programmatically generated by [Atmos](/resources/legacy/fundamentals/atmos) using all the capabilities of
[Stacks](/resources/legacy/fundamentals/stacks) to deep merge. Every component defines a `backend.tf.json`, which is what distinguishes it as a root module (as opposed to a terraform child module). The backend tells terraform where to access the last known deployed state of infrastructure for the given component. Since the backend is stored in S3, it's easily accessed in a distributed manner by anyone running terraform.
:::info
An identical `backend.tf.json` file is used by all environments (stacks). Environments are selected using the
`terraform workspace` command, which happens automatically when using `atmos` together with the `--stack` argument.
:::
For reference, this is the anatomy of the backend configuration: (note this is just a JSON representation of HCL)
```json
{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "acme-ue2-root-tfstate",
        "encrypt": true,
        "key": "terraform.tfstate",
        "profile": "acme-gbl-root-terraform",
        "region": "us-east-2",
        "use_lockfile": true,
        "workspace_key_prefix": "vpc"
      }
    }
  }
}
```
:::note
- Either `profile` or `role_arn` can be used for authentication
- The `use_lockfile` setting enables S3 native state locking (requires Terraform 1.10+ or OpenTofu 1.8+)
:::
### S3 Backend
The S3 bucket is created in the cold start using the [tfstate-backend](/components/library/aws/tfstate-backend/)
component provisioned in the root account.
The state format is `s3://{bucket_name}/{component}/{stack}/terraform.tfstate`
- The `bucket name` format is `{namespace}-{optional tenant}-{environment}-{stage}-tfstate`
- We deploy this bucket in the `root` account so here are some example bucket names
`acme-ue2-root-tfstate` (without tenant) `acme-mgmt-ue2-root-tfstate` (with `tenant: mgmt`)
- The `component` name provided is used as the terraform state’s `workspace_key_prefix` in each component’s
`backend.tf.json`. Therefore, this will be the first s3 key after the bucket name.
- The `stack` is where the component is provisioned and the name of the workspace created
- Finally, the `terraform.tfstate` is the `key` provided in each component’s `backend.tf.json`
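The path construction described above can be sketched in shell:

```shell
# Context values for an example stack (no tenant)
namespace="acme"
environment="ue2"
stage="root"
component="vpc"
stack="ue2-prod"

# {namespace}-{environment}-{stage}-tfstate
bucket="${namespace}-${environment}-${stage}-tfstate"
# s3://{bucket_name}/{component}/{stack}/terraform.tfstate
state_path="s3://${bucket}/${component}/${stack}/terraform.tfstate"
echo "$state_path"  # → s3://acme-ue2-root-tfstate/vpc/ue2-prod/terraform.tfstate
```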
The following terraform commands are run by `atmos` for the backend `s3://acme-ue2-root-tfstate/vpc/ue2-prod/terraform.tfstate`:
```
atmos terraform deploy vpc --stack ue2-prod
| atmos will create the input variables from the YAML and run the following commands
| -- terraform init
| -- terraform workspace select ue2-prod
| -- terraform plan
| -- terraform apply
```
To better visualize what’s going on, we recommend running the commands below to explore your own state bucket. Make sure
to use the correct `profile` for your organization (`acme-gbl-root-admin` is just a placeholder).
Find the bucket. It should contain `tfstate` in its name. In the example below, we can see the
[vpc](/components/library/aws/vpc/) component is deployed to `use2-auto`, `use2-corp`, `use2-dev`, `use2-qa`,
`use2-sbx01`, `use2-staging`. As you can see, the `workspace` is constructed as the `{environment}-{stage}`. This
setting is defined in the `atmos.yaml` config with the `stacks.name_pattern` setting (see [Atmos](/resources/legacy/fundamentals/atmos)
for all settings).
```
$ aws --profile acme-gbl-root-admin \
    s3 ls s3://{bucket_name} --recursive
...
2021-11-01 19:53:48 120926 vpc/use2-auto/terraform.tfstate # workspace key prefix: vpc, workspace name is `use2-auto`
2021-11-01 19:49:12 123604 vpc/use2-corp/terraform.tfstate
2021-11-01 19:50:18 123486 vpc/use2-dev/terraform.tfstate
2021-11-01 19:48:39 123354 vpc/use2-qa/terraform.tfstate
2021-11-01 19:49:46 123735 vpc/use2-sbx01/terraform.tfstate
2021-11-01 19:50:50 124014 vpc/use2-staging/terraform.tfstate
```
List the stacks where the `vpc` component has state:
```
aws --profile acme-gbl-root-admin \
s3 ls s3://{bucket_name}/vpc/
```
:::note
If a component is mistakenly deployed somewhere and then destroyed, a leftover `terraform.tfstate` object with a small
file size remains in the bucket. So while listing the bucket is a good way to search for backends, it's not a reliable
way to determine where a component is actually deployed. Also, the S3 bucket has versioning enabled, ensuring we can
always (manually) revert to a previous state if need be.
:::
### DynamoDB Locking (Optional)
:::info
DynamoDB locking is optional and primarily retained for backwards compatibility with existing deployments. For new deployments, we recommend using S3 native state locking with `use_lockfile = true` instead.
:::
If you have an existing deployment using DynamoDB for state locking, or prefer to use DynamoDB, you can configure it by adding the `dynamodb_table` field to your backend configuration:
```json
{
"terraform": {
"backend": {
"s3": {
"bucket": "acme-ue2-root-tfstate",
"dynamodb_table": "acme-ue2-root-tfstate-lock",
"encrypt": true,
"key": "terraform.tfstate",
"region": "us-east-2",
"workspace_key_prefix": "vpc"
}
}
}
}
```
To find and inspect a DynamoDB lock table:
```bash
# Find the table (should contain `tfstate-lock` in its name)
aws --profile acme-gbl-root-admin \
dynamodb list-tables
```
```bash
# Get a LockID
aws --profile acme-gbl-root-admin \
dynamodb get-item \
--table-name {table_name} \
--key '{"LockID": {"S": "{bucket_name}/{component}/{stack}/terraform.tfstate-md5"}}'
```
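As a sanity check of the `LockID` format in that query, it can be composed from the same naming parts used for the state key (placeholder values from this page):

```bash
# The digest entry's LockID is the state object's full path plus an "-md5" suffix
bucket="acme-ue2-root-tfstate"
component="vpc"; stack="ue2-prod"
lock_id="${bucket}/${component}/${stack}/terraform.tfstate-md5"
echo "${lock_id}"
# → acme-ue2-root-tfstate/vpc/ue2-prod/terraform.tfstate-md5
```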
### References
- [Terraform S3 backend configuration documentation](https://www.terraform.io/docs/language/settings/backends/s3.html)
---
## Atmos Pro
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
Atmos Pro is the recommended solution for automating Terraform and OpenTofu workflows in GitHub Actions. It provides enterprise-grade GitOps capabilities including ordered deployments, drift detection, and policy controls — all without the cost and vendor lock-in of traditional TACOS platforms.
- Recommended approach for automating Terraform in the reference architecture
- Native GitHub Actions integration with no external dependencies
- Replaces legacy GitHub Actions GitOps workflows
## The Problem
While GitHub Actions provides a solid foundation for CI/CD, it lacks the specialized tooling needed for complex infrastructure deployments. Teams face several challenges:
1. **Limited Deployment Control** — GitHub Actions doesn't provide built-in mechanisms for ordered deployments or dependency management between infrastructure components
1. **Poor Visibility** — Understanding the impact of changes across your infrastructure requires manual investigation
1. **Insufficient Guardrails** — Basic GitHub Actions workflows don't enforce deployment policies or prevent dangerous operations
1. **Drift Management** — Keeping infrastructure in sync with code requires additional tooling and manual processes
1. **Multi-Cloud Complexity** — Managing deployments across different cloud providers adds another layer of complexity
Traditional Terraform Automation and Collaboration Software (TACOS) solutions like Terraform Cloud, Spacelift, or Env0 attempt to solve these problems but often come with significant costs and vendor lock-in.
## Our Solution
Atmos Pro enhances GitHub Actions with enterprise-grade features specifically designed for infrastructure deployment:
1. **Ordered Deployments** — Ensure infrastructure components are deployed in the correct sequence based on their dependencies
1. **Dependency Visualization** — Automatically generate and maintain dependency graphs for your infrastructure
1. **[Drift Detection](/layers/atmos-pro/drift-detection/)** — Continuously monitor and report on infrastructure drift with automated remediation options
1. **Enhanced Guardrails** — Implement policy controls and approval gates to prevent dangerous operations
1. **Native GitOps** — Leverage Git as the single source of truth with full audit trails and change history
1. **Beautiful Job Summaries** — Clear, actionable insights into deployment status and changes
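As an example of how ordering is expressed, dependencies are declared per component in Atmos stack configuration. Below is a minimal sketch, assuming the Atmos `settings.depends_on` schema and illustrative component names:

```yaml
components:
  terraform:
    eks:
      settings:
        # Deploy the VPC before this component
        depends_on:
          1:
            component: vpc
```

With a declaration like this, plans and applies for `vpc` can be dispatched ahead of `eks`.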
## How It Works
Atmos Pro integrates seamlessly with your GitOps workflow, providing automated infrastructure planning and deployment through two main processes:
### Pull Request Workflow
When a developer creates a pull request, Atmos Pro automatically triggers the planning process:
1. **Developer Makes a Change** — Infrastructure code is modified in a feature branch
1. **Code Is Pushed** — Changes are pushed to the feature branch
1. **GitHub Actions Trigger** — Atmos affected stacks are identified
1. **Atmos Uploads** — Affected stacks information is uploaded
1. **Atmos Pro Dispatches** — Plan workflows are triggered for affected components
1. **Status Updates** — Atmos Pro maintains a status comment showing the progress of plans
### Merge Workflow
When a pull request is merged, Atmos Pro automatically handles the deployment:
1. **Pull Request Is Merged** — Changes are merged into the main branch
1. **GitHub Actions Trigger** — Atmos affected stacks are identified
1. **Atmos Uploads** — Affected stacks information is uploaded
1. **Atmos Pro Dispatches** — Apply workflows are triggered for affected components
1. **Status Updates** — Atmos Pro maintains a status comment showing the progress of deployments
## Drift Detection
Infrastructure drift occurs when your deployed resources no longer match your Terraform configuration — whether from manual console changes, external automation, or incomplete applies. Left unchecked, drift creates security vulnerabilities, compliance violations, and deployment failures.
Atmos Pro provides automated drift detection that runs scheduled Terraform plans against your infrastructure, identifies changes, and offers multiple remediation paths including auto-remediation, manual review via pull request, or accepting changes into your code.
See [Drift Detection](/layers/atmos-pro/drift-detection/) for configuration details and best practices.
## Migrating from Legacy GitOps
If you're currently using the legacy GitHub Actions GitOps workflows, Atmos Pro provides a simplified and more powerful alternative. The legacy approach required complex workflow configurations and custom GitHub Actions. Atmos Pro replaces this with a streamlined setup that handles dependency ordering, drift detection, and policy controls automatically.
See the [Migration Guide](/layers/atmos-pro/tutorials/migrate-from-github-actions-gitops/) for step-by-step instructions.
## Why a Hosted Service?
Atmos Pro exists to solve problems that can't realistically be solved inside a CLI. These are capabilities that physically require a hosted control plane:
1. **Ordered Dependencies** — Ensuring components deploy in the correct sequence across distributed workflows
1. **Coordinated Rollouts** — Workflow dispatching that spans multiple GitHub Actions runs
1. **Drift Detection** — Scheduled infrastructure scanning and tracking over time
1. **Distributed Locking** — Workspace and stack-level locking across concurrent operations
The open-source [Atmos CLI](https://atmos.tools) will always remain fully featured for local and CI/CD workflows. Atmos Pro is its hosted counterpart, designed for teams that need orchestration, coordination, and visibility across large infrastructures.
:::info Self-Hosted Option
We may eventually introduce a self-hosted commercial edition for regulated or air-gapped environments. For now, Atmos Pro is SaaS only.
:::
## References
1. [Setup Documentation](/layers/atmos-pro/setup/) — Learn how to set up and configure Atmos Pro for your infrastructure
1. [atmos-pro.com](https://www.atmos-pro.com) — Full documentation for Atmos Pro features and capabilities
1. [atmos.tools](https://www.atmos.tools) — Core Atmos CLI documentation
Set up Atmos Pro for your infrastructure repository to automate Terraform workflows with enterprise-grade controls.
Setup Atmos Pro
---
## Deploy Plan File Storage with Atmos and Terraform
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import CollapsibleText from '@site/src/components/CollapsibleText';
import CodeBlock from '@theme/CodeBlock';
import Note from '@site/src/components/Note';
Deploy the AWS infrastructure required for Atmos Pro using Atmos and Terraform. This guide walks through deploying the GitHub OIDC provider and IAM roles needed for GitHub Actions to authenticate with AWS, along with the S3 bucket and DynamoDB table for plan file storage.
:::info GitOps Terminology
The workflows and components currently use "gitops" terminology. This will be updated to "atmos-pro" in a future release.
:::
- Terraform state backend deployed ([Initialize Terraform State](/layers/accounts/initialize-tfstate/))
- AWS accounts provisioned ([Deploy Accounts](/layers/accounts/deploy-accounts/))
- Atmos Auth configured for local deployments ([How to Log Into AWS](/layers/identity/how-to-log-into-aws/))
## Overview
Atmos Pro dispatches GitHub Actions workflows that run in **your** GitHub repository. These workflows require three categories of AWS infrastructure:
### Plan File Storage
S3 bucket and DynamoDB table to store Terraform plan files between the plan and apply phases, plus an IAM role to access them:
- **`s3-bucket/gitops`** — Stores plan files generated by GitHub Actions during `terraform plan` and retrieves them during `terraform apply`. This ensures the exact plan reviewed in a PR is what gets applied on merge. Created with the `s3-bucket` component.
- **`dynamodb/gitops`** — Tracks plan file metadata and maps git SHAs to their corresponding plan files, ensuring the correct plan is retrieved for each apply. This guarantees that merging a PR applies exactly the plan that was reviewed, even if multiple plans exist. Created with the `dynamodb` component.
- **`iam-role/gitops`** — Grants GitHub Actions permission to read and write plan files in S3 and DynamoDB. This role is assumed via OIDC and is deployed only in the account containing the plan storage resources. Created with the `iam-role` component.
### Terraform Execution Roles
IAM roles deployed in **every account** that allow GitHub Actions to run Terraform plan and apply:
- **`iam-role/planner`** — Read-only role for running `terraform plan`. Grants permissions to read Terraform state and AWS resources but not modify them. Used by plan workflows to safely preview changes without risk of accidental modifications. Created with the `iam-role` component.
- **`iam-role/terraform`** — Write role for running `terraform apply`. Grants full permissions to create, modify, and delete AWS resources as defined in your Terraform configurations. Used by apply workflows after changes are approved and merged. Created with the `iam-role` component.
These roles are configured through [Atmos Auth](/layers/identity/atmos-auth/) and deployed as part of the [Identity layer](/layers/identity/).
### GitHub OIDC Provider
Enables GitHub Actions to authenticate with AWS without long-lived credentials. The OIDC provider must be deployed in **every account** where GitHub Actions needs to assume roles.
- **`github-oidc-provider`** — Establishes a trust relationship between GitHub and AWS, allowing GitHub Actions workflows to assume IAM roles without storing AWS credentials as secrets. GitHub's OIDC tokens are exchanged for temporary AWS credentials. Must be deployed in each account where roles will be assumed. Created with the `github-oidc-provider` component.
The GitHub OIDC provider is likely already deployed as part of the [Identity layer](/layers/identity/).
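To illustrate the trust relationship, a role trusting this provider carries an assume-role policy roughly like the following. The account ID and repository are placeholders; the components generate the actual policy for you:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111111111111:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:acme/infrastructure:*"
        }
      }
    }
  ]
}
```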
## Configuration
### Deploy S3 Bucket and DynamoDB Table
The plan file storage requires an S3 bucket to store plan files and a DynamoDB table to track metadata and map git SHAs to plans.
Below is an example only. The latest version will be included with the reference architecture package.
{`components:
terraform:
# S3 Bucket for storing Terraform Plans
gitops/s3-bucket:
settings:
pro:
enabled: false
metadata:
component: s3-bucket
vars:
name: gitops-plan-storage
allow_encrypted_uploads_only: false
# DynamoDB table used to store metadata for Terraform Plans
gitops/dynamodb:
settings:
pro:
enabled: false
metadata:
component: dynamodb
vars:
name: gitops-plan-storage
# This key (case-sensitive) is required for the cloudposse/github-action-terraform-plan-storage action
hash_key: id
range_key: ""
# Only these 2 attributes are required for creating the GSI,
# but there will be several other attributes on the table itself
dynamodb_attributes:
- name: 'createdAt'
type: 'S'
- name: 'pr'
type: 'N'
# This GSI is used to Query the latest plan file for a given PR.
global_secondary_index_map:
- name: pr-createdAt-index
hash_key: pr
range_key: createdAt
projection_type: ALL
non_key_attributes: []
read_capacity: null
write_capacity: null
# Auto delete old entries
ttl_enabled: true
ttl_attribute: ttl`}
### Configure GitHub Repository Trust
All IAM roles (both GitOps and Terraform execution) need to trust your GitHub repository.
Below is an example only. The latest version will be included with the reference architecture package.
{`components:
terraform:
iam-role/gitops:
metadata:
component: iam-role
vars:
enabled: true
name: gitops
role_description: |
Role for GitHub Actions to access the GitOps resources, such as the S3 Bucket and DynamoDB Table.
# Grants access to GitHub Actions via OIDC
github_oidc_provider_enabled: true
github_oidc_provider_arn: !terraform.state github-oidc-provider oidc_provider_arn
trusted_github_org: acme
trusted_github_repos:
- acme/infrastructure
policy_statements:
AllowDynamodbAccess:
effect: "Allow"
actions:
- "dynamodb:List*"
- "dynamodb:DescribeReservedCapacity*"
- "dynamodb:DescribeLimits"
- "dynamodb:DescribeTimeToLive"
resources:
- "*"
AllowDynamodbTableAccess:
effect: "Allow"
actions:
- "dynamodb:BatchGet*"
- "dynamodb:DescribeStream"
- "dynamodb:DescribeTable"
- "dynamodb:Get*"
- "dynamodb:Query"
- "dynamodb:Scan"
- "dynamodb:BatchWrite*"
- "dynamodb:CreateTable"
- "dynamodb:Delete*"
- "dynamodb:Update*"
- "dynamodb:PutItem"
resources:
- !terraform.state gitops/dynamodb core-use2-auto ".table_arn + ""/*"""
- !terraform.state gitops/dynamodb core-use2-auto table_arn
AllowS3Actions:
effect: "Allow"
actions:
- "s3:ListBucket"
resources:
- !terraform.state gitops/s3-bucket core-use2-auto bucket_arn
AllowS3ObjectActions:
effect: "Allow"
actions:
- "s3:*Object"
resources:
- !terraform.state gitops/s3-bucket core-use2-auto ".bucket_arn + ""/*"""`}
The Terraform execution roles (`iam-role/planner` and `iam-role/terraform`) use the same trust pattern and are configured in the [Identity layer](/layers/identity/deploy/).
### Verify GitHub Workflow Permissions
Your GitHub Action workflows need specific permissions to authenticate via OIDC:
```yaml
permissions:
id-token: write # Required for requesting the JWT
contents: read # Required for actions/checkout
```
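With those permissions in place, a workflow step along the following lines exchanges the OIDC token for temporary credentials. The role ARN and region are placeholders; the packaged workflows already include an equivalent step:

```yaml
- name: Configure AWS credentials via OIDC
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::111111111111:role/acme-core-gbl-auto-gitops
    aws-region: us-east-2
```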
These permissions are already configured in the reference architecture workflows.
## Deployment
### Vendor Components
The GitOps stacks depend on components that may already exist in your component library (`s3-bucket` and `dynamodb`) and add new components for GitHub OIDC authentication. Vendor these components with the included Atmos workflow:
Alternatively, use [Atmos Vendoring](https://atmos.tools/core-concepts/components/vendoring) directly.
### Deploy Infrastructure
Deploy the Atmos Pro infrastructure components:
This workflow deploys the **Plan File Storage** and **GitHub OIDC** integration:
1. **`github-oidc-provider`** — Creates the OIDC provider in this account if not already deployed
1. **`s3-bucket/gitops`** — S3 bucket for storing Terraform plan files
1. **`dynamodb/gitops`** — DynamoDB table for plan file metadata and locking
1. **`gitops-iam-policy`** — IAM policy granting access to plan storage resources
1. **`iam-role/gitops`** — IAM role for accessing plan file storage (single account)
:::info Terraform Execution Roles
The `iam-role/planner` and `iam-role/terraform` roles are deployed separately as part of the [Identity layer](/layers/identity/deploy/). These roles exist in every account and are configured through Atmos Auth.
:::
## Verification
After deployment, verify the GitOps infrastructure is correctly configured:
- GitHub OIDC provider exists in every account
- S3 bucket for plan file storage is created
- DynamoDB table for plan file metadata is created
- IAM role `gitops` exists and trusts your GitHub repository
- IAM policy grants access to the plan storage resources
- IAM roles `planner` and `terraform` exist in every account (deployed via [Identity layer](/layers/identity/deploy/))
You can test the setup by creating a pull request — Atmos Pro will attempt to run plans using the deployed infrastructure.
With the AWS infrastructure deployed, verify your Atmos Pro setup by testing the GitHub integration.
Verify Setup
---
## Drift Detection
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
import CollapsibleText from '@site/src/components/CollapsibleText';
import PartialListInstances from '@site/examples/snippets/.github/workflows/atmos-pro-list-instances.yaml';
import PartialTerraformPlan from '@site/examples/snippets/.github/workflows/atmos-pro-terraform-plan.yaml';
import PartialTerraformApply from '@site/examples/snippets/.github/workflows/atmos-pro-terraform-apply.yaml';
Drift detection identifies when your deployed infrastructure no longer matches your Terraform configuration. Atmos Pro provides automated drift detection with scheduled scans, detailed reports, and remediation workflows — all integrated into your existing GitOps process.
- GitHub workflow to upload instance list to Atmos Pro
- Atmos stack settings for drift detection and remediation
- Scheduled drift detection runs
## What is Infrastructure Drift?
Infrastructure drift occurs when the actual state of your cloud resources diverges from the desired state defined in your Terraform code. This can happen for several reasons:
1. **Manual Changes** — Someone modifies resources directly in the cloud console
1. **External Automation** — Other tools or scripts modify resources outside of Terraform
1. **Provider Updates** — Cloud provider changes default values or resource behavior
1. **Incomplete Applies** — Terraform runs that fail partway through
Undetected drift creates significant risks including security vulnerabilities, compliance violations, deployment failures, and operational confusion as teams lose confidence in their infrastructure code as the source of truth.
## How Drift Detection Works
Drift detection in Atmos Pro involves three coordinated workflows:
### 1. Instance Registration
First, Atmos Pro needs to know what instances (component + stack combinations) exist in your infrastructure. The **list-instances** workflow uploads this inventory to Atmos Pro.
```mermaid
flowchart LR
A[GitHub Actions] -->|atmos list instances --upload| B[Atmos Pro]
B -->|Stores| C[Instance Registry]
```
This workflow runs on a schedule (e.g., nightly) or on-demand to keep the instance list current when components are added or removed.
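The core of that workflow is the upload step shown in the diagram above. A sketch only; the packaged workflow handles checkout, authentication, and Atmos setup, and the token secret name here is an assumption:

```yaml
- name: Upload instance inventory to Atmos Pro
  run: atmos list instances --upload
  env:
    # Assumed secret name — check the packaged workflow for the exact variable
    ATMOS_PRO_TOKEN: ${{ secrets.ATMOS_PRO_TOKEN }}
```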
### 2. Drift Detection
When a drift detection run is triggered from Atmos Pro, it dispatches the **detect** workflow for each registered instance. This workflow runs `terraform plan` and reports the results back to Atmos Pro.
```mermaid
flowchart LR
A[Atmos Pro] -->|Dispatch for each instance| B[GitHub Actions]
B -->|terraform plan| C[AWS]
B -->|Upload results| A
A -->|Mark as| D{Drifted?}
```
Instances with detected changes are marked as "drifted" in the Atmos Pro dashboard.
### 3. Remediation
For any drifted instance, you can trigger remediation directly from the Atmos Pro UI. This dispatches the **remediate** workflow, which runs `terraform apply` to restore the desired state.
```mermaid
flowchart LR
A[Atmos Pro UI] -->|Trigger remediation| B[GitHub Actions]
B -->|terraform apply| C[AWS]
B -->|Report success| A
```
## Configuration
### Add the List Instances Workflow
Add this workflow to your infrastructure repository. It runs on a schedule to keep Atmos Pro's instance registry current.
{PartialListInstances}
:::caution Update the IAM Role ARN
Update the `role-to-assume` with your actual planner role ARN. The workflow needs read access to Terraform state to enumerate instances.
:::
### Configure the Plan Workflow for Drift Detection
The existing `atmos-pro-terraform-plan.yaml` workflow already supports drift detection through the `upload_status` input. When `upload_status: true`, the plan results are uploaded to Atmos Pro for drift tracking.
{PartialTerraformPlan}
Key parameters for drift detection:
- **`upload_status`** — When `true`, uploads plan results to Atmos Pro (used by detect workflow)
- **`atmos-pro-upload-status`** — The GitHub Action input that enables status upload
### Configure Atmos Stack Settings
Add drift detection configuration to your Atmos stack defaults. This tells Atmos Pro which workflows to dispatch for detection and remediation.
{`vars:
namespace: acme
# Workflow configuration anchors for reuse
plan-wf-config: &plan-wf-config
atmos-pro-terraform-plan.yaml:
inputs:
component: "{{ .atmos_component }}"
stack: "{{ .atmos_stack }}"
apply-wf-config: &apply-wf-config
atmos-pro-terraform-apply.yaml:
inputs:
component: "{{ .atmos_component }}"
stack: "{{ .atmos_stack }}"
github_environment: "{{ .vars.tenant }}-{{ .vars.stage }}"
# Drift detection uses the plan workflow with upload_status enabled
detect-wf-config: &detect-wf-config
atmos-pro-terraform-plan.yaml:
inputs:
component: "{{ .atmos_component }}"
stack: "{{ .atmos_stack }}"
upload_status: true
settings:
pro:
enabled: true
# Standard PR workflows
pull_request:
opened:
workflows: *plan-wf-config
synchronize:
workflows: *plan-wf-config
reopened:
workflows: *plan-wf-config
merged:
workflows: *apply-wf-config
# Drift detection configuration
drift_detection:
enabled: true
detect:
workflows: *detect-wf-config
remediate:
workflows: *apply-wf-config`}
Key configuration:
- **`detect.workflows`** — Uses the plan workflow with `upload_status: true` to report drift
- **`remediate.workflows`** — Uses the apply workflow to fix drift (same as PR merge workflow)
### Configure Repository Permissions
Drift detection requires additional repository permissions in Atmos Pro:
1. Go to [atmos-pro.com](https://atmos-pro.com) and log in to your organization
1. From the dashboard, select your infrastructure repository
1. Click **Quick Actions** in the top right corner
1. Select **Repository Permissions**
1. Add the following permissions:
- **Instances Create** — Allows the list-instances workflow to register new instances
- **Instances Update** — Allows drift detection to update instance status
### Configure Drift Detection Schedule
Create a drift detection schedule in the Atmos Pro dashboard:
1. Go to [atmos-pro.com](https://atmos-pro.com) and log in to your organization
1. From the dashboard, select your infrastructure repository
1. Click **Quick Actions** in the top right corner
1. Select **Schedule Drift Detection**
1. Configure the schedule frequency and save
Common schedule frequencies:
- **Weekly** — Run once a week to audit infrastructure drift (recommended for most teams)
- **Daily** — Run nightly for environments requiring tighter compliance
:::tip Pausing Schedules
Drift detection schedules can be paused and resumed at any time from the same Quick Actions menu. This is useful during planned maintenance windows or major migrations.
:::
## Workflow Summary
| Workflow | Purpose | Trigger | Frequency |
| :------- | :------ | :------ | :-------- |
| `atmos-pro-list-instances.yaml` | Upload instance inventory | Schedule + on-demand | Nightly or weekly |
| `atmos-pro-terraform-plan.yaml` | Detect drift (with `upload_status`) | Atmos Pro dispatch | Per drift detection run |
| `atmos-pro-terraform-apply.yaml` | Remediate drift | Atmos Pro dispatch | On-demand from UI |
## Operational Workflow
The intended operational workflow for drift detection:
1. **List instances runs on schedule** — Keeps Atmos Pro's instance registry current (e.g., nightly)
1. **Drift detection runs on schedule** — Scans all instances for drift (e.g., weekly)
1. **Review drifted components** — Audit the list of drifted instances in Atmos Pro dashboard
1. **Remediate on demand** — Fix individual drifted components from the UI as needed
This approach provides regular visibility into infrastructure drift while allowing controlled, deliberate remediation rather than automatic fixes.
## Troubleshooting
### No instances appearing in Atmos Pro
- Verify the `atmos-pro-list-instances.yaml` workflow is running successfully
- Check that `ATMOS_PRO_WORKSPACE_ID` is set in your repository variables
- Ensure the IAM role has permissions to read Terraform state
### Drift detection not running
- Verify `settings.pro.drift_detection.enabled: true` in your stack defaults
- Check that the drift detection schedule is configured in Atmos Pro
- Ensure the detect workflow configuration matches your actual workflow filename
### Remediation failing
- Verify the remediate workflow configuration matches your apply workflow
- Check that GitHub environments are properly configured for protected stages
- Review the GitHub Actions logs for specific errors
Atmos Pro is fully configured! Continue with deploying the database layer—create a PR and let Atmos Pro handle the planning and deployment!
Provision Databases
---
## Setup Atmos Pro
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
import CollapsibleText from '@site/src/components/CollapsibleText';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import PartialAtmosPro from '@site/examples/snippets/.github/workflows/atmos-pro.yaml';
import PartialAtmosProTerraformPlan from '@site/examples/snippets/.github/workflows/atmos-pro-terraform-plan.yaml';
import PartialAtmosProTerraformApply from '@site/examples/snippets/.github/workflows/atmos-pro-terraform-apply.yaml';
Setting up Atmos Pro is straightforward: install the GitHub App, grant repository permissions, set up the workflows, and deploy the AWS infrastructure. This guide provides an overview of each step, with detailed instructions available in the linked pages.
## Setup Process
- Admin access to both GitHub and AWS
- Familiarity with the detailed instructions in each linked guide
- A small test change ready to validate the setup
### Sign up for Atmos Pro
The first step is to sign up for Atmos Pro. The sign-up process includes creating a workspace in the Atmos Pro web console and installing the Atmos Pro GitHub App into your infrastructure repository. This sets up the connection between your repository and Atmos Pro.
- Sign up for Atmos Pro
- Create or join a workspace
- Install the Atmos Pro GitHub App
- Import your repositories
For step-by-step instructions, see the [official Atmos Pro installation guide](https://atmos-pro.com/docs/install).
### Grant Repository Permissions
Grant repository permissions in Atmos Pro to enable ordered deployments, drift detection, and other features.
Navigate to your repository in the Atmos Pro dashboard, click **Quick Actions**, and select **Repository Permissions**. Add the following permissions:
| Permission | Workflow | Branch | Environment |
| :--------- | :------- | :----- | :---------- |
| `Affected Stacks Create` | `*` | `*` | `*` |
| `Instances Create` | `*` | `*` | `*` |
| `Instances Update` | `*` | `*` | `*` |
- **Affected Stacks Create** — Required for PR workflows to report affected stacks
- **Instances Create** — Required for drift detection to register instances
- **Instances Update** — Required for drift detection to update instance status
For detailed instructions, see the [official Atmos Pro repository permissions guide](https://atmos-pro.com/docs/ordered-deployments/repository-permissions).
### Set Up Workflows
The third step is to configure the workflows in your repository. This includes reviewing the generated workflows, setting up environment variables, and configuring branch protection rules.
- Review the 3 GitHub Action workflows
- Add the Workspace ID to GitHub repository variables
- Merge the workflows into the default branch
_The dispatched workflows need to exist in the default branch before they can be triggered!_
:::info
These workflows are already included with the reference architecture. Review them to ensure they meet your requirements.
:::
The following workflows should be added to your repository:
This workflow is triggered by GitHub on pull request events (opened, synchronized, reopened) and when the PR is merged (closed). It uses the `atmos describe affected` command to identify affected components and upload them to Atmos Pro.
{PartialAtmosPro}
This workflow is dispatched by Atmos Pro to create Terraform plans for affected components. It is a reusable workflow that takes stack and component as inputs.
{PartialAtmosProTerraformPlan}
This workflow is dispatched by Atmos Pro to apply Terraform changes for affected components. It is a reusable workflow that takes stack and component as inputs.
{PartialAtmosProTerraformApply}
For additional workflow setup instructions, see the [official Atmos Pro workflow configuration guide](https://atmos-pro.com/docs/ordered-deployments/github-workflow-config).
### Deploy AWS Infrastructure
Atmos Pro doesn't run Terraform or Atmos itself. It dispatches GitHub Actions that **you control**. To run Terraform in those GitHub Actions, you need to set up a few things in your cloud environment:
- [x] **State Backend** (S3 + DynamoDB) to store Terraform state and enable state locking
- [x] **OIDC Integration** with GitHub for workflows to authenticate with your cloud provider
- [ ] **Plan File Storage** (S3 + DynamoDB) to persist Terraform plan outputs for review and approvals
If you've been following along with the reference architecture, you should already have the Terraform State Backend provisioned. This guide walks through deploying the GitHub OIDC provider and IAM roles needed for GitHub Actions to authenticate with AWS.
Deploy AWS Infrastructure
:::tip Alternative: CloudFormation
All requirements can also be deployed with CloudFormation. This option is not included by default with the reference architecture but may be useful for organizations that prefer CloudFormation or need to bootstrap before Terraform is available.
See [Deploy with CloudFormation](/layers/atmos-pro/tutorials/deploy-with-cloudformation) for details.
:::
## Verification
After completing all four steps, verify the setup as follows:
### Test GitHub Integration
- Create a new pull request with a small stack change
- The Atmos Pro GitHub App will automatically comment on the PR
- The comment will show the status of affected components
- As workflows are dispatched for each component, the comment will automatically update
### Trigger a Plan
- In the new pull request, change a value for any component. For example, add a tag to an S3 bucket.
- The `atmos-pro.yaml` workflow will discover the newly affected stack and trigger Atmos Pro.
- Atmos Pro will run Atmos Terraform Plan for the affected stack.
- As the workflow is executed, Atmos Pro will update the comment on the PR with the plan status.
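For illustration, a stack change like the following would mark the component as affected; the component and tag names here are hypothetical:

```yaml
components:
  terraform:
    s3-bucket/example:        # hypothetical component name
      vars:
        tags:
          AtmosProTest: "true"   # adding a tag is enough to trigger a plan
```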
### Merge the PR
- Now try merging the PR
- Again, the `atmos-pro.yaml` workflow will discover the affected stacks and trigger Atmos Pro.
- This time Atmos Pro will determine this is a "merged" event and run Atmos Terraform Apply.
- Finally, Atmos Pro will update the comment on the PR with the apply status.
With Atmos Pro configured and verified, enable drift detection to continuously monitor your infrastructure for unauthorized changes.
Configure Drift Detection
---
## Deploy Infrastructure with CloudFormation
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
import Admonition from '@theme/Admonition';
Deploy the required AWS infrastructure for Atmos Pro with just a few clicks using CloudFormation. This approach provides a quick and straightforward way to set up all necessary resources including state backend, plan file storage, and GitHub OIDC integration.
- Deploy complete Terraform backend infrastructure in a single CloudFormation stack
- Set up S3 buckets for state and plan file storage
- Configure DynamoDB tables for state locking and plan file management
- Create GitHub OIDC integration for secure authentication
- Configure Atmos Pro to use the deployed infrastructure
## Overview
Atmos Pro doesn't run Terraform or Atmos itself. It dispatches GitHub Actions that **you control**. To run Terraform in those GitHub Actions, you need to set up a few things in your cloud environment:
- **State Backend** (S3 + DynamoDB) to store Terraform state and enable state locking
- **Plan File Storage** (S3 + DynamoDB) to persist Terraform plan outputs for review and approvals
- **OIDC Integration** with GitHub for workflows to authenticate with your cloud provider
To make things easier, we've provided a CloudFormation template that sets up everything for you.
## Deployment Steps
### Authenticate with AWS
- Sign in to your AWS account
- Ensure you have administrator access
- Choose your deployment region (we recommend `us-east-1`)
### Deploy Infrastructure
- Click the "Deploy to AWS" button below
- Review the CloudFormation template parameters
- Click "Create stack" to deploy
Your stack name must be unique because it is used as part of the S3 bucket names and DynamoDB table IDs, and S3 bucket names must be unique across all AWS accounts.
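To illustrate the naming, the template suffixes the stack name to form the bucket and table names. The suffixes below match the configuration examples later on this page, but treat the exact pattern as an assumption:

```shell
STACK_NAME="my-backend"

# S3 bucket names are global across all AWS accounts, so a stack name
# collision surfaces here as a bucket-creation failure.
STATE_BUCKET="${STACK_NAME}-tfstate"
PLAN_BUCKET="${STACK_NAME}-tfplan"

echo "$STATE_BUCKET $PLAN_BUCKET"
```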
[Deploy to AWS](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=my-terraform-backend&templateURL=https://s3.amazonaws.com/cplive-core-ue2-public-cloudformation/aws-cloudformation-terraform-backend.yaml)
Or deploy the template manually with the AWS CLI. Note that `aws cloudformation deploy` only accepts a local template file, so use `create-stack` when deploying from a template URL:
```bash
aws cloudformation create-stack \
--stack-name my-backend \
--template-url https://s3.amazonaws.com/cplive-core-ue2-public-cloudformation/aws-cloudformation-terraform-backend.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--parameters ParameterKey=GitHubOrg,ParameterValue=my-org
```
### Configure Atmos Pro
Once deployed, you will need to add the new role and plan file storage configuration to your Atmos configuration.
#### GitHub Integration Configuration
```yaml
integrations:
github:
gitops:
opentofu-version: "1.10.0"
artifact-storage:
region: "us-east-1" # Ensure this matches the region where the template was deployed
bucket: "my-backend-tfplan" # Get this value from the PlanBucketName output
table: "my-backend-tfplan" # Get this value from the PlanDynamoDBTableName output
role: "arn:aws:iam::123456789012:role/my-backend-github-actions" # Get this value from the GitHubActionsRoleARN output
role:
plan: "arn:aws:iam::123456789012:role/my-backend-github-actions" # Get this value from the GitHubActionsRoleARN output
apply: "arn:aws:iam::123456789012:role/my-backend-github-actions" # Get this value from the GitHubActionsRoleARN output
```
#### State Backend Configuration
Then use the state backend with Atmos by specifying the S3 bucket and DynamoDB table:
```yaml
terraform:
backend_type: s3
backend:
s3:
bucket: my-backend-tfstate # Get this value from the StateBucketName output
dynamodb_table: my-backend-tfstate # Get this value from the StateDynamoDBTableName output
role_arn: null # Set to null to use the current AWS credentials
encrypt: true
key: terraform.tfstate
acl: bucket-owner-full-control
region: us-east-1 # Ensure this matches the region where the template was deployed
remote_state_backend:
s3:
role_arn: null # Set to null to use the current AWS credentials
```
## CloudFormation Parameters
| Parameter | Description | Default |
| ----------------------- | ------------------------------------------------------------------------------------------------ | ------- |
| `CreateStateBackend` | Set to 'true' to create state backend resources (S3 bucket, DynamoDB table), 'false' to skip | true |
| `CreatePlanFileStorage` | Set to 'true' to create plan file storage resources (S3 bucket, DynamoDB table), 'false' to skip | true |
| `CreateGitHubAccess` | Set to 'true' to create GitHub access resources (OIDC provider, IAM role), 'false' to skip | true |
| `CreateOIDCProvider` | Set to 'true' to create the GitHub OIDC provider, 'false' to skip (if it already exists) | true |
| `GitHubOrg` | GitHub organization or username | |
| `GitHubRepo` | GitHub repository name. Set to `*` to allow all repositories | \* |
## Review
Congratulations! The CloudFormation stack has now deployed:
- An IAM role configured with trusted relationships for GitHub Actions
- An S3 bucket to store Terraform state files
- A DynamoDB table for state locking
- An S3 bucket to store Terraform plan files
- A DynamoDB table for managing those plan files
- GitHub OIDC provider for secure authentication
You're now ready to start using Atmos Pro with GitHub Actions.
## Cleanup
To destroy the template, run:
```bash
aws cloudformation delete-stack --stack-name my-backend
```
This will destroy the stack and all the resources it created. However, if the S3 bucket is not empty, the stack will fail to destroy.
To destroy the stack and empty the S3 bucket, run:
```bash
aws cloudformation delete-stack --stack-name my-backend --deletion-mode FORCE_DELETE_STACK
```
This will destroy the state files and empty the S3 bucket. This is a destructive action and cannot be undone.
---
## Migrate from GitHub Actions GitOps
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Note from '@site/src/components/Note';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
This guide helps you migrate from the legacy GitHub Actions GitOps workflows to Atmos Pro. Atmos Pro provides a simplified and more powerful alternative with built-in dependency ordering, drift detection, and policy controls.
## Overview
The legacy GitHub Actions GitOps approach required:
1. Custom GitHub Action workflows for plan, apply, drift detection, and remediation
1. Manual configuration of S3 buckets and DynamoDB tables for plan storage
1. Complex IAM role setup with `aws-teams` and `aws-team-roles` components
1. Custom workflow dispatch logic and matrix strategies
Atmos Pro replaces all of this with:
1. Three simple workflow files that Atmos Pro manages
1. Built-in plan storage and management
1. Simplified IAM setup using the `iam-role` component
1. Automatic dependency ordering and parallel execution
## Migration Steps
### Step 1: Set Up Atmos Pro
Follow the [Atmos Pro Setup Guide](/layers/atmos-pro/setup/) to:
1. Sign up for Atmos Pro and create a workspace
1. Install the Atmos Pro GitHub App
1. Grant repository permissions
1. Add the three Atmos Pro workflow files
### Step 2: Deploy New IAM Roles
Atmos Pro uses the `iam-role` component instead of `aws-teams` and `aws-team-roles` for GitHub Actions authentication. Deploy the new roles:
```bash
atmos workflow deploy/iam-role -f identity
```
This creates `terraform` and `planner` IAM roles in each account that GitHub Actions can assume via OIDC.
### Step 3: Update GitHub OIDC Provider
Ensure the `github-oidc-provider` component is deployed to all accounts where Terraform will run:
```bash
atmos workflow deploy/github-oidc-provider -f github
```
### Step 4: Test Atmos Pro Workflows
Before removing the legacy workflows:
1. Create a test pull request with a small change
1. Verify Atmos Pro comments on the PR with affected stacks
1. Verify plan workflows are dispatched and complete successfully
1. Merge the PR and verify apply workflows run correctly
### Step 5: Remove Legacy Workflows
Once Atmos Pro is working correctly, remove the legacy workflow files:
1. `.github/workflows/atmos-terraform-plan.yaml`
1. `.github/workflows/atmos-terraform-apply.yaml`
1. `.github/workflows/atmos-terraform-drift-detection.yaml`
1. `.github/workflows/atmos-terraform-drift-remediation.yaml`
1. `.github/workflows/atmos-terraform-dispatch.yaml`
1. `.github/workflows/atmos-terraform-plan-matrix.yaml`
1. `.github/workflows/atmos-terraform-apply-matrix.yaml`
### Step 6: Clean Up Legacy Infrastructure (Optional)
If you no longer need the legacy GitOps infrastructure, you can remove:
1. `gitops/s3-bucket` component — Plan file storage bucket
1. `gitops/dynamodb` component — Plan metadata table
1. `gitops` component — Legacy GitHub OIDC roles
Only remove these components after confirming Atmos Pro is working correctly and you no longer need access to historical plan files.
## Key Differences
| Feature | Legacy GitOps | Atmos Pro |
|---------|--------------|-----------|
| Workflow configuration | 7+ custom workflow files | 3 simple workflow files |
| Plan storage | Self-managed S3 + DynamoDB | Built-in |
| IAM roles | `aws-teams` + `aws-team-roles` | `iam-role` component |
| Dependency ordering | Manual or none | Automatic |
| Drift detection | Scheduled workflow | Built-in with dashboard |
| Policy controls | Custom implementation | Built-in |
## Troubleshooting
### Workflows Not Triggering
If Atmos Pro workflows aren't being dispatched:
1. Verify the Atmos Pro GitHub App is installed and has access to the repository
1. Check that repository permissions are configured in the Atmos Pro console
1. Ensure the workflow files exist in the default branch
### Authentication Errors
If workflows fail with authentication errors:
1. Verify `github-oidc-provider` is deployed to the target account
1. Check that `iam-role` components are deployed with correct trust policies
1. Ensure the workflow has `id-token: write` permission
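For reference, the relevant permissions block in a workflow file uses standard GitHub Actions syntax:

```yaml
permissions:
  id-token: write   # required to request an OIDC token for AWS authentication
  contents: read    # required to check out the repository
```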
## See Also
1. [Atmos Pro Setup](/layers/atmos-pro/setup/) — Complete setup guide
1. [Atmos Pro Overview](/layers/atmos-pro/) — Features and capabilities
1. [Legacy GitOps Documentation](/layers/gitops/) — Reference for existing deployments
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These tutorials will help you deploy and configure the required AWS infrastructure for Atmos Pro. Choose the deployment method that best fits your needs.
---
## Prepare Container Registry
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
Now that the GitHub OIDC Provider has been deployed, we can proceed with setting up the necessary prerequisites for containers. The first prerequisite is deploying Amazon Elastic Container Registry (ECR) repositories that will be used to store container images built by GitHub Actions workflows.
| Steps | Actions |
| -------------------------- | ----------------------------------- |
| Deploy ECR repositories | `atmos workflow deploy/ecr -f quickstart/foundation/accounts` |
## Deploy ECR Repositories
Deploy the ECR repositories that will be used by GitHub Actions workflows:
We use ECR for two main purposes:
1. Storing the Geodesic base image that provides the development environment and tooling
2. Storing container images built during CI steps of application release workflows
---
## Provision Databases
import Intro from '@site/src/components/Intro';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
There are many options for provisioning databases in AWS. The reference architecture provides support for a variety of databases, including Aurora PostgreSQL, Aurora MySQL, DynamoDB, RDS, ElastiCache, DocumentDB, and more.
Each database has its own unique features and benefits, so you can choose the one that best fits your needs or deploy multiple databases to support different use cases.
### SQL Database Options
The reference architecture supports several SQL database options, including Aurora PostgreSQL, Aurora MySQL, RDS, and more.
Aurora PostgreSQL is a fully managed, PostgreSQL-compatible relational database service that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora PostgreSQL is a good choice for applications that require the features and capabilities of PostgreSQL with the scalability, performance, and reliability of a managed service.
Get Started
Aurora MySQL is a fully managed, MySQL-compatible relational database service that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora MySQL is a good choice for applications that require the features and capabilities of MySQL with the scalability, performance, and reliability of a managed service.
Get Started
Amazon RDS is a managed relational database service that supports multiple database engines, including MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server. RDS is a good choice for applications that require the features and capabilities of a relational database with the scalability, performance, and reliability of a managed service.
Get Started
Amazon Redshift is a fully managed, petabyte-scale data warehouse service that provides fast query performance using SQL and business intelligence tools. Redshift is a good choice for applications that require high-performance analytics and data warehousing with built-in security, backup, and restore capabilities.
Get Started
### NoSQL Database Options
The reference architecture also supports several NoSQL database options, including DynamoDB, DocumentDB, ElastiCache Redis, and more. SQL and NoSQL databases can always be deployed side by side to support different use cases.
DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB is a good choice for applications that require low-latency, high-throughput access to data with built-in security, backup, and restore capabilities.
Get Started
Amazon DocumentDB is a fully managed, MongoDB-compatible document database service that provides fast and scalable performance with built-in security, backup, and restore capabilities. DocumentDB is a good choice for applications that require the features and capabilities of MongoDB with the scalability, performance, and reliability of a managed service.
Get Started
Amazon ElastiCache is a fully managed in-memory data store service that supports Redis and Memcached. ElastiCache Redis is a good choice for applications that require low-latency, high-throughput access to data with built-in security, backup, and restore capabilities.
Get Started
---
## Decide on Amazon Managed Workflows for Apache Airflow (MWAA) Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Requirements for MWAA environments deployed to each active compute environment must be outlined before an MWAA environment is configured and deployed. Customers will likely require integrations with other systems like Redshift or S3. There's no generic way of handling all these cases, so each will need to be handled separately. Each case may require additional resources, such as IAM roles, to be provisioned; we cannot anticipate these and will rely entirely on information supplied by the customer.
## Context
Amazon MWAA environments will be used by applications that use Apache Airflow.
## Considered Options
Create a standardized MWAA Environment based on requirements.
### Integrations
- What integrations are required with other systems?
- e.g. S3 will require IAM roles be provisioned
- e.g. RDS will require database users, grants and security groups be opened up
- Have those other systems already been deployed?
- Should we provide an example?
- How will DAGs be managed in S3?
#### Standardized Managed Workflows for Apache Airflow (MWAA) Configuration Settings
- Number of workers
- Min number of workers
- Max number of workers
- Webserver access mode
- Can be one of: `PUBLIC_ONLY`, `PRIVATE_ONLY`. Defaults to `PRIVATE_ONLY`.
- If it’s private, how will you intend to access it? e.g. we’ll need something like [Decide on Client VPN Options](/layers/network/design-decisions/decide-on-client-vpn-options)
- Environment class
- Can be one of: `mw1.small`, `mw1.medium`, `mw1.large`
- Airflow version
- Supported versions outlined here: [https://docs.aws.amazon.com/mwaa/latest/userguide/airflow-versions.html](https://docs.aws.amazon.com/mwaa/latest/userguide/airflow-versions.html)
- If not specified, the latest available version will be used. The latest available version of Apache Airflow will be used unless a previous minor version must be used to provide compatibility with an application environment. This provides the latest bug fixes and security patches for Apache Airflow, which is especially important if the webserver access mode is set to `PUBLIC_ONLY`. If an older minor version must be used to provide compatibility with an application environment, then the latest available patch version should be used to include all possible bug fixes and security patches.
- Use custom `plugins.zip` file?
- If so, where are those plugins stored?
- What generates the plugins artifact? CI/CD for this artifact could be out of scope.
- Use custom `requirements.txt` file?
- If so, we’ll need the customer to provide this file.
- DAG processing logs
- From least to most verbose: disabled, `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`. Defaults to `INFO`.
- Scheduler logs
- From least to most verbose: disabled, `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`. Defaults to `INFO`.
- Task logs
- From least to most verbose: disabled, `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`. Defaults to `INFO`.
- Webserver logs
- From least to most verbose: disabled, `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`. Defaults to `INFO`.
- Worker logs
- From least to most verbose: disabled, `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`. Defaults to `INFO`.
## References
- [Amazon Managed Workflows for Apache Airflow (MWAA): Create an Environment](https://docs.aws.amazon.com/mwaa/latest/userguide/create-environment.html)
- [Amazon Managed Workflows for Apache Airflow (MWAA): Supported Versions](https://docs.aws.amazon.com/mwaa/latest/userguide/airflow-versions.html)
---
## Decide on Amazon OpenSearch Service (Elasticsearch) Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Requirements for the Amazon OpenSearch Service ([formerly known as Elasticsearch](https://aws.amazon.com/blogs/aws/amazon-elasticsearch-service-is-now-amazon-opensearch-service-and-supports-opensearch-10/)) clusters deployed to each active compute environment need to be outlined before an Elasticsearch component is configured and deployed.
## Context
At a minimum, we need the following for each operating stage (prod, staging, dev, etc.):
- Instance family for each node
- Number of nodes
- EBS volume size
- Whether or not Kibana is required
See [https://docs.aws.amazon.com/opensearch-service/latest/developerguide/sizing-domains.html](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/sizing-domains.html) for Amazon’s recommendations.
## Considered Options
Create a standardized Elasticsearch (Amazon OpenSearch Service) cluster based on one of these options. We’ll also need to know how these requirements will vary by stage.
:::caution
This is a reversible decision; however, resizing large OpenSearch clusters can take several days.
:::
### Option 1: Use Current Infrastructure Requirements
:::info
If you are already running OpenSearch, we recommend sharing a screenshot of your current setup from the AWS web console for each cluster in every environment.
:::
### Option 2: Use Minimal Elasticsearch (Amazon OpenSearch) Cluster Requirements
Because the [Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/sizing-domains.html) recommends having at least 3 nodes in each Elasticsearch cluster in order to avoid a split-brain scenario, each cluster should contain 3 nodes (if it were to be minimally sized).
This, in addition to the requirements outlined in _v1 Infrastructure Requirements_, means that each Elasticsearch cluster will have the following requirements:
| **Requirement** | **Recommendation** |
| ----------------------------- | ------------------ |
| EBS volume size | Limited by the size of the instance; see the [service limits](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/limits.html) |
| Number of nodes | 3 |
| Instance family for each node | Depends on use-case |
| Kibana | Not required. If Kibana is required, we'll need to discuss how to access it securely; we recommend [SAML authentication](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/saml.html) |
## Consequences
Provision Amazon OpenSearch Service based on these requirements using the `elasticsearch` component with terraform.
- This allows for a standardized Elasticsearch cluster that satisfies the requirements of the application stack in each active compute environment.
- This standard size can be easily adjusted as needed, so this is an easily reversible decision.
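A sketch of what this standardized cluster might look like in the `elasticsearch` component's stack configuration; values are illustrative, and the variable names are assumptions that may differ from the actual component inputs:

```yaml
components:
  terraform:
    elasticsearch:
      vars:
        instance_type: m6g.large.search   # depends on use-case
        instance_count: 3                 # minimum recommended to avoid split-brain
        ebs_volume_size: 100              # GiB; limited by the instance type
        kibana_enabled: false             # if enabled, secure access (e.g. SAML)
```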
## References
- [https://docs.aws.amazon.com/opensearch-service/latest/developerguide/sizing-domains.html](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/sizing-domains.html)
- [https://docs.aws.amazon.com/opensearch-service/latest/developerguide/limits.html](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/limits.html)
- [https://docs.aws.amazon.com/opensearch-service/latest/developerguide/saml.html](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/saml.html)
- [https://aws.amazon.com/blogs/aws/amazon-elasticsearch-service-is-now-amazon-opensearch-service-and-supports-opensearch-10/](https://aws.amazon.com/blogs/aws/amazon-elasticsearch-service-is-now-amazon-opensearch-service-and-supports-opensearch-10/)
---
## Decide on Automated Backup Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
Describe why we are making this decision or what problem we are solving.
## Considered Options
### Option 1 (Recommended)
:::tip
Our Recommendation is to use Option 1 because....
:::
#### Pros
-
#### Cons
-
### Option 2
#### Pros
-
#### Cons
-
### Option 3
#### Pros
-
#### Cons
-
## References
- Links to any research, ADRs or related Jiras
---
## Decide on AWS Backup Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
## Context
We need a standardized way to implement backup services for AWS resources (S3, databases, EC2 instances, EFS, etc.) to have the ability to restore data from points in time in the event of data loss or corruption.
AWS provides a managed backup service offering called AWS Backup. [https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html)
We need to determine if we are opting in or opting out using AWS Backup.
## References
- [https://www.druva.com/blog/understanding-rpo-and-rto/](https://www.druva.com/blog/understanding-rpo-and-rto/)
---
## Decide on AWS EMR Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
We need to document the requirements for the EMR cluster.
## Context
If EMR is presently deployed, the best course of action is to replicate the settings you have (share these details if that’s the case).
## Considered Options
A list of applications for the cluster. Currently supported options are: Flink, Ganglia, Hadoop, HBase, HCatalog, Hive, Hue, JupyterHub, Livy, Mahout, MXNet, Oozie, Phoenix, Pig, Presto, Spark, Sqoop, TensorFlow, Tez, Zeppelin, and ZooKeeper (as of EMR 5.25.0).
For a full list of supported options, review the EMR module.
## References
- [https://github.com/cloudposse/terraform-aws-emr-cluster#inputs](https://github.com/cloudposse/terraform-aws-emr-cluster#inputs)
---
## Decide on AWS Managed RabbitMQ Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
Describe why we are making this decision or what problem we are solving.
## Considered Options
### Option 1 (Recommended)
:::tip
Our Recommendation is to use Option 1 because....
:::
#### Pros
-
#### Cons
-
### Option 2
#### Pros
-
#### Cons
-
### Option 3
#### Pros
-
#### Cons
-
## References
- Links to any research, ADRs or related Jiras
---
## Decide on Database Schema Migration Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
### Problem and Context
We must decide how and when to run database migrations. The strategy will depend on several factors, including whether or not any automated tools are already in place to handle these migrations and the platform (e.g. ECS or EKS).
## Considerations
- During rolling application deployments, there will be a period when 2 versions of your application are live.
- Migrations that happen before the rolling update mean that the previous version of the application will be forced to use the new schema
- Migrations that happen after the rolling update mean that the next version of the application will be forced to use the previous version of the schema
- Adjacent releases of the application must be backward compatible with schemas.
- **Never delete columns (or rows), only add columns**
### Questions
- Should migrations happen before or after the application rollout?
- What should happen during application rollbacks?
- Does the schema get rolled back or stay ahead?
- What software are you using to handle migrations?
- What happens if pods or nodes are scaling during a database migration? (e.g. old versions of pods can come up)
## Options
**Option 1:** Implement migrations as part of entry point initialization of docker container
- If you have 25 containers running, each one will attempt to obtain a lock (if you’re lucky) and perform the migration. Many customers don’t like that each container attempts this and prefer it to happen before or after rollout.
- At any given point, different versions of the app will be using different versions of the schema (e.g. rolling updates)
- Long migrations will cause health checks to fail and the pods will get restarted aborting the migrations. Extending the timeouts for health checks means slowing down recovery for legitimate failures.
**Option 2:** Implement migrations as part of the CD workflow
- This is nice because we can control when it happens in the release process
- It's complicated when doing asynchronous deployments with Argo CD or Spacelift. Since deployments happen outside of the GitHub Actions workflow, we don't know when steps are completed.
**Option 3:** Implement Manually triggered Workflows (E.g. via GitHub Action workflow dispatches)
- You have full control over when it runs, but it’s not automated in relation to your workflows. The actual execution is automated.
**Option 4:** Implement Kubernetes Job or ECS Task
- Easy to implement
- Works well, when it works. When it fails, it’s hard to regulate what happens with the services.
- Kubernetes will keep re-attempting the migration if the job exits non-zero. If we squash the exit code, then we don't realize it's failing unless there's other monitoring in place.
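A minimal sketch of Option 4 as a Kubernetes Job; the image and migration command are placeholders. Note how `backoffLimit` bounds the re-attempts described above, so a failing migration surfaces instead of retrying forever:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 3            # bound retries so failures surface quickly
  template:
    spec:
      restartPolicy: Never   # let the Job controller, not the kubelet, handle retries
      containers:
        - name: migrate
          image: registry.example.com/app:latest   # placeholder image
          command: ["./migrate.sh"]                # placeholder migration entry point
```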
---
## Decide on DocumentDB Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
Requirements for DocumentDB clusters deployed to each active compute environment need to be outlined before a DocumentDB
component is configured and deployed:
- Instance family for DB instances
- Database Engine
- Whether or not to create DB replica instances
- Instance family for DB replica instances
- Backup retention period
## Considered Options
Create a standardized DocumentDB cluster based on current use case:
### v1 Infrastructure Requirements
In production, a single `[CHANGE ME]` instance is used. Read-replicas are not enabled.
The primary database engine in DocumentDB clusters is [CHANGE ME]
The backup retention period for DocumentDB clusters is one day for non-production environments (the minimum retention
period), and 35 days for production environments (the maximum retention period). This allows for a point-in-time-restore
backup that can be rolled back for the duration of the retention period.
### Standardized DocumentDB Cluster Requirements
The [Amazon DocumentDB Service](https://docs.aws.amazon.com/documentdb/latest/developerguide/replication.html) recommends deploying at least one additional DocumentDB instance in a different availability zone to ensure High Availability. This instance is automatically designated a read replica by DocumentDB. In a disaster scenario where the primary instance becomes unavailable, DocumentDB automatically designates one of the other instances as the primary instance.
The primary AWS region [uses three availability zones](/layers/network/design-decisions/decide-on-primary-aws-region), therefore it is recommended that
DocumentDB is deployed across three availability zones when possible.
The [Amazon DocumentDB Service](https://docs.aws.amazon.com/documentdb/latest/developerguide/replication.html) recommends that read replicas are of the same instance family as the primary instance:
> For consistency, these replica instances should be the same instance family, and should be left to be designated as
> replica instances by the DocumentDB service rather than manually designated, in order to simplify management of the
> infrastructure.
This, in addition to the requirements outlined in _v1 Infrastructure Requirements_, means that each DocumentDB cluster will have the following requirements:
- Instance family for DB instances: `[CHANGE ME]` in non-production environments, `[CHANGE ME]` in production environments
- Database Engine: [CHANGE ME]
- Whether or not to create DB replica instances: yes, ideally create 3 (one in each of the 3 Availability Zones)
- Instance family for DB replica instances: `[CHANGE ME]` in non-production environments, `[CHANGE ME]` in production
environments
- Backup retention period: 1 day in non-production environments (the minimum retention period), 35 days in production
environments (the maximum retention period)
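Expressed as data, the standardized requirements above might look like the following sketch. The instance classes stay as placeholders until the `[CHANGE ME]` values are decided; the validation encodes DocumentDB’s 1 to 35 day retention bounds and the one-instance-per-AZ recommendation:

```python
# Hypothetical sketch of the standardized requirements; instance classes
# are placeholders until the [CHANGE ME] values are decided.
DOCDB_REQUIREMENTS = {
    "nonprod": {"instance_class": "CHANGE_ME", "replica_count": 3, "backup_retention_days": 1},
    "prod":    {"instance_class": "CHANGE_ME", "replica_count": 3, "backup_retention_days": 35},
}

def validate(req: dict) -> None:
    # DocumentDB accepts a backup retention period of 1 to 35 days.
    assert 1 <= req["backup_retention_days"] <= 35, "retention must be 1-35 days"
    # At most one instance per availability zone in a three-AZ region.
    assert 1 <= req["replica_count"] <= 3, "replica count must be 1-3"

for req in DOCDB_REQUIREMENTS.values():
    validate(req)
```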
## Decision Outcome
Chosen option: "Create a standardized DocumentDB cluster based on current use case", because
- This allows for a standardized DocumentDB cluster that satisfies the requirements of the application stack in each active compute environment.
## Consequences
Create a DocumentDB component and tune it to the requirements outlined above.
### References
- [https://docs.aws.amazon.com/documentdb/latest/developerguide/replication.html](https://docs.aws.amazon.com/documentdb/latest/developerguide/replication.html)
- [Primary AWS Region](/layers/network/design-decisions/decide-on-primary-aws-region)
---
## Decide on DynamoDB Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem **DRAFT**
Requirements for DynamoDB tables deployed to each active compute environment need to be outlined before a DynamoDB component is configured and deployed.
## Context
We need to at a minimum define the following requirements:
- Read/Write capacity Mode (and related settings)
- Integrated backup settings:
- Point-in-time-recovery (PITR) or on-demand
- AWS Backup settings:
- Backup frequency
- Backup lifecycle
- TTL (whether or not to enable)
## Considered Options
Create a standardized DynamoDB Table based on the current use case:
### v1 Infrastructure Requirements
Currently, DynamoDB tables are used within [CHANGE ME].
DynamoDB tables use `PAY_PER_REQUEST` billing instead of `PROVISIONED` billing because:
- The DynamoDB table is part of a data pipeline where table IO operations are not entirely predictable, so realized throughput may be significantly below or above the provisioned throughput throughout the day.
- The DynamoDB table IO operations are also not entirely consistent, so a provisioned-capacity table whose realized throughput closely meets its provisioned throughput over a period of one day may not do so the next.
- If the provisioned capacity tables surpass their provisioned throughput, throttling will occur, unless auto-scaling is
implemented for the DynamoDB table. This involves more machinery and is only warranted for DynamoDB tables whose
realized throughput is very close to their provisioned throughput, and which need to be able to handle unpredictable
spikes from time-to-time. This is not cost-effective for a table whose realized throughput does not meet its
provisioned throughput consistently in the first place.
- Due to the reasons described above, it is more cost-effective to [CHANGE ME: PAY_PER_REQUEST OR PROVISIONED]
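To make the cost argument concrete, here is a break-even sketch; the prices below are illustrative assumptions, not current AWS rates. When the average utilization of provisioned capacity would fall below roughly this threshold, `PAY_PER_REQUEST` is the cheaper mode:

```python
# Illustrative only -- the prices below are assumptions, not current AWS pricing.
ON_DEMAND_PER_MILLION_WRITES = 1.25   # USD per 1M on-demand write requests
PROVISIONED_PER_WCU_HOUR = 0.00065    # USD per provisioned WCU-hour

def breakeven_utilization() -> float:
    """Average utilization above which provisioned capacity beats on-demand."""
    # One fully-utilized WCU performs 3600 writes per hour.
    provisioned_per_million = PROVISIONED_PER_WCU_HOUR / 3600 * 1_000_000
    return provisioned_per_million / ON_DEMAND_PER_MILLION_WRITES

print(f"{breakeven_utilization():.1%}")  # 14.4%
```

Under these assumed prices, a table would need to sustain roughly 15% average utilization of its provisioned throughput before `PROVISIONED` billing pays off, which is unlikely for the spiky pipeline workload described above.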
Backups (both integrated and via AWS Backup) are configured for DynamoDB as follows:
- Integrated Backup type: in production, the DynamoDB tables have integrated point-in-time-recovery (PITR) backups which
allow for a to-the-second recovery. The retention period is the maximum PITR retention period, which is 35 days. For
non-production environments, PITR is disabled. Integrated on-demand backups are not used, because AWS Backup performs
the same function. Enabling integrated PITR backups alongside AWS Backup allows for the "best of both worlds" for
DynamoDB backups — that is, the ability to restore to the second for the past 35 days, and to have periodic snapshots
for a long period of time.
- AWS Backup is disabled for non-production environments.
- AWS Backup frequency: the DynamoDB tables are backed up once a month. For simplicity, this is the first day of the
month.
- AWS Backup lifecycle: the DynamoDB table backups are transitioned to cold storage after some time and are eventually deleted. The backup is moved to cold storage after [CHANGE ME] days and is deleted after [CHANGE ME] days. This leaves a short period in the month to restore the table from warm storage, and exactly 3 months in cold storage to recover the table.
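Whatever values replace the `[CHANGE ME]` placeholders, AWS Backup imposes one constraint worth encoding: a recovery point must stay in cold storage for at least 90 days, so `DeleteAfterDays` must be at least 90 days after `MoveToColdStorageAfterDays`:

```python
def validate_lifecycle(move_to_cold_after_days: int, delete_after_days: int) -> None:
    """AWS Backup requires a minimum of 90 days in cold storage, so the
    delete date must be at least 90 days after the cold-storage transition."""
    if delete_after_days < move_to_cold_after_days + 90:
        raise ValueError("DeleteAfterDays must be >= MoveToColdStorageAfterDays + 90")

validate_lifecycle(move_to_cold_after_days=8, delete_after_days=98)  # ok
```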
[TALK ABOUT WHETHER TTL IS GOING TO BE USED]
### Standardized DynamoDB Table Requirements
The requirements outlined by _v1 Infrastructure Requirements_ are sufficiently comprehensive to be standardized in the
v2 infrastructure:
- Read/Write capacity Mode: `[CHANGE ME]`
- PointInTimeRecoveryEnabled: `true` for production, `false` for non-production environments
- AWS Backup settings (disabled for non-production environments):
- BackupCronExpression: `[CHANGE ME]`
- DeleteAfterDays: `[CHANGE ME]`
- MoveToColdStorageAfterDays: `[CHANGE ME]`
- TimeToLiveSpecification:
  - Enabled: `false`
## Consequences
Create a DynamoDB component and tune it to the outlined requirements.
## References
- [https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html)
---
## Decide on Elasticache Redis Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
We need to define the use-cases and outline soft requirements for Elasticache Redis.
## Context
Amazon ElastiCache for Redis is Amazon’s fully managed version of the open source [Redis](https://redis.io/) in-memory data store, which provides sub-millisecond latency and powers some of the largest websites out there. Any applications you have that depend on Redis can work seamlessly with ElastiCache Redis without any code changes.
See [https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/nodes-select-size.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/nodes-select-size.html) for Amazon’s recommendations on right-sizing clusters.
## Considered Options
:::info
Ideally, share a screenshot of any existing Elasticache redis requirements and we can provision accordingly.
:::
| **Requirement** | **Recommendation** | **Description** |
| --------------------------- | ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Cache Engine | Redis | |
| Instance family | | |
| Encryption in Transit       | No                 | By default, we don’t require this. Native TLS was not supported prior to open source Redis version 6.0, so not every Redis client library supports TLS. See [https://redis.io/topics/encryption](https://redis.io/topics/encryption) |
| Encryption at Rest | | |
| Security Group Restrictions | 10.0.0.0/0 | |
| Automated failover | yes | |
| Auto minor upgrade | N/A | Auto Minor Upgrade is only supported for engine type `"redis"` and if the engine version is 6 or higher.(See: [https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/VersionManagement.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/VersionManagement.html) ) |
| Multi-az enabled | | deployed across 2 AZs (private subnets) |
| Number of nodes | | |
| Cluster Mode Enabled | | |
| AWS Backup requirements | | [https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html) |
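For the instance family and number-of-nodes rows above, a rough sizing sketch may help; the 25% overhead figure is an assumption in the spirit of AWS’s guidance to reserve memory headroom for background operations:

```python
def required_node_memory_gib(dataset_gib: float, shards: int, overhead: float = 0.25) -> float:
    """Rule-of-thumb node memory: each shard's share of the dataset plus
    reserved headroom for background operations (25% is an assumption)."""
    return dataset_gib / shards * (1 + overhead)

print(required_node_memory_gib(dataset_gib=24, shards=3))  # 10.0
```

In this sketch, a 24 GiB dataset split across 3 shards would call for nodes with at least 10 GiB of memory each; the linked right-sizing guide covers the full methodology.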
Additional options
- [aws_elasticache_cluster](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster)
- [aws_elasticache_replication_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_replication_group)
## Consequences
- We’ll provision Elasticache Redis using our `elasticache-redis` component.
- Define the catalog entries for the various Redis configurations
- Enable AWS backups, as necessary
## References
- [https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html)
- [https://aws.amazon.com/blogs/database/five-workload-characteristics-to-consider-when-right-sizing-amazon-elasticache-redis-clusters/](https://aws.amazon.com/blogs/database/five-workload-characteristics-to-consider-when-right-sizing-amazon-elasticache-redis-clusters/)
- [https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/nodes-select-size.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/nodes-select-size.html)
---
## Decide on MSK Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Requirements for MSK clusters deployed to each active compute environment need to be outlined before an MSK component is configured and deployed.
## Context
Amazon MSK clusters are going to be used by applications that use Apache Kafka streams.
## Considered Options
### Create a standardized MSK cluster based on requirements.
- Number of AZs
- Apache Kafka version
- Number of broker nodes
- Broker instance type
- Enhanced monitoring enabled?
- Node and JMX Prometheus exporters enabled?
- Broker volume size
- S3 broker logging enabled?
- CloudWatch Logs broker logging enabled?
- CloudWatch Logs retention period (if CloudWatch Logs broker logging is enabled)
- Kinesis Firehose broker logging enabled?
- Authentication
- Defaults to Mutual TLS authentication disabled
- Private CA ARN can be used for mutual TLS authentication if that’s required. This can be created per account or in a single account and shared across accounts with AWS RAM.
- Defaults to IAM (Client SASL IAM) authentication disabled.
- Encryption at rest defaults to using the Amazon-managed `aws/msk` KMS key.
- MSK properties
  - Auto-create topics?
    - `auto.create.topics.enable` (this is explicitly set to `false` by default)
    - If auto-creating topics is not required but topic creation is, there is a separate component where topics can be explicitly created.
  - Allow deleting topics?
    - `delete.topic.enable` (since Kafka 1.0.0 this has defaulted to `true`, but it is not in the MSK default configuration)
[Amazon provides an Excel spreadsheet](https://amazonmsk.s3.amazonaws.com/MSK_Sizing_Pricing.xlsx) to help make these calculations.
[msk_sizing_pricing.xlsx](/assets/refarch/msk_sizing_pricing.xlsx)
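The spreadsheet covers the full calculation; for a quick sanity check, per-broker storage can be estimated as retained write volume times the replication factor, spread across the brokers, plus headroom (the 30% headroom figure here is an assumption):

```python
def per_broker_storage_gib(write_mib_per_sec: float, retention_hours: float,
                           replication_factor: int, brokers: int,
                           headroom: float = 0.3) -> float:
    """Back-of-the-envelope MSK broker disk sizing: bytes retained across
    the retention window, multiplied by replication, divided across
    brokers, plus free-space headroom."""
    total_gib = write_mib_per_sec * 3600 * retention_hours / 1024 * replication_factor
    return total_gib / brokers * (1 + headroom)

# e.g. 10 MiB/s of writes, 24h retention, replication factor 3, 3 brokers
print(round(per_broker_storage_gib(10, 24, 3, 3), 1))  # 1096.9
```

This intentionally ignores compression and partition skew, which the Amazon spreadsheet accounts for.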
#### Standardized Amazon MSK Cluster Requirements
The [Amazon-recommended Apache Kafka version](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html#2.6.2) (`2.6.2`) will be used in favor of `2.6.0`, because the minor semantic version difference is not expected to cause any compatibility issues, and contains bug fixes and remediations to CVEs (see: [Apache Kafka 2.6.2 release notes](https://downloads.apache.org/kafka/2.6.2/RELEASE_NOTES.html)).
As a best practice, CloudWatch Logs broker logging should be enabled in order to have the ability to debug Apache Kafka issues when they arise. (Amazon MSK will log at the `info` level.) The retention period for these logs should be long enough for debugging in non-production environments (e.g. 60 days), and even longer in production environments in order to be able to debug issues that may be impacting or have impacted users in the past (e.g. 365 days).
[LIST ATTRIBUTES FROM _Create a standardized MSK cluster based on requirements_ AND FILL THEM IN]
## References
- [Amazon MSK: Supported Apache Kafka versions](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html)
- [Amazon MSK: Logging](https://docs.aws.amazon.com/msk/latest/developerguide/msk-logging.html)
- [Amazon MSK now supports the ability to change the size or family of your Apache Kafka brokers](https://aws.amazon.com/about-aws/whats-new/2021/01/amazon-msk-now-supports-the-ability-to-change-the-size-or-family/)
- [https://docs.aws.amazon.com/msk/latest/developerguide/msk-encryption.html](https://docs.aws.amazon.com/msk/latest/developerguide/msk-encryption.html)
- [https://docs.aws.amazon.com/msk/latest/developerguide/msk-default-configuration.html](https://docs.aws.amazon.com/msk/latest/developerguide/msk-default-configuration.html)
---
## Decide on RDS Aurora DB Cluster Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Requirements for Amazon Aurora DB clusters deployed to each active compute environment need to be outlined before an
Amazon Aurora component is configured and deployed
## Context
Amazon RDS provides MySQL and PostgreSQL-compatible relational databases that are built for the cloud with greater performance and availability at 1/10th the cost of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. RDS Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones.
Amazon Aurora DB clusters (See: [Decide on RDS Technology and Architecture](/layers/data/design-decisions/decide-on-rds-technology-and-architecture))
### Known Limitations
- [Max of 15 Read Replicas](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html#:~:text=An%20Aurora%20DB%20cluster%20can%20contain%20up%20to%2015%20Aurora%20Replicas.%20The%20Aurora%20Replicas%20can%20be%20distributed%20across%20the%20Availability%20Zones%20that%20a%20DB%20cluster%20spans%20within%20an%20AWS%20Region.) (we had a customer decline RDS Aurora based on this limitation)
- ~~Point-in-time recovery (PITR) is not yet supported~~ RDS Aurora now supports PITR. [https://aws.amazon.com/blogs/storage/point-in-time-recovery-and-continuous-backup-for-amazon-rds-with-aws-backup/](https://aws.amazon.com/blogs/storage/point-in-time-recovery-and-continuous-backup-for-amazon-rds-with-aws-backup/)
- Cannot be launched on public subnets
## Considered Options
Create a standardized Aurora DB cluster based on the current use case:
### Current Infrastructure Requirements
### RDS Aurora Replication
RDS Aurora replication happens at the storage layer rather than the conventional database layer; all instances in the cluster share the same storage volume. Hitting the read replicas hard can therefore still impact the primary, since they use the shared storage.
> Because the cluster volume is shared among all DB instances in your DB cluster, minimal additional work is required to replicate a copy of the data for each Aurora Replica.
> [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html)
### RDS Serverless v1 vs v2
Using Serverless could be more costly than using regular RDS Aurora due to not having enough options for CPU.
Serverless v1 offers more granular scaling units, but it operates only in a single availability zone and supports only up to v10 of Postgres (v10 will be sunset by Postgres on November 10, 2022).
Serverless v2 offers Multi-AZ, so DB subnets can span multiple availability zones, and supports Postgres 12+.
[https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html)
### Future Aurora DB Cluster Requirements
Because the [RDS Service documentation on Aurora DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html) recommends deploying at least one additional Aurora DB cluster instance to an Availability Zone other than where the primary instance is located in order to ensure High Availability, and since [3 AZs are used by the primary AWS region](/layers/network/design-decisions/decide-on-primary-aws-region), it is recommended that 3 instances are deployed per Aurora DB cluster when High Availability is needed.
- Explain how many instances exist in the cluster (or per region, if this is a global cluster)
- Explain whether the cluster is global or regional, and reference [Decide on RDS Technology and Architecture](/layers/data/design-decisions/decide-on-rds-technology-and-architecture)
- Explain how many secondary DB clusters should exist, if this is a global cluster
Lastly, database storage encryption, deletion protection and cloudwatch logs exports should be enabled as a best practice.
This, in addition to any of the requirements outlined in _v1 Infrastructure Requirements_, should be captured in the following table.
| **Setting** | **Value** |
| ----------- | --------- |
| Aurora DB cluster Engine | |
| Aurora DB cluster Instance Family | |
| Number of Aurora DB cluster Instances | 1 for all environments except prod; 3 for prod (or 2 when < 3 AZs are available) |
| Regional or Global DB Cluster | |
| Security-related settings | |
| Storage Encryption enabled | yes |
## Other Considerations
- Cost [https://aws.amazon.com/rds/aurora/pricing/](https://aws.amazon.com/rds/aurora/pricing/)
## Consequences
Create an Aurora DB Cluster component and tune it to the outlined requirements.
## References
- [Decide on RDS Technology and Architecture](/layers/data/design-decisions/decide-on-rds-technology-and-architecture)
- [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html)
- [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html)
---
## Decide on RDS Technology and Architecture
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
RDS offers a variety of features and deployment options. A specific RDS deployment option needs to be adopted for the application database solution. Options are not mutually exclusive and multiple types of databases can be deployed depending on your requirements.
## Context
Amazon’s RDS Relational Database Service (Amazon RDS) is a fully managed Database-as-a-Service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.
## Considered Options
There are several ways in which an RDS Cluster can be deployed.
### **Option 1:** Amazon RDS Instances
Amazon RDS Instances are the original version of RDS and provide simple master-slave replication with multiple read replicas and multi-AZ fail-over capabilities. RDS Instances are best suited for one-off databases (e.g. for microservices or dev environments) where performance is likely not an issue and the ability to do point-in-time restores for a database is required. Point-in-time recovery allows you to create an additional RDS instance (i.e. it does not replace your running instance) based on the data as it existed at any specific point in time, by restoring and replaying the journal to that point. This feature is not supported yet by RDS Aurora.
[https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html)
:::info
AWS offers push-button migration to convert existing Amazon RDS MySQL and PostgreSQL RDS instances to RDS Aurora.
:::
### **Option 2:** Amazon Aurora DB Cluster (recommended for most use cases)
An RDS Aurora Cluster can be deployed into each VPC. The Aurora DB Cluster must use a DB Subnet Group that spans at least two availability zones.
For more information, see: [Creating a DB Cluster (Amazon Aurora)](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.CreateInstance.html).
[https://aws.amazon.com/rds/aurora/faqs/](https://aws.amazon.com/rds/aurora/faqs/)
:::caution
RDS Aurora does not support Point-in-time Recovery (PITR) like with RDS instances.
[https://aws.amazon.com/blogs/storage/point-in-time-recovery-and-continuous-backup-for-amazon-rds-with-aws-backup/](https://aws.amazon.com/blogs/storage/point-in-time-recovery-and-continuous-backup-for-amazon-rds-with-aws-backup/)
:::
### **Option 3:** Amazon Aurora Global Database
Amazon RDS Aurora can be deployed as a global database, with an Aurora DB cluster existing in a designated primary AWS region, and up to 5 additional Aurora DB clusters in designated secondary AWS Regions.
The Aurora DB clusters in the secondary regions are Aurora replicas, but [write forwarding](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-write-forwarding.html) can be enabled in order to forward write operations to the primary region DB cluster.
For more information, see: [Getting Started with Amazon Aurora Global Databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-getting-started.html).
:::caution
**Major version upgrades must be performed manually (E.g. not with terraform)**
:::
> Major version upgrades can contain database changes that are not backward-compatible with previous versions of the database. This functionality can cause your existing applications to stop working correctly. As a result, Amazon Aurora doesn't apply major version upgrades automatically.
> [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion)

A transit gateway connection will be required between all regions and accounts participating in the Global Database.
### **Option 4:** Amazon Aurora Serverless
Amazon Aurora is also offered as Aurora Serverless. This is an on-demand autoscaling configuration that scales automatically horizontally based on usage, and shuts down when it is not in use.
The downside to Amazon Aurora Serverless is that there is a warm-up cost that can cause connections to hang for up to 30 seconds. This is potentially mitigated using the [AWS RDS proxy service](https://aws.amazon.com/rds/proxy/).
Amazon Aurora Serverless has two releases: v1 and v2. v2 is currently a preview release. [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-2.limitations.html](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-2.limitations.html)
For more information, see: [Using Amazon Aurora Serverless v1](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html).
Known issue: [https://github.com/cloudposse/terraform-aws-rds-cluster/issues/81](https://github.com/cloudposse/terraform-aws-rds-cluster/issues/81)
:::danger
Don't use Aurora Serverless v2 (preview) for production databases. All resources and data will be deleted when the preview ends.
:::
## Other Considerations
### RDS Engine: MySQL or Postgres
This decision is determined based on the stack(s) of the applications being onboarded and their supported databases.
:::caution
`aurora-postgresql` database engine has no minor auto update candidates; therefore, it does not auto update on minor versions. (See [Slack Explanation](https://cloudposse.slack.com/archives/C018WN7NC1W/p1646674264252789))
:::
### RDS Multi-AZ
Lastly, a regular RDS instance can be deployed in a Multi-AZ configuration. A standby instance allows for Multi-AZ redundancy, and [read-replicas](https://aws.amazon.com/rds/features/read-replicas/) can be used to reduce the IO load on the primary RDS instance.
This is a more cost-effective option when compared to the Amazon Aurora offerings, but it is also not as scalable.
For more information, see: [High availability (Multi-AZ) for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html).
### Tenancy: Shared or Dedicated
Once RDS is deployed, services can either use each database in a shared or dedicated tenancy model.
1. In the shared model, multiple application databases are provisioned on one database instance (or cluster):
- This is the most economical option and achieves greater economies of scale.
- The downside is that one cannot automatically restore an individual database, making recoveries from human error
slower and more manual.
2. In the dedicated model, one application database is provisioned in each database instance (or cluster):
- This creates the greatest level of isolation.
- Each database has its own automated backup and can be restored as a point-in-time-recovery (PITR) backup ([except for Amazon Aurora](https://aws.amazon.com/blogs/storage/point-in-time-recovery-and-continuous-backup-for-amazon-rds-with-aws-backup/)).
- This is the least economical option.
## Consequences
A component for an Amazon Aurora RDS cluster will be created and provisioned in each VPC as needed.
---
## Decide on S3 Bucket Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Your organization depends on many S3 buckets, but they all have different purposes, with different requirements. Some are public for serving assets, while others are storing sensitive uploads for customer data. You want to reduce the friction for developers to create new buckets while ensuring some uniformity and standardized policies for how buckets are created. Buckets will be created in multiple stages and seldom if ever shared across stages. Bucket names in AWS S3 are globally unique, so we’ll need to have a convention to name them.
## Considerations
:::info
We’ll use [Terraform](/resources/legacy/fundamentals/terraform) to generate bucket names, so a short name for each bucket is all that is required. If you don’t yet know what buckets you will need, then we can provision some dummy buckets as examples.
:::
- Short name to describe the bucket (without stage or account)
- Define the archetypes/classes of buckets used. The most common types we see are:
- Static configurations (e.g. downloaded by mobile clients or EC2 instances)
- Static assets (e.g. images, videos, thumbnails, uploads)
- Websites, SPAs, Cloudfront origins
- Log buckets (e.g. ALB access logs, Cloudtrail Logs, VPC Flow Logs, etc)
- Artifact buckets (E.g. for CI/CD, binary executables)
- SFTP/upload buckets
- Define the lifecycle requirements of objects
- Public/private ACLs
- Encryption at Rest
- Access logs? aggregation of access logs to a centralized location across all accounts?
- Cloudfront integration
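Since bucket names are globally unique, the generated name typically concatenates the label components with the short name. A hypothetical sketch of such a convention (the exact component order and delimiter are whatever the label configuration dictates):

```python
def bucket_name(namespace: str, environment: str, stage: str, name: str) -> str:
    """Compose a globally-unique bucket name from label components,
    in the style of terraform-null-label. Component order is an assumption."""
    label = "-".join([namespace, environment, stage, name])
    # S3 bucket names must be lowercase and at most 63 characters.
    assert label == label.lower() and len(label) <= 63, "invalid S3 bucket name"
    return label

print(bucket_name("acme", "ue1", "prod", "alb-access-logs"))  # acme-ue1-prod-alb-access-logs
```

With this convention in place, only the short name (e.g. `alb-access-logs`) needs to be decided per bucket; the stage and account context come from the stack configuration.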
## Consequences
- Catalog configurations will be created for each bucket archetype.
- Buckets will be provisioned using the [s3-bucket](/components/library/aws/s3-bucket/) component
## See Also
[Decide on Terraform State Backend Architecture](/layers/accounts/design-decisions/decide-on-terraform-state-backend-architecture)
---
## Decide on SFTP Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem **DRAFT**
## Context
## Considered Options
- Will SFTP access the existing buckets? or do we need to create a new bucket?
- Do you need different users with different levels of access?
- Will users need to be restricted to different paths of the SFTP file system?
- Will the service need to be able to accept multiple logins?
- Will the service be public (0.0.0.0/0) or only available to certain CIDRs?
- What are the logging requirements?
- What are the object lifecycle requirements?
- What type of authentication is needed? e.g. Basic certificate-based or SAML?
## References
- [https://aws.amazon.com/blogs/storage/using-okta-as-an-identity-provider-with-aws-transfer-for-sftp/](https://aws.amazon.com/blogs/storage/using-okta-as-an-identity-provider-with-aws-transfer-for-sftp/)
- [https://github.com/cloudposse/terraform-aws-transfer-sftp](https://github.com/cloudposse/terraform-aws-transfer-sftp)
---
## Decide on the backup AWS region for Aurora Global Cluster
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Considerations
- Global Aurora Postgres cluster requires at least two regions in which `aws_rds_cluster` are created
- An Aurora global database consists of one primary AWS Region where your data is mastered, and up to five read-only, secondary AWS Regions. Aurora replicates data to the secondary AWS Regions with a typical latency of under a second. You issue write operations directly to the primary DB instance in the primary AWS Region
- Related to this: decide on VPC CIDRs for the main and backup regions
:::info
Consequences of deploying RDS Aurora Global Cluster include:
- Provisioning additional VPC and Aurora clusters in the backup region
- Setting up peering via the transit gateway because write operations go directly to the primary DB instance in the primary AWS Region
:::
### Related
- [Decide on RDS Technology and Architecture](/layers/data/design-decisions/decide-on-rds-technology-and-architecture)
---
## Decide Whether to Use RDS IAM Authentication
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Multiple ways exist to authenticate with the database. Static credentials grow stale and are too easily hardcoded in places making rotation difficult and seldom performed. Generally, short-lived credentials to access only the resources you need to do your job (granting _least privilege_) is preferred.
## Context
RDS supports IAM authentication, which means IAM credentials are used to obtain short-lived credentials to access the RDS database. Leveraging RDS IAM Authentication in applications requires application changes to leverage the AWS SDK ([Java Example](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Java.html)).
:::caution
RDS IAM authentication is not recommended for applications due to a maximum of 200 new connections per second, and is therefore only advisable for use by human operators.
:::
For applications, the AWS recommended method is using AWS Secrets Manager (as opposed to RDS IAM Authentication) which also has the built-in capability to rotate credentials.
Despite these best practices, we primarily provision static credentials randomly generated by terraform using the database provider and then write them to SSM and encrypt with KMS. See [Use SSM over ASM for Infrastructure](/resources/adrs/adopted/use-ssm-over-asm-for-infrastructure) for more context.
## Consequences
If we choose to enable RDS IAM Authentication, it is a simple feature flag in our `rds` component, making this an easily reversible decision.
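If enabled, the toggle might look like the following in the stack configuration. The variable name shown is the one exposed by the Cloud Posse `rds` component; verify it against your component version's inputs:

```yaml
components:
  terraform:
    rds:
      vars:
        # Reversible: flip back to false to disable IAM auth
        iam_database_authentication_enabled: true
```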
## References
- [https://www.ibexlabs.com/iam-database-authentication-for-amazon-rds-in-mysql/](https://www.ibexlabs.com/iam-database-authentication-for-amazon-rds-in-mysql/)
- [https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Java.html](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Java.html)
- [Use SSM over ASM for Infrastructure](/resources/adrs/adopted/use-ssm-over-asm-for-infrastructure)
---
## Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions for the data layer, including which services
you will rely on and their configurations.
---
## (TODO) Decide on RDS Aurora Serverless Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
## Considered Options
## References
---
## (TODO) Decide on RDS Instance Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem **DRAFT**
## Context
[Decide on RDS Technology and Architecture](/layers/data/design-decisions/decide-on-rds-technology-and-architecture)
## Considered Options
:::caution
RDS Global Databases are only compatible with RDS Aurora.
:::
## References
- [https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-right-sizing/tips-for-right-sizing-your-workloads.html](https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-right-sizing/tips-for-right-sizing-your-workloads.html)
---
## Setup Databases
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Note from '@site/src/components/Note'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
## Quick Start
| Steps | Example |
| :------------------------ | :---------------------------------- |
| 1. Vendor data components | `atmos workflow vendor -f quickstart/app/data` |
| 2. Connect to the VPN | Click Ops |
| 3. Deploy clusters | `atmos workflow deploy/all -f quickstart/app/data` |
## Requirements
In order to deploy Data layer components, Networking must be fully deployed and functional. See [the network documentation](/layers/network) for details.
All deployment steps below assume that the environment has been successfully set up with the following steps.
1. Sign into AWS via [Atmos Auth](/layers/identity/how-to-log-into-aws/)
2. Connect to the VPN
3. Open Geodesic
## Supported databases
At the moment we have support for:
- [Aurora PostgreSQL](/components/library/aws/aurora-postgres/)
- [Aurora PostgreSQL Resources](/components/library/aws/aurora-postgres-resources/)
- [Aurora MySQL](/components/library/aws/aurora-mysql/)
- [Aurora MySQL Resources](/components/library/aws/aurora-mysql-resources/)
- [AWS Backup](/components/library/aws/aws-backup/)
- [DocumentDB](/components/library/aws/documentdb/)
- [DynamoDB](/components/library/aws/dynamodb/)
- [Elasticsearch Cluster](/components/library/aws/elasticsearch/)
- [RDS](/components/library/aws/rds/)
- [RedShift](/components/library/aws/redshift/)
- [ElastiCache Redis](/components/library/aws/elasticache-redis/)
### Vendor
Vendor all data components with the following workflow:
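```bash
atmos workflow vendor -f quickstart/app/data
```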
These run several vendor commands for each included component. You can always run these commands individually to update
any single component. For example:
We're using `aurora-postgres` for this example. Your database selection may differ.
```bash
atmos vendor pull -c aurora-postgres
```
### Deploy
To deploy a database, deploy both the cluster component and the resources component (if applicable). Applying changes to
the resources component requires a VPN connection. For example,
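```bash
# Stack and component names are illustrative — substitute your own
atmos terraform apply aurora-postgres -s plat-usw2-dev
atmos terraform apply aurora-postgres-resources -s plat-usw2-dev
```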
---
## How to Enable Cross-Region Backups in AWS-Backup
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
## Problem
AWS Backup is a regional service that can back up many types of resources. It is often helpful to store copies of your backups in another region in case of a disaster.
## Solution
Create a backup vault and point to it via the `destination_vault_arn` variable!
Currently, this requires deploying the component into two different regions. The first is a normal aws-backup component. This includes a `plan`, a `vault`, and an `iam` role. The second aws-backup component should be deployed to the cross-region destination.
```yaml
# -.yaml
components:
  terraform:
    aws-backup:
      vars:
        # https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
        schedule: cron(0 0 * * ? *) # Daily at 12:00 AM UTC
        start_window: 60 # Minutes
        completion_window: 240 # Minutes
        cold_storage_after: null # Days
        delete_after: 14 # Days
        destination_vault_arn: null # Copy to another Region's Vault
        copy_action_cold_storage_after: null # Copy to another Region's Vault Cold Storage Config (Days)
        copy_action_delete_after: null # Copy to another Region's Vault Persistence Config (Days)
        backup_resources: []
        selection_tags:
          - type: "STRINGEQUALS"
            key: "aws-backup/resource_schedule"
            value: "dev-daily-14day-backup"
```
```yaml
# -.yaml
components:
  terraform:
    aws-backup:
      vars:
        plan_enabled: false
        iam_role_enabled: false
```
:::info
This will only create a **vault**!
:::
Create the cross-region backup vault first. Grab its ARN and set it as the value of `destination_vault_arn`. Apply the component, and you now have cross-region backups enabled.
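Once the destination vault exists, the primary-region configuration ends up looking something like this (the ARN below is a hypothetical example):

```yaml
components:
  terraform:
    aws-backup:
      vars:
        # ARN of the vault created by the second, cross-region deployment
        destination_vault_arn: "arn:aws:backup:us-east-1:111111111111:backup-vault:acme-backup-vault"
```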
---
## How to Migrate RDS Snapshots
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Note from '@site/src/components/Note'
## Context
In this document we will refer to the Legacy Organization as `legacy` and refer to the new Organization as `acme`.
At this point we have a populated database in the `legacy-prod` account that we want to migrate to the new organization.
This database is encrypted with the Amazon Managed KMS key, `aws/rds`. We have already deployed an empty database to the
new account, `acme-prod`. Now we want to migrate all data from the old to the new.
### Additional considerations
#### Notes from AWS
> You can't share a snapshot that has been encrypted using the default KMS key of the AWS account that shared the
> snapshot. To work around the default KMS key issue, perform the following tasks:
>
> 1. Create a customer managed key and give access to it.
> 2. Copy and share the snapshot from the source account.
> 3. Copy the shared snapshot in the target account.
>
> [reference](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html#share-encrypted-snapshot)
#### What does this mean
1. We've encrypted the source RDS instance using the AWS managed KMS key for RDS, `aws/rds`
2. We cannot modify the AWS managed KMS key, so we must copy the snapshot using a customer managed KMS key that we
create
3. Then we can allow the destination account permission to access our new KMS key, which the destination account will
use to access the copied snapshot
4. Restoring a DB instance from a cross account storage encrypted snapshot is not supported, so we must again copy the
shared snapshot in the destination account
## Requirements
- The developer needs to have `terraform` access in the new Organization for applying Terraform
- The developer needs to have `admin` access in target account for the new Organization to read AWS SSM Parameters and
KMS keys and create snapshots
- The developer needs to have Administrator access in the Legacy Organization source account to create snapshots and KMS
keys
- We have a populated RDS database cluster in the Legacy Organization source account. This is the source of our data
  for the migration.
- We have an empty RDS database cluster in the new Organization target account. This is the destination of our data
  for the migration. We will recreate the DB instance with the new snapshot copy. _Any data in this database will be lost!_
## Steps
:::info Example Region
The remainder of this document assumes both the source and destination regions are `us-west-2`. We will use the
environment abbreviation, `usw2`.
:::
Connect to _both_ `acme-identity` and `legacy-prod` in Leapp
### Connect to `acme-identity`
:::info AWS Team to Team Roles Permission
You must have access to assume the `acme-plat-gbl-prod-admin` role via your `acme-identity` profile.
:::
This is the normal AWS profile we use to connect to Leapp. Follow the steps in
[How To Log into AWS](/layers/identity/how-to-log-into-aws).
When opening Geodesic, you should see the following with a "green" checkmark:
```console
⧉ acme
√ . [acme-identity] (HOST) infrastructure ⨠
```
### Connect to `legacy-prod`
1. Open Leapp
2. Create new Integration for the Legacy AWS Organization
```yaml
Type: AWS Single Sign-On
Alias: Legacy
Portal URL: https://.awsapps.com/start/
Region: us-west-2
Auth. method: In-browser # optional
```
3. Log into the new Integration and accept the pop up windows
4. Find the `legacy-prod` - `AWSAdministratorAccess` Session or whichever Administrator Permission Set you have access to assume.
5. Select the "dots" on the right and click "Change" > "Named Profile"
6. Enter `legacy-prod-admin`
7. Start the session. You should see `legacy-prod-admin` under "Named Profile"
8. Open Geodesic shell
9. Assume the new profile:
```bash
assume-role legacy-prod-admin
```
You should now be connected to the Legacy Prod account:
```bash
✗ . [none] (HOST) infrastructure ⨠ assume-role legacy-prod-admin
* Found SSH agent config
⧉ acme
√ : [legacy-prod-admin] (HOST) infrastructure ⨠ aws sts get-caller-identity
{
"UserId": "AROXXXXXXXXXXXXXXXXXX:daniel@cloudposse.com",
"Account": "111111111111",
"Arn": "arn:aws:sts::111111111111:assumed-role/AWSReservedSSO_AWSAdministratorAccess_40xxxxxxxxxxxxxx/daniel@cloudposse.com"
}
```
### Apply Initial Terraform
Before we execute the migration script, we need to apply the new `rds` component in the destination or `acme-prod`
account. Applying this component will create the customer managed KMS key that we will need to copy the final legacy
snapshot into the new account.
We should already have the `rds` component configured for `acme-plat-usw2-prod`.
This may be configured in the following file: `stacks/orgs/acme/plat/prod/us-west-2/data.yaml`. But please adapt this to
your infrastructure file structure.
Apply the `rds` component to create the KMS key now, if not already applied:
```bash
atmos terraform apply rds -s plat-usw2-prod
```
Once completed, Terraform will return all outputs. Find the output named `kms_key_alias`. For example:
```bash
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
...
kms_key_alias = "alias/acme-plat-usw2-prod-rds"
...
```
### Prepare the migration script
The migration script uses `gum`, a CLI tool for making bash scripts friendlier. To install it, add the following to your local Geodesic infrastructure Dockerfile. To read more about `gum`, see [charmbracelet/gum](https://github.com/charmbracelet/gum).
```dockerfile
# Install gum - a CLI tool for making bash scripts "pretty"
# https://github.com/charmbracelet/gum
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
RUN echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
RUN apt-get update && apt-get install -y --allow-downgrades \
gum
```
In this guide, we include the `rds-snapshot-migration` script under `rootfs/usr/local/bin/` so that it will be included
with your infrastructure Geodesic build. Create a new file called `rootfs/usr/local/bin/rds-snapshot-migration` or
choose another path to store this script. Copy and paste the following into that file.
:::warning Always Review Before Executing
This bash script is a point-in-time copy. Please always review any code you execute, especially against production
environments.
:::
#### `rds-snapshot-migration`
The migration script has hard-coded values for our organization. Open the script, add the KMS key alias from the
previous step, and verify the rest of the values are what you'd expect. For example:
```bash
# Organization specific values
legacy_account_id="111111111111" # legacy-prod
legacy_rds_instance_id="legacy-prod"
legacy_profile="legacy-prod-admin"
legacy_region="us-west-2"
acme_account_id="222222222222" # acme-prod
acme_profile="acme-plat-gbl-prod-admin"
acme_region="us-west-2"
acme_kms_key_alias="alias/acme-plat-usw2-prod-rds"
```
Now copy and paste the following:
```bash
#!/bin/bash
set -e -o pipefail
# Organization specific values
legacy_account_id="111111111111" # legacy-prod
legacy_rds_instance_id="legacy-prod"
legacy_profile="legacy-prod-admin"
legacy_region="us-west-2"
acme_account_id="222222222222" # acme-prod
acme_profile="acme-plat-gbl-prod-admin"
acme_region="us-west-2"
acme_kms_key_alias="alias/acme-plat-usw2-prod-rds"
# This path needs to exist. In Geodesic this is created by default to be your infrastructure directory
infrastructure_dir_path="$GEODESIC_WORKDIR"
# This is the path to the KMS Key policy template. Needs to exist before script runs
additional_key_policy="$infrastructure_dir_path/docs/rds-migration/kms_key_policy_addition.json"
# This is the path to where we will save the completed KMS key policy. Does not need to exist before script runs
updated_key_policy="$infrastructure_dir_path/docs/rds-migration/generated_key_policy.json"
function gum_log {
gum log --time rfc822 --structured --level info "$1"
}
function gum_exit {
gum log --time rfc822 --structured --level error "Something went wrong..."
printf "\n%s\n\n" "$1"
exit 1
}
function convert_seconds {
local total_seconds=$1
local hours=$((total_seconds / 3600))
local minutes=$(( (total_seconds % 3600) / 60))
local seconds=$((total_seconds % 60))
echo "$hours hours, $minutes minutes, $seconds seconds"
}
function assume_role {
profile=$1
region=$2
export AWS_PROFILE=$profile
export AWS_DEFAULT_REGION=$region
gum spin --spinner dot \
--title "Checking AWS session: $profile" -- \
aws sts get-caller-identity --output json \
|| { gum log --time rfc822 --structured --level error \
"Failed to retrieve AWS session name. Please ensure your AWS CLI is configured correctly."; exit 1; }
}
# The default wait for snapshot may not be enough in every case we've tried
wait_for_snapshot() {
set +e # continue on errors
while true; do
gum spin --spinner dot --show-output \
--title "Waiting for database snapshot: $2 ..." -- \
aws rds wait db-snapshot-completed \
--db-instance-identifier "$1" \
--db-snapshot-identifier "$2"
if [ $? -eq 0 ]; then
set -e # resume failure on errors
gum_log "Snapshot completed successfully! Snapshot ID: $2"
return 0
else
gum_log "Failed to wait for snapshot. Retrying..."
fi
done
}
#########################################################################################################
#
# Welcome Messages
#
#########################################################################################################
start_time=$(date +%s)
gum_log "Database migration started..."
gum style \
--foreground 212 --border-foreground 212 --border double \
--align center --width 50 --margin "1 2" --padding "2 4" \
"Welcome to the RDS Snapshot Migration helper!"
gum style \
--foreground 31 \
--margin "1 2" \
"Legacy RDS Instance: ${legacy_rds_instance_id}" \
"..." \
"Legacy Account ID: ${legacy_account_id}" \
"Legacy Account Profile: ${legacy_profile}" \
"Legacy Account Region: ${legacy_region}" \
"..." \
"Destination Account ID: ${acme_account_id}" \
"Destination Account Profile: ${acme_profile}" \
"Destination Account Region: ${acme_region}"
printf "Are you ready to start the datastore migration using these values?\n"
response=$(gum choose "yes" "no")
if [[ "$response" != "yes" ]]; then
gum log --time rfc822 --structured --level debug "Exiting..."
exit 0
fi
#########################################################################################################
#
# Begin Migration
#
#########################################################################################################
assume_role $legacy_profile $legacy_region
#########################################################################################################
#
# Create or Fetch KMS Key and update the key policy
#
#########################################################################################################
kms_key_alias="alias/legacy-mockprod-rds-kms-$(date '+%Y%m')"
alias_exists=$(gum spin --show-output --spinner dot \
--title "Detecting if KMS key already exists..." -- \
aws kms list-aliases \
--query "Aliases[?AliasName=='$kms_key_alias'] | length(@)" --output text) || gum_log "$alias_exists"
if [[ "$alias_exists" -eq "0" ]]; then
gum_log "KMS doesn't exist. Creating..."
kms_key_id=$(gum spin --show-output --spinner dot --title "Creating KMS key..." -- \
aws kms create-key --query KeyMetadata.KeyId --output text) && gum_log "$kms_key_id"
gum spin --show-output --spinner dot \
--title "Creating KMS key alias..." -- \
aws kms create-alias \
--target-key-id $kms_key_id \
--alias-name $kms_key_alias
else
gum_log "KMS exists. Fetching..."
kms_key_id=$(gum spin --show-output --spinner dot --title "Retrieving KMS key ID..." -- \
aws kms describe-key --key-id $kms_key_alias --query 'KeyMetadata.KeyId' --output text) || gum_exit "$kms_key_id"
fi
kms_key_arn=$(gum spin --show-output --spinner dot --title "Retrieving KMS key ARN..." -- \
aws kms describe-key \
--key-id $kms_key_id \
--query 'KeyMetadata.Arn' --output text)
gum_log "Key Alias: $kms_key_alias"
gum_log "Key ID: $kms_key_id"
gum_log "Key ARN: $kms_key_arn"
gum_log "Updating KMS key policy..."
gum spin --spinner dot --show-output \
--title "Creating KMS key policy..." -- \
sed \
-e "s/THIS_ACCOUNT_ID/${legacy_account_id}/" \
-e "s/ALLOWED_ACCOUNT_ID/${acme_account_id}/" \
$additional_key_policy > $updated_key_policy
# Commented out to reduce noise. Uncomment if you need to check the KMS Key Policy
# gum_log "Key Policy:"
# cat $updated_key_policy | jq
gum spin --show-output --spinner dot \
--title "Updating KMS key policy..." -- \
aws kms put-key-policy \
--key-id $kms_key_id \
--policy file://$updated_key_policy
#########################################################################################################
#
# Create a RDS snapshot that we can share with the destination
#
#########################################################################################################
timestamp="$(date '+%Y%m%d%H%M%S')" # Use timestamp to create a useful identifier for the RDS snapshots
legacy_rds_snapshot_source_id="$legacy_rds_instance_id-snapshot-$timestamp"
legacy_rds_snapshot_share_id="$legacy_rds_instance_id-snapshot-share-$timestamp"
gum_log "Creating snapshot of existing RDS instance, encrypted with default KMS key..."
gum spin --spinner dot --show-output \
--title "Creating database snapshot: $legacy_rds_snapshot_source_id ..." -- \
aws rds create-db-snapshot \
--db-instance-identifier $legacy_rds_instance_id \
--db-snapshot-identifier $legacy_rds_snapshot_source_id
gum_log "Creating the initial snapshot typically takes around 3 minutes"
wait_for_snapshot "$legacy_rds_instance_id" "$legacy_rds_snapshot_source_id"
gum_log "Copying snapshot to share with new Organization, encrypted with customer managed KMS key..."
gum spin --spinner dot --show-output \
--title "Copying database snapshot: $legacy_rds_snapshot_source_id > $legacy_rds_snapshot_share_id ..." -- \
aws rds copy-db-snapshot \
--source-db-snapshot-identifier $legacy_rds_snapshot_source_id \
--target-db-snapshot-identifier $legacy_rds_snapshot_share_id \
--kms-key-id $kms_key_id \
--copy-tags
gum_log "Copying and reencrypting snapshots can take more than 20 minutes! Although sometimes it can be much quicker."
gum_log "Check the status in the Legacy AWS Console: https://d-926767ca79.awsapps.com/"
wait_for_snapshot "$legacy_rds_instance_id" "$legacy_rds_snapshot_share_id"
#########################################################################################################
#
# Share the snapshot with the destination
#
#########################################################################################################
# We need the snapshot ARN later; this also checks that the snapshot actually exists
legacy_rds_snapshot_share_arn=$(gum spin --show-output --spinner dot --title "Retrieving snapshot ARN..." -- \
aws rds describe-db-snapshots \
--db-snapshot-identifier $legacy_rds_snapshot_share_id \
--query 'DBSnapshots[*].DBSnapshotArn' --output text) || gum_exit "$legacy_rds_snapshot_share_arn"
gum spin --spinner dot --show-output \
--title "Allowing target account to restore this snapshot..." -- \
aws rds modify-db-snapshot-attribute \
--db-snapshot-identifier $legacy_rds_snapshot_share_id \
--attribute-name "restore" \
--values-to-add $acme_account_id
#########################################################################################################
#
# Copy the shared snapshot to the destination account so that we can restore RDS from that snapshot
#
#########################################################################################################
assume_role $acme_profile $acme_region
gum_log "Fetching KMS key in destination account..."
acme_kms_key_id=$(gum spin --show-output --spinner dot \
--title "Retrieving KMS key ID..." -- \
aws kms list-aliases --query "Aliases[?AliasName=='$acme_kms_key_alias'].TargetKeyId" --output text) || gum_exit "$acme_kms_key_id"
gum_log "Copying customer-managed KMS key encrypted snapshot into the destination account..."
acme_rds_snapshot_id="${legacy_rds_instance_id}-snapshot-${timestamp}"
gum spin --spinner dot --show-output \
--title "Copying database snapshot: $legacy_rds_snapshot_share_arn > $acme_rds_snapshot_id ..." -- \
aws rds copy-db-snapshot \
--source-db-snapshot-identifier $legacy_rds_snapshot_share_arn \
--target-db-snapshot-identifier $acme_rds_snapshot_id \
--source-region $legacy_region \
--kms-key-id $acme_kms_key_id
gum_log "Copying and reencrypting snapshots can take more than 20 minutes! Although sometimes it can be much quicker."
gum_log "Check the status in the acme AWS Console: https://d-92674d8c2c.awsapps.com/"
wait_for_snapshot "$legacy_rds_instance_id" "$acme_rds_snapshot_id"
gum_log "Legacy aws/kms Encrypted Snapshot ID: $legacy_rds_snapshot_source_id"
gum_log "Legacy Customer-Managed KMS Key Encrypted Snapshot ID: $legacy_rds_snapshot_share_id"
gum_log "acme Snapshot ID: $acme_rds_snapshot_id"
#########################################################################################################
#
# Done! Print some helpful closing messages with the total execution time
#
#########################################################################################################
end_time=$(date +%s)
duration_in_seconds=$((end_time - start_time))
formatted_time=$(convert_seconds $duration_in_seconds)
gum_log "The script took: $formatted_time"
printf "\n\nNow restore the database in the destination account using the snapshot ID\n\n%s\n\n" $acme_rds_snapshot_id
```
This script refers to a locally stored KMS key policy JSON that we will use to allow cross account access. Create this
file locally and set `additional_key_policy` to that JSON file path. The new JSON will be saved to the
`updated_key_policy` variable file path. Update this value as well to a path that makes sense for your file structure.
#### KMS Key Policy
The `THIS_ACCOUNT_ID` and `ALLOWED_ACCOUNT_ID` strings will be replaced by `sed` using the `legacy_account_id` and
`acme_account_id` values respectively.
```json
{
  "Version": "2012-10-17",
  "Id": "AllowCrossAccountKMS",
  "Statement": [
    {
      "Sid": "AllowThisAccountUse",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::THIS_ACCOUNT_ID:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "AllowCrossAccountUse",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ALLOWED_ACCOUNT_ID:root"
      },
      "Action": [
        "kms:CreateGrant",
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKey*",
        "kms:ListGrants",
        "kms:ReEncryptFrom",
        "kms:ReEncryptTo",
        "kms:RevokeGrant"
      ],
      "Resource": "*"
    }
  ]
}
```
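The `sed` rendering can be sanity-checked locally before running the full script. The account IDs and one-line template below are stand-ins for the real values and policy file:

```shell
# Hypothetical stand-ins for legacy_account_id / acme_account_id
legacy_account_id="111111111111"
acme_account_id="222222222222"

# A one-line stand-in for the key policy template
template='{"this":"arn:aws:iam::THIS_ACCOUNT_ID:root","allowed":"arn:aws:iam::ALLOWED_ACCOUNT_ID:root"}'

# Same sed invocation the script uses against the real template file
rendered=$(printf '%s' "$template" \
  | sed -e "s/THIS_ACCOUNT_ID/${legacy_account_id}/" \
        -e "s/ALLOWED_ACCOUNT_ID/${acme_account_id}/")

echo "$rendered"
# → {"this":"arn:aws:iam::111111111111:root","allowed":"arn:aws:iam::222222222222:root"}
```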
Now rebuild Geodesic in order to use the latest image and `rootfs` scripts:
```bash
make all
```
### Execute the migration script
Now inside the Geodesic image, we should be authenticated with `acme-identity`.
Enter the following command to execute the migration script:
```bash
rds-snapshot-migration
```
Your new snapshot name will be output at the end of the script:
```bash
Now restore the database in the destination account using the snapshot ID
legacy-prod-snapshot-20240517234933
```
### Verify the results
Your new snapshot should now exist in the target account, `acme-prod`. Optionally, run the following to verify:
```bash
AWS_PROFILE=acme-plat-gbl-prod-admin aws rds describe-db-snapshots --db-snapshot-identifier legacy-prod-snapshot-20240517234933
```
### Apply Terraform
Add the snapshot ID to your `rds` component configuration in the `acme-prod` account.
We also need to allow the `core-usw2-network` private subnets (where the VPN is deployed) through the RDS instance's
security group, so that we can connect locally and validate the instance.
Add this unique snapshot identifier only for `acme-plat-usw2-prod`. For example, your `rds` component may be
configured in the following file path (adapt this to your infrastructure's file structure):
`stacks/orgs/acme/plat/prod/us-west-2/data.yaml`
```yaml
components:
  terraform:
    rds:
      vars:
        # This is the resulting snapshot from the rds-snapshot-migration script
        snapshot_identifier: legacy-prod-snapshot-20240517234933
        # Optionally allow the VPN through the DB's security group
        allowed_cidr_blocks:
          - 10.89.16.0/20 # VPN CIDR
        # The rest of the configuration is irrelevant to this guide
        # ...
```
And then apply the component, replacing the old DB instance with `-replace`:
```bash
atmos terraform apply rds -replace="module.rds_instance.aws_db_instance.default[0]" -s plat-usw2-prod
```
:::tip Replace the Database!
A database will only use a snapshot on creation! If you have already created an empty RDS database in the destination
account, you must trigger recreation of the database instance.
:::
Terraform can take 30 to 40 minutes to apply.
### Setting Database Name and Admin Credentials
:::caution Database Name and Admin Credentials
When restoring RDS from a snapshot, the instance uses the same database name, admin username, and admin password as the
original RDS instance. Make sure to set these values to match, or the next `terraform apply` will destroy and recreate
the RDS instance!
:::
Add the `database_name` and `database_user` using the same values from the original `legacy-prod` database. Changing
either of these values requires database recreation!
```yaml
database_name: foobar
database_user: admin
```
However, you can leave `database_password` unset. If unset, Terraform will create a new password and store it in AWS
SSM. This _does not_ require database recreation.
### Validate the new RDS instance
To validate the new RDS instance in `acme-plat-usw2-prod` connect to the VPN, connect to the instance, and list the
tables.
1. Connect to the EC2 Client VPN
2. Open Geodesic
3. Get the `psql` command from the output of the `rds` component applied above. Take note of this for later
```bash
# As a generic RDS component
atmos terraform output rds -s plat-usw2-prod
```
4. Assume a role that has access to the DB password stored in AWS SSM. For example, we can assume the `admin` role
```bash
assume-role acme-plat-gbl-prod-admin
```
5. Connect to the RDS instance with `psql` using the command we copied from Terraform output above. For example, this
would be similar to the following:
```bash
PGPASSWORD=$(chamber read app/rds/admin db_password -q) psql --host=acme-plat-usw2-prod-rds.xxxxxxxxxxxx.us-west-2.rds.amazonaws.com --port=5432 --username=admin --dbname=foobar
```
6. List tables to verify they've been copied over
```bash
\dt
```
And that's it!
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These are some additional tutorials that will help you along with the associated data layer components.
---
## Deploying the ECS Platform
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
## Quick Start
| Steps | Example |
| :---------------------- | :-------------------------------------- |
| Vendor ECS components | `atmos workflow vendor -f quickstart/platform/ecs` |
| Deploy ACM certificates | `atmos workflow deploy/ecs-acm -f quickstart/platform/ecs` |
| Connect to the VPN | Click Ops |
| Deploy Clusters | `atmos workflow deploy/clusters -f quickstart/platform/ecs` |
## Requirements
In order to deploy ECS, Networking must be fully deployed and functional. In particular, the user deploying the cluster must have a working VPN connection to the targeted account. See [the network documentation](/layers/network/) for details.
All deployment steps below assume that the environment has been successfully set up with the following steps.
1. Sign into AWS via [Atmos Auth](/layers/identity/how-to-log-into-aws/)
1. Connect to the VPN
1. Open Geodesic
# Steps
## Vendor Components
Vendor these components with the included Atmos Workflows.
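```bash
atmos workflow vendor -f quickstart/platform/ecs
```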
## Deploy ECS
ECS provisioning includes deploying certificate requirements, the default ECS cluster, and Echo Server. Echo Server is a basic service used to validate a successful cluster deployed and is an example of an ECS service. Find ECS Service definitions under `catalog/stacks/ecs-services`.
To provision each cluster, these components need to be deployed in order. The included Atmos Workflows will carry out this deployment in the proper order, but any of these steps can be run outside of a workflow if desired.
See the ecs workflow (`stacks/workflows/ecs.yaml`) for each individual deployment step.
## Deploy ACM Certificates
First deploy all required ACM certificates for each ECS cluster. These certificates validate the given service domain. You can deploy these certificates before associating the given Route 53 Hosted Zone with the purchased domain in your chosen Domain Registrar, but the certificate will not be ISSUED until the registered domain and Hosted Zone are connected.
Run the following to deploy every required ACM certificate for ECS.
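```bash
atmos workflow deploy/ecs-acm -f quickstart/platform/ecs
```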
## Connect to the VPN
In order to complete the following steps, connect to the VPN now. For more on connecting to the VPN, see
[`ec2-client-vpn`](https://docs.cloudposse.com/components/library/aws/ec2-client-vpn/#testing).
The OVPN configuration for your VPN can be found in the output of the `ec2-client-vpn` component. For example,
```bash
atmos terraform output ec2-client-vpn -s core-use1-network
```
## Deploy All Clusters
Run the following to deploy every ECS cluster. This workflow will deploy every required platform cluster.
# Related Topics
- [ECS Component](/components/library/aws/ecs/)
- [ECS Services Component](/components/library/aws/ecs-service/)
---
## Decide on ECS load balancer requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Considerations
- Do we need to support internal and external load balancers?
- Can we share the load balancer across applications (our default recommendation)?
- Do you need to support protocols other than HTTP such as raw TCP or UDP?
---
## Decide on the Application Service Log Destination for ECS
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
## Context and Problem Statement
Should logs be sent to Datadog, or only to CloudWatch Logs?
## Considered Options
### Option 1. Fluent Bit sidecar is required for ECS Fargate
> In addition to your application container, your task definition needs to specify a Fluent Bit sidecar container that’s
> responsible for routing logs to Datadog. AWS provides an `aws-for-fluent-bit` Docker image you can use to create the
> sidecar container.([source](https://www.datadoghq.com/blog/aws-fargate-monitoring-with-datadog/))
#### Pros
- Recommended by Datadog
- Works without much configuration
- Prior art for this
#### Cons
- Sidecar required
- Fargate CPU/memory requirements would go up per task
- More expensive, since Fargate pricing is based on CPU/memory
### Option 2. Datadog Lambda Forwarder
Apps log directly to CloudWatch log groups, and the Datadog Lambda forwarder then forwards the logs:
[https://github.com/DataDog/datadog-serverless-functions/tree/master/aws/logs_monitoring](https://github.com/DataDog/datadog-serverless-functions/tree/master/aws/logs_monitoring)
And we have a module for this
[https://github.com/cloudposse/terraform-aws-datadog-lambda-forwarder](https://github.com/cloudposse/terraform-aws-datadog-lambda-forwarder)
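As a rough sketch, wiring the forwarder up as a component might look like the following. The variable names here are assumptions based on the module's inputs and should be verified against the module's README; the log group is hypothetical:

```yaml
components:
  terraform:
    datadog-lambda-forwarder:
      vars:
        enabled: true
        # Assumed input names; confirm against the module documentation
        forwarder_log_enabled: true
        cloudwatch_forwarder_log_groups:
          echo-server:
            # Hypothetical log group; one entry per service log group
            name: "/ecs/platform/service/echo-server"
            filter_pattern: ""
```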
#### Pros
- Apps log to CloudWatch log groups
  - If logging to a single log group per account, we can use `filter_patterns` to filter the events
  - If logging to a log group per service, we can give it multiple CloudWatch log group names or deploy a separate Lambda per service
- The module supports providing your own Lambda in case you need to fork the official Datadog one
- Prior art for this
#### Cons
- Previously recommended by Datadog
- Need to log to cloudwatch logs before pushing to Datadog
- Maintaining upgrades for the lambdas
- Monitoring lambdas
### Option 3. Fluentd with CloudWatch Plugin
In the future, if AWS adds a log driver for `fluentd` without FireLens (as it has for `splunk`), we could forward to a Fluentd aggregator without a sidecar and without logging to CloudWatch.
We could create a single service in each ECS cluster to run Fluentd, and the CloudWatch plugin would push CloudWatch logs to Datadog.
[https://hub.docker.com/r/fluent/fluentd/](https://hub.docker.com/r/fluent/fluentd/)
[https://github.com/fluent-plugins-nursery/fluent-plugin-cloudwatch-logs#in_cloudwatch_logs](https://github.com/fluent-plugins-nursery/fluent-plugin-cloudwatch-logs#in_cloudwatch_logs)
#### Pros
- More control over parsing logs
- Can output to multiple SIEMs
#### Cons
- Same cons as option 1
- Maintaining a fluentd cluster
- Monitoring a fluentd cluster
- We don’t have prior art for this.
---
## Review Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions for ECS. These decisions relate to how you
will provision your Elastic Container Service clusters on AWS.
---
## ECS Foundational Platform
import ReactPlayer from "react-player";
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Elastic Container Service (ECS) is a fully-managed container orchestration service provided by Amazon Web Services (AWS) that simplifies the process of deploying, managing, and scaling containerized applications. ECS makes it easy to run and manage Docker containers on AWS infrastructure, providing a secure and scalable platform for your applications.
## The Problem
The emergence of Docker necessitated the development of container management solutions, with Kubernetes being one of the most widely adopted options. However, Kubernetes is often considered overly complex for smaller scale operations, akin to using a nuclear reactor to charge a phone. In such scenarios, Elastic Container Service (ECS) is a more practical choice for deploying applications with speed and efficiency.
With ECS, there is no need to upgrade the underlying platform. Unlike EKS, which requires regular upgrades to stay
current, ECS is a managed service that is always up to date. This means you can focus on your application and not the underlying platform.
## Our Solution
We have developed a set of Terraform modules that can be used to deploy ECS clusters and services.
### ECS Cluster Component
The `ecs` component is used to deploy an ECS cluster and an associated load balancer.
#### Application Load Balancer (ALB)
Through stack configuration you can determine your domains, subdomains, and the number of instances to deploy. The component also supports the deployment of a bastion host, which can be used to access containers on the ECS Cluster.
```yaml
alb_configuration:
public: # name of the ALB to be referred to by other configurations
internal_enabled: false # sets it to public
# resolves to *.public-platform.....
route53_record_name: "*.public-platform"
private:
internal_enabled: true
route53_record_name: "*.private-platform"
```
#### Autoscaling
The cluster component can scale with a variety of options. Fargate provides a serverless way of scaling, while Spot Instances are a cheaper way to run instances than on-demand EC2. You can mix these options to build a cost-effective and scalable solution.
```yaml
name: ecs
capacity_providers_fargate: true
capacity_providers_fargate_spot: true
capacity_providers_ec2:
default:
instance_type: t3.medium
max_size: 2
```
### ECS Service
The `ecs-service` component is used to deploy an ECS service. This includes the task and the service definition.
By default, we also support Datadog logging and metrics. This can be disabled by setting `datadog_agent_sidecar_enabled` to `false`.
```yaml
datadog_agent_sidecar_enabled: false
```
---
## FAQ (ECS)
import Intro from '@site/src/components/Intro';
Frequently asked questions about ECS with Cloud Posse's reference architecture.
## How do I add a new service to an existing ECS cluster?
Add a new instance of the `ecs-service` component to your stack configuration. The component will automatically detect the ECS cluster and add the service to it.
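For example, a minimal service definition in the stack catalog might look like the following sketch, modeled on the service examples elsewhere in this guide. The service name, image, and port are placeholders:

```yaml
components:
  terraform:
    ecs/platform/service/my-app:
      metadata:
        component: ecs-service
      vars:
        enabled: true
        name: my-app
        containers:
          service:
            name: my-app
            image: nginx:stable # placeholder image
            port_mappings:
              - containerPort: 80
                hostPort: 80
                protocol: tcp
```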
## How can I add AWS Policies to my ECS Tasks?
Use `task_policy_arns` to attach IAM policies to individual tasks, allowing those tasks to access AWS resources.
```yaml
task_policy_arns:
- arn:aws:iam::aws:policy/AmazonS3FullAccess
```
## How can I inject secrets into my ECS Service?
Use the `map_secrets` variable, which maps an environment variable name to an SSM Parameter Store key. The value stored at that key is injected into the environment variable.
```yaml
map_secrets:
SECRET_KEY: /my/secret/key
```
## How can we create Self Hosted Runners for GitHub with ECS?
We recommend [Runs On](/layers/github-actions/runs-on/) for self-hosted GitHub runners. It provides zero infrastructure management, simple setup via GitHub App, and cost-effective pay-per-use pricing without requiring Kubernetes.
For more on self-hosted GitHub Runners, see the [GitHub Actions layer](/layers/github-actions/).
---
## Provision Example Services on the ECS Platform
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
Provision an example service like the Echo server to test your cluster’s functionality. This will ensure you can access the service remotely via the load balancer, both publicly and privately through the VPN.
## Quick Start
| Steps | Example |
| :----------------- | :----------------------------------------- |
| Deploy Echo Server | `atmos workflow deploy/echo-server -f quickstart/platform/ecs` |
Once the cluster is up and running, continue with the first ECS service deployment. We deploy Echo Server as an example and to validate a given cluster. This deploys two ECS services: one private and one public. The private deployment should only be accessible by VPN.
## Deploy Echo Server
Run the following to deploy `ecs/platform/service/echo-server` and
`ecs/platform/service/echo-server-private` to every cluster.
## Verify the Deployment of Public Endpoints
In your browser, go to each of the following URLs. Make sure to use your service discovery domain and not the example.
- **plat-sandbox**: go to `https://echo-server.public-platform.use1.sandbox.plat.acme-svc.com/`
- **plat-dev**: go to `https://echo-server.public-platform.use1.dev.plat.acme-svc.com/`
- **plat-staging**: go to `https://echo-server.public-platform.use1.staging.plat.acme-svc.com/`
- **plat-prod**: go to `https://echo-server.public-platform.use1.prod.plat.acme-svc.com/`
## Verify the Deployment of Private Endpoints
Verify these are not publicly accessible. Each of the following should time out if not connected to the VPN.
- **plat-sandbox**: go to `https://echo-server.private-platform.use1.sandbox.plat.acme-svc.com/`
- **plat-dev**: go to `https://echo-server.private-platform.use1.dev.plat.acme-svc.com/`
- **plat-staging**: go to `https://echo-server.private-platform.use1.staging.plat.acme-svc.com/`
- **plat-prod**: go to `https://echo-server.private-platform.use1.prod.plat.acme-svc.com/`
## Test Private Endpoints using VPN
Connect to the VPN, then retry the previous step; the private endpoints should now load successfully.
---
## Deploy 1Password SCIM Bridge
import Intro from "@site/src/components/Intro";
import Steps from "@site/src/components/Steps";
import Step from "@site/src/components/Step";
import StepNumber from "@site/src/components/StepNumber";
import CollapsibleText from "@site/src/components/CollapsibleText";
The 1Password SCIM Bridge is a service that allows you to automate the management of users and groups in 1Password. This guide will walk you through deploying the SCIM Bridge for ECS environments.
## Implementation
The implementation of this is fairly simple. We will generate credentials for the SCIM bridge in 1Password, store them in AWS SSM Parameter Store, deploy the SCIM bridge ECS service, and then finally connect your chosen identity provider.
### Generate Credentials for your SCIM bridge in 1Password
The first step is to generate credentials for your SCIM bridge in 1Password. We will pass these credentials to Terraform and the ECS task definition to create the SCIM bridge.
1. Log in to your 1Password account
1. Click Integrations in the sidebar
1. Select "Set up user provisioning"
1. Choose "Custom"
1. You should now see the SCIM bridge credentials. We will need the "scimsession" and "Bearer Token" for the next steps.
1. Save these credentials in a secure location (such as 1Password) for future reference
1. Store only the "scimsession" in AWS SSM Parameter Store. This will allow the ECS task definition to access the credentials securely. Then once the service is running, the server will ask for the bearer token to verify the connection, which we will enter at that time.
- Open the AWS Web Console
- Navigate to the target account, such as `core-auto`, and the target region, such as `us-west-2`
- Open "AWS System Manager" > "Parameter Store"
- Create a new Secure String parameter using the credentials you generated in the previous step: `/1password/scim/scimsession`
There will be additional steps to complete the integration in 1Password, but first we need to deploy the SCIM bridge service.
### Deploy the SCIM bridge ECS Service
The next step is to deploy the SCIM bridge ECS service. We will use Terraform to create the necessary resources with our existing `ecs-service` component. Ensure you have the `ecs-service` component and `ecs` cluster before proceeding.
If you do not have ECS prerequisites, please see the [ECS layer](/layers/ecs) to create the necessary resources.
1. Create a new stack configuration for the SCIM bridge. The placement of this file will depend on your project structure. For example, you could create a new file such as `stacks/catalog/ecs-services/1password-scim-bridge.yaml` with the following content:
```yaml
import:
- catalog/terraform/services/defaults
components:
terraform:
1pass-scim:
metadata:
component: ecs-service
inherits:
- ecs-service/defaults
vars:
enabled: true
name: 1pass-scim
containers:
service:
name: op_scim_bridge
image: 1password/scim:v2.9.5
cpu: 128
memory: 512
essential: true
dependsOn:
- containerName: redis
condition: START
port_mappings:
- containerPort: 3002
hostPort: 3002
protocol: tcp
map_environment:
OP_REDIS_URL: redis://localhost:6379
OP_TLS_DOMAIN: ""
OP_CONFIRMATION_INTERVAL: "300"
map_secrets:
OP_SESSION: "1password/scim/scimsession"
log_configuration:
logDriver: awslogs
options: {}
redis:
name: redis
image: redis:latest
cpu: 128
memory: 512
essential: true
restart: always
port_mappings:
- containerPort: 6379
hostPort: 6379
protocol: tcp
map_environment:
REDIS_ARGS: "--maxmemory 256mb --maxmemory-policy volatile-lru"
log_configuration:
logDriver: awslogs
options: {}
```
2. Confirm the `map_secrets` value for `OP_SESSION` matches the AWS SSM Parameter Store path you created previously, and confirm it is in the same account and region as this ECS service component.
3. Deploy the ECS service with Atmos:
```bash
atmos terraform apply 1pass-scim -s core-usw2-auto
```
### Validate the Integration
After deploying the SCIM bridge ECS service, verify the service is running and accessible. Connect to the VPN (if the ECS service is deployed with a private ALB), navigate to the SCIM bridge URL, and confirm the service is running.
For example, go to `https://1pass-scim.platform.usw2.auto.core.acme-svc.com/`
### Connect your Identity Provider
Finally, connect your identity provider to the SCIM bridge. The SCIM bridge URL will be the URL you validated in the previous step. Follow the instructions in the 1Password SCIM Bridge documentation to connect your identity provider, using the Bearer Token you generated in the first step.
---
## Setup Vanity Domains on an ALB
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
Learn how to set up ECS Vanity Domains on an existing ALB together with Service Discovery Domains.
## Pre-requisites
- [Understand the differences between Vanity Domains and Service Discovery Domains](/resources/legacy/learning-resources/#the-difference-between-vanity-domains-and-service-discovery-domains)
- Assumes our standard [Network Architecture](/layers/network/)
- Requires that `dns-primary` and `dns-delegated` are already deployed.
## Context
After setting up your [Network Architecture](/layers/network) you will have 2 hosted zones in each platform account.
In `dev` for example, you will have Hosted Zones for `dev-acme.com` and `dev.platform.acme.com`.
You should also have an ACM certificate that registers `*.dev-acme.com` and `*.dev.platform.acme.com`.
Also ensure you've deployed applications to your ECS cluster and have two ALBs for service discovery, one public, one
private.
Now we want to set up a vanity subdomain for `echo-server.dev-acme.com` that will point to one of the ALBs used for
service discovery. This saves us money by not requiring a new ALB for each vanity domain.
## Implementation
The implementation of this is fairly simple. We add additional certificates to our ECS Cluster ALBs, then we add another
route to the ALB Listener Rules.
## Setup ACM Certs
By default, our `dns-primary` component will create ACM certs for `*.dev-acme.com`. Depending on the level of subdomains you want, you may need to disable this with the variable `request_acm_certificate: false`.
If a single subdomain is sufficient (e.g. `api.dev-acme.com`), you can leave this enabled.
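As a sketch, the override is a single variable on the `dns-primary` component in your stack configuration:

```yaml
components:
  terraform:
    dns-primary:
      vars:
        # Stop dns-primary from requesting the wildcard ACM certificate
        request_acm_certificate: false
```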
See the troubleshooting section if you run into issues with recreating resources.
## Understand How it Works
With a valid ACM cert for your domains we configure the ECS Cluster ALBs to use the certificate. This is done by adding
the certificate to the ECS Clusters stack configuration.
You can validate that your cert is picked up by the ALB by checking the ALB's HTTPS listener. You should see the certificate
listed under the `Certificates` tab.
## Adding the ACM TLS Cert to the ECS Cluster ALBs
Here is a snippet of a stack configuration for the ECS Cluster. Note the `additional_certs` variable which declares
which additional certs to add to the ALB.
```yaml
components:
terraform:
ecs/cluster:
vars:
alb_configuration:
public:
internal_enabled: false
# resolves to *.public-platform.....
route53_record_name: "*.public-platform"
additional_certs:
- "dev-acme.com"
private:
internal_enabled: true
route53_record_name: "*.private-platform"
additional_certs:
- "dev-acme.com"
```
## Point the Vanity Domain to the ALB
Now that our ECS Cluster ALB supports `*.dev-acme.com`, we need to update our service to point to this new domain
as well. We can use the following snippet:
```yaml
components:
terraform:
ecs/platform/service/echo-server:
vars:
vanity_domain: "dev-acme.com"
vanity_alias:
- "echo-server.dev-acme.com"
```
## Troubleshooting
The problem arises when you need to remove a subdomain or ACM certificate. By running
`atmos terraform deploy dns-delegated -s plat--dev` with `request_acm_certificate: false`, you are trying to
destroy a single ACM certificate in an account. While this is a small-scope deletion, the ACM certificate is in use by
the ALB, and the ALB has many different targets, so Terraform will stall out.
:::warning This is a **destructive** operation and will cause downtime for your applications.
:::
You need to:
1. Delete the listeners and targets of the ALB that are using the certificate
2. Delete the ALB
3. Terraform will then successfully delete the ACM certificate.
You will notice:
1. The ALB will be recreated
2. Ingresses should reconcile for service discovery domains
3. ALB Targets should be recreated pointing at service discovery domains.
Once you recreate the correct ACM certificates and have valid ingresses you should be able to access your applications
via the vanity domain.
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These are some additional tutorials that will help you along with the associated ECS components.
---
## Deploying the EKS Platform
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import Note from '@site/src/components/Note'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
## Quick Start
| Steps | Example |
| :---------------------------------- | :-------------------------------------------------------- |
| 1. Vendor EKS components | `atmos workflow vendor -f quickstart/platform/eks` |
| 2. Connect to the VPN | |
| 3. Deploy roles for each EKS stack | `atmos workflow deploy/iam-service-linked-roles -f quickstart/platform/eks` |
| 4. Deploy cluster into each stack | `atmos workflow deploy/cluster -s plat-use1-dev -f quickstart/platform/eks` |
| 5. Deploy resources into each stack | `atmos workflow deploy/resources -s plat-use1-dev -f quickstart/platform/eks` |
Repeat steps 4 and 5 for each EKS stack, typically `plat-dev`, `plat-staging`, and `plat-prod`
## Requirements
In order to deploy EKS, Networking must be fully deployed and functional. In particular, the user deploying the cluster
must have a working VPN connection to the targeted account. See [the network documentation](/layers/network) for details.
All deployment steps below assume that the environment has been successfully set up with the following steps.
1. Sign into AWS via [Atmos Auth](/layers/identity/how-to-log-into-aws/)
1. Connect to the VPN
1. Open Geodesic
# Steps
## Vendor Components
EKS adds many components required to set up a cluster. Generally, all these components are contained in the EKS
components and catalog folders, under `components/terraform/eks` and `catalog/stacks/eks` respectively.
Vendor these components with the included Atmos Workflows.
## Deploy EKS Cluster
EKS provisioning includes many components packaged together into a single import per stack. Leveraging Atmos
inheritance, we have defined a baseline set of required components for all EKS deployments and a unique set of
additional components for a particular stack's EKS deployment. Find these catalog set definitions under
`catalog/stacks/eks/clusters`.
To provision a cluster, these components need to be deployed in order. The included Atmos Workflows will carry out this
deployment in the proper order, but any of these steps can be run outside of a workflow if desired.
See the eks workflow (`stacks/workflows/eks.yaml`) for each individual deployment step.
## Deploy IAM Service Linked Roles
In order for Karpenter to reserve Spot Instances, the cluster needs to have a Service-Linked Role. Deploy these to all
cluster accounts with `iam-service-linked-roles`.
## Deploy Initial Platform Dev Cluster
First deploy the cluster and AWS EFS. Since Karpenter will be used in the following steps, the initial cluster is
deployed without Nodes.
Change `use1` to your cluster's environment!
## Deploy Platform Dev Cluster Resources
Once the cluster is up and running, continue with the EKS `plat` resources deployment. These need to be deployed in the
given order by the included Atmos Workflow. For additional details on each component, see the included `README.md` for
the individual component.
Run the Atmos Workflow to deploy all required `plat` components.
Validate the cluster deployment with `eks/echo-server` and the targeted service domain. The following URL should return
a success message for `dev`:
https://echo.use1.dev.plat.acme-svc.com/
## Deploy Staging
Once the `dev` cluster is deployed and validated, continue with `staging` and then `prod`.
Repeat the same deployment steps in `staging`
Validate `staging`: https://echo.use1.staging.plat.acme-svc.com/
## Deploy Production
Then deploy `prod`
Validate `prod`: https://echo.use1.prod.plat.acme-svc.com/
# Related Topics
- [eks/cluster](/components/library/aws/eks/cluster/)
- [Karpenter Documentation](https://karpenter.sh/)
---
## Decide on Default Storage Class for EKS Clusters
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
When provisioning EKS (Kubernetes) clusters, there is no one-size-fits-all
recommendation for the default storage class. The right choice depends on your
workload’s specific requirements, such as performance, scalability, and
cost-efficiency. While only one storage class can be set as the default,
storage classes are not mutually exclusive, and the best solution may often
involve using a combination of classes to meet various needs.
A `StorageClass` in Kubernetes defines the type of storage (e.g., EBS, EFS, etc.) and its parameters (e.g., performance, replication) for dynamically provisioning Persistent Volumes. The default `StorageClass` is automatically used when a `PersistentVolumeClaim` (PVC) is created without specifying a specific storage class, but its configuration varies depending on the cluster setup and cloud provider. Storage classes available on AWS differ from other clouds.
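To illustrate the mechanics, the cluster default is marked with a well-known annotation on the `StorageClass`, and a PVC that omits `storageClassName` binds to it. A minimal sketch using the EBS CSI driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    # Marks this class as the cluster-wide default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  # No storageClassName, so the default class above is used
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```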
## Default Storage Class Options
We need to decide between **Amazon EFS (Elastic File System)** and **Amazon EBS (Elastic Block Store)** as the default storage class for our EKS clusters.
- **Availability Zone Lock-in:** EBS volumes are restricted to a single Availability Zone, which can impact high availability and disaster recovery strategies. This is a key drawback of using EBS. If you need a Pod to recover across multiple AZs, EFS is a more suitable option, though it comes at a higher cost.
- **Performance:** EFS generally offers lower performance compared to EBS. This can be mitigated by paying for additional bandwidth, but throttling has routinely caused outages even with low-performance applications. Additionally, poor lock performance makes EFS completely unsuitable for high-performance applications like RDBMS.
- **Cost:** EFS is significantly more expensive than EBS, at least 3x the price per GB and potentially more depending on performance demands, although there may be some savings from not having to reserve size for future growth.
- **Concurrent Access:** EBS volumes can only be attached to one instance at a time within the same Availability Zone, making them unsuitable for scenarios that require concurrent access from multiple instances. In contrast, EFS allows multiple instances or Pods to access the same file system concurrently, which is useful for distributed applications or workloads requiring shared storage across multiple nodes.
## Amazon EFS
**Amazon EFS** provides a scalable, fully managed, elastic file system with NFS compatibility, designed for use with AWS Cloud services and on-premises resources.
### Pros:
- **Unlimited Disk Space:** Automatically scales storage capacity as needed without manual intervention.
- **Shared Access:** Allows multiple pods to access the same file system concurrently, facilitating shared storage scenarios.
- **Managed Service:** Fully managed by AWS, reducing operational overhead for maintenance and scaling.
- **Availability Zone Failover**: For workloads that require failover across multiple Availability Zones, EFS is a more suitable option. It provides multi-AZ durability, ensuring that Pods can recover and access persistent storage seamlessly across different AZs.
### Cons:
- **Lower Performance:** Generally offers lower performance compared to EBS, with throughput as low as 100 MB/s, which may not meet the demands of even modestly demanding applications.
- **Higher Cost:** Significantly more expensive than EBS, at least 3x the price per GB and potentially more depending on performance demands, although there may be some savings from not having to reserve size for future growth.
- **Higher Latency:** Higher latency compared to EBS, which may impact performance-sensitive applications.
- **No Native Backup Support:** EFS lacks a built-in, straightforward backup and recovery solution for EKS. Kubernetes-native tools don’t support EFS backups directly, requiring the use of alternatives like AWS Backup. Recovery, however, can be non-trivial and may involve complex configurations to restore data effectively.
## Amazon EBS
**Amazon EBS** provides high-performance block storage volumes for use with Amazon EC2 instances, suitable for a wide range of workloads.
### Pros:
- **Higher Performance:** Offers high IOPS and low latency, making it ideal for performance-critical applications.
- **Cost-Effective:** Potentially lower costs for specific storage types and usage scenarios.
- **Native EKS Integration:** Well-integrated with Kubernetes through the EBS CSI (Container Storage Interface) driver, facilitating seamless provisioning and management.
- **Supports Snapshot and Backup:** Supports snapshotting for data backup, recovery, and cloning.
### Cons:
- **Single-Attach Limitation:** EBS volumes can only be attached to a single node at a time, limiting shared access across multiple Pods or instances. Additional configurations or alternative storage solutions are required for scenarios needing concurrent access.
- **Availability Zones:** EBS volumes are confined to a single Availability Zone, limiting high availability and disaster recovery across zones. This limitation can be mitigated by configuring a `StatefulSet` with replicas spread across multiple AZs. However, for workloads using EBS-backed Persistent Volume Claims (PVCs), failover to a different AZ requires manual intervention, including provisioning a new volume in the target zone, as EBS volumes cannot be moved between zones.
- **Non-Elastic Storage:** EBS volumes can be manually resized, but this process is not fully automated in EKS. After resizing an EBS volume, additional manual steps are required to expand the associated Persistent Volume (PV) and Persistent Volume Claim (PVC). This introduces operational complexity, especially for workloads with dynamic storage needs, as EBS lacks automatic scaling like EFS.
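The `StatefulSet` mitigation mentioned above looks roughly like this: each replica gets its own zone-local EBS volume via `volumeClaimTemplates`, and a topology spread constraint keeps replicas in different AZs. Names and the image are placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      # Spread replicas across AZs; each Pod stays near its zone-local EBS volume
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: db
      containers:
        - name: db
          image: postgres:16 # placeholder
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```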
## Recommendation
Use **Amazon EBS** as the primary storage option when:
- High performance, low-latency storage is required for workloads confined to a single Availability Zone.
- The workload doesn’t require shared access across multiple Pods.
- You need cost-effective storage with support for snapshots and backups.
- Manual resizing of volumes is acceptable for capacity management, recognizing that failover across AZs requires manual intervention and provisioning.
Consider **Amazon EFS** when:
- Multiple Pods need concurrent read/write access to shared data across nodes.
- Workloads must persist data across multiple Availability Zones for high availability, and the application does not support native replication.
- Elastic, automatically scaling storage is necessary to avoid manual provisioning, especially for workloads with unpredictable growth.
- You are willing to trade off higher costs and lower performance for multi-AZ durability and easier management of shared storage.
:::important
EFS should never be used as backend storage for performance-sensitive applications like databases, due to its high latency and poor performance under heavy load.
:::
---
## Decide on EKS Node Pool Architecture
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Kubernetes has a concept of Node Pools, which are basically pools of computing resources. Node pools are where the
scheduler dispatches workloads based on the taints/tolerations of nodes and pods.
## Types of Node Pools
At a minimum, we recommend one node pool per Availability Zone to work optimally with the Cluster Autoscaler. One node group
per AZ is required for the Kubernetes Cluster Autoscaler to effectively scale up nodes in the appropriate AZs.
- [https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html#ca-ng-considerations](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html#ca-ng-considerations)
:::caution **Important**
If you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and
using the Kubernetes [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html), you
should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the
`--balance-similar-node-groups` feature.
:::
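In the Cluster Autoscaler's own Deployment, this feature is just a container flag. A fragment of the container spec (the image tag is a placeholder):

```yaml
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0 # placeholder tag
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      # Keep similar per-AZ node groups balanced when scaling up
      - --balance-similar-node-groups
      - --skip-nodes-with-system-pods=false
```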
After that, we should decide the kinds of node pools. Typically a node pool is associated with an instance type. Also,
since certain instance types are more expensive than others (e.g. GPU support), creating different pools of resources is
suggested. We can create pools with multiple cores, high memory, or GPU. There’s no one-size-fits-all approach with
this. The requirements are determined by your applications. If you don’t know the answer right now, we’ll start with a
standard node pool and grow from there. This is an easily reversible decision.
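To illustrate how taints and tolerations steer workloads to a dedicated pool, a specialized (e.g. GPU) node pool might be tainted so that only workloads carrying a matching toleration land there. This is a sketch; the taint key and values are illustrative:

```yaml
# Taint applied to nodes in the GPU pool (node group configuration)
taints:
  - key: nvidia.com/gpu
    value: "true"
    effect: NoSchedule
---
# Toleration in the pod spec of a GPU workload so it can schedule onto that pool
tolerations:
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
```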
## Provisioning of Node Pools
We have a few ways that we can provision node pools.
1. Use
[https://github.com/cloudposse/terraform-aws-eks-node-group](https://github.com/cloudposse/terraform-aws-eks-node-group)
(Fully-managed)
2. Use
[https://github.com/cloudposse/terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers)
(Self-managed in Auto Scale Groups)
3. Use
[https://github.com/cloudposse/terraform-aws-eks-fargate-profile](https://github.com/cloudposse/terraform-aws-eks-fargate-profile)
(Fully-managed serverless)
4. Use
[https://github.com/cloudposse/terraform-aws-eks-spotinst-ocean-nodepool](https://github.com/cloudposse/terraform-aws-eks-spotinst-ocean-nodepool)
([Spot.io](http://Spot.io) managed node pools - most cost-effective)
## Fargate Node Limitations
Currently, EKS Fargate has a number of limitations:
[https://docs.aws.amazon.com/eks/latest/userguide/fargate.html](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html)
- Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For ingress, we
recommend that you use the ALB Ingress Controller on Amazon EKS (minimum version v1.1.4).
- Pods must match a Fargate profile at the time that they are scheduled in order to run on Fargate. Pods that do not
match a Fargate profile may be stuck as Pending. If a matching Fargate profile exists, you can delete pending pods
that you have created to reschedule them onto Fargate.
- Daemonsets are not supported on Fargate. If your application requires a daemon, you should reconfigure that daemon to
run as a sidecar container in your pods.
- Privileged containers are not supported on Fargate.
- Pods running on Fargate cannot specify HostPort or HostNetwork in the pod manifest.
- GPUs are currently not available on Fargate.
- Pods running on Fargate are not assigned public IP addresses, so only private subnets (with no direct route to an
Internet Gateway) are supported when you create a Fargate profile.
- We recommend using the Vertical Pod Autoscaler with pods running on Fargate to optimize the CPU and memory used for
your applications. However, because changing the resource allocation for a pod requires the pod to be restarted, you
must set the pod update policy to either Auto or Recreate to ensure correct functionality.
- Stateful applications are not recommended for pods running on Fargate. Instead, we recommend that you use AWS
solutions such as Amazon S3 or DynamoDB for pod data storage.
- Fargate runs each pod in a VM-isolated environment without sharing resources with other pods. However, because
Kubernetes is a single-tenant orchestrator, Fargate cannot guarantee pod-level security isolation. You should run
sensitive workloads or untrusted workloads that need complete security isolation using separate Amazon EKS clusters.
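For reference, a Fargate profile pairs a namespace (and optional labels) with a selector, and pods must match one to be scheduled on Fargate. A hedged sketch in eksctl-style configuration, where all names are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
fargateProfiles:
  - name: fp-serverless
    selectors:
      # Only pods in this namespace with this label run on Fargate
      - namespace: serverless
        labels:
          runtime: fargate
```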
---
## Decide on email address for cert-manager support emails
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
Describe why we are making this decision or what problem we are solving.
## Considered Options
### Option 1 (Recommended)
:::tip Our Recommendation is to use Option 1 because....
:::
#### Pros
-
#### Cons
-
### Option 2
#### Pros
-
#### Cons
-
### Option 3
#### Pros
-
#### Cons
-
## References
- Links to any research, ADRs or related Jiras
---
## Decide on Helm Chart Repository Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Helm charts were originally intended to be compiled into packages: `.tar.gz` archives containing a manifest and the
templatized Kubernetes manifests. This chart repository will house proprietary charts your organization depends on, and
is therefore on the critical path for both availability and supply-chain attacks. Any adversary who controls your chart
repository effectively controls what you can run on your cluster. If the chart registry goes down, your company will be
unable to deploy applications via Helm. The same goes for third-party charts hosted on external registries.
## Solution
Fortunately, charts are very simple to produce and host. They are easily served from a VCS (e.g. git) or S3. There are a
few ways to host chart repositories, but our recommendation is just to use VCS. Using `terraform` or `helmfile`, we can
just point directly to GitHub and short-circuit the need for managing a chart repository altogether. Terraform natively
supports this pattern.
:::tip Our recommendation is to use GitHub directly and to avoid using chart repositories unless there are specific
requirements for it.
:::
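To see how little a chart repository actually adds, note that a chart package is nothing more than a gzipped tarball of the chart directory. A minimal sketch using only `tar` (the chart name is hypothetical, and we deliberately skip `helm package` to show there is no magic involved):

```shell
# A chart "package" is just a gzipped tarball of the chart directory.
# "mychart" is a hypothetical chart name.
set -e
workdir="$(mktemp -d)"
cd "$workdir"
mkdir -p mychart/templates
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v2
name: mychart
version: 0.1.0
EOF
tar -czf mychart-0.1.0.tgz mychart
tar -tzf mychart-0.1.0.tgz   # lists mychart/Chart.yaml among its entries
```

Serving a directory of such tarballs (plus an `index.yaml`) over HTTP or S3 is all a chart repository is, which is why pointing Terraform or `helmfile` straight at Git is usually sufficient.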
## Considerations
If we must, then here are the considerations:
1. **Use VCS** (e.g. GitHub) to pull charts directly and build on-demand.
2. **Use OCI** (Docker registry) to push/pull charts (e.g. ECR). Terraform support for OCI registries is expected to land soon.
[https://github.com/hashicorp/terraform-provider-helm/issues/633#issuecomment-1021093381](https://github.com/hashicorp/terraform-provider-helm/issues/633#issuecomment-1021093381)
[https://aws.amazon.com/blogs/containers/oci-artifact-support-in-amazon-ecr/](https://aws.amazon.com/blogs/containers/oci-artifact-support-in-amazon-ecr/)
3. **Use GitHub Actions** to build chart artifacts and push them to a static endpoint.
[https://github.com/helm/chart-releaser-action](https://github.com/helm/chart-releaser-action)
4. **Use Nexus** - Self-hosted artifactory alternative
5. **Use Artifactory** (SaaS preferred)
6. **Use S3** (public or private chart repositories)
[https://github.com/hypnoglow/helm-s3](https://github.com/hypnoglow/helm-s3)
7. **ChartMuseum** → S3 [https://github.com/helm/chartmuseum](https://github.com/helm/chartmuseum)
Using Nexus or Artifactory makes the most sense if we plan to use them elsewhere to cache artifacts.
## References
- [https://helm.sh/docs/topics/chart_repository](https://helm.sh/docs/topics/chart_repository)
---
## Decide on Host OS Flavor for EKS
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
We need to pick the AMI which will be used by the EKS cluster nodes. There are a few options and the right one depends
on your needs and organization’s requirements for compliance and packaging. It also relates to
[Decide on Technical Benchmark Framework](/layers/security-and-compliance/design-decisions/decide-on-technical-benchmark-framework)
and
[Decide on Strategy for Hardened Base AMIs](/layers/security-and-compliance/design-decisions/decide-on-strategy-for-hardened-base-amis).
## Solution
EKS managed and unmanaged node groups both support custom AMIs. By default, we use Amazon Linux.
This might be insufficient if your organization requires a vetted OS or specific tooling for audits.
Since this is a reversible decision, we can start with Amazon Linux and change later if needed.
### Option 1: Bottlerocket
Amazon’s Bottlerocket OS is a container-native OS built to be immutable and only supports non-root containers. It
supports a Kubernetes Operator for automatically updating the cluster, in a way reminiscent of the `update_engine` of
the now-defunct CoreOS.
[https://aws.amazon.com/blogs/containers/amazon-eks-adds-native-support-for-bottlerocket-in-managed-node-groups/](https://aws.amazon.com/blogs/containers/amazon-eks-adds-native-support-for-bottlerocket-in-managed-node-groups/)
[https://aws.amazon.com/bottlerocket/](https://aws.amazon.com/bottlerocket/)
[https://github.com/bottlerocket-os/bottlerocket-update-operator](https://github.com/bottlerocket-os/bottlerocket-update-operator)
### Option 2: Amazon Linux
Our standard recommendation is to use Amazon Linux’s EKS optimized image which is the most battle-tested in the AWS
landscape.
### Option 3: DIY
If we want to build custom AMIs, then we recommend using Packer with GitHub Action workflows to automatically build and
push AMIs.
**REVERSIBLE**
---
## Decide on Kubernetes Ingress Controller(s)
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Considerations
Kubernetes supports any number of ingress controllers deployed multiple times. The choice of Ingress controller will
determine which AWS features we can natively support (e.g. WAF requires an ALB).
Our recommendation is to use the `aws-loadbalancer-controller` (aka `aws-alb-ingress-controller` v2) with ACM
certificates provisioned by terraform.
:::caution TLS terminates at the ALB. Traffic is then _optionally_ re-encrypted to the downstream services if they
support it, such as with self-signed certificates and a TLS sidecar like Envoy or Nginx. Without this, traffic is in
clear text between the ALB and the downstream service or pod.
:::
Historically, we’ve recommended `ingress-nginx` (formerly `nginx-ingress`), but we now prefer the AWS Load Balancer
Controller due to its native support by AWS.
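For reference, the AWS Load Balancer Controller registers its own `IngressClass`; a minimal manifest looks roughly like this (marking it as the default class is optional):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  annotations:
    # Optional: make this the default class for Ingresses without one
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: ingress.k8s.aws/alb
```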
---
## Decide on Secrets Management for EKS
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
We need to decide on a secrets management strategy for EKS. We prefer storing secrets externally, like in AWS SSM Parameter Store, to keep clusters more disposable. If we decide on this, we'll need a way to pull these secrets into Kubernetes.
## Problem
We aim to design our Kubernetes clusters to be disposable and ephemeral, treating them like cattle rather than pets. This influences how we manage secrets. Ideally, Kubernetes should not be the sole source of truth for secrets, though we still want to leverage Kubernetes’ native `Secret` resource. If the cluster experiences a failure, storing secrets exclusively within Kubernetes risks losing access to them. Additionally, keeping secrets only in Kubernetes limits integration with other services.
To address this, several solutions allow secrets to be stored externally (as the source of truth) while still utilizing Kubernetes' `Secret` resources. These solutions, including some open-source tools and recent offerings from Amazon, enhance resilience and interoperability. Any approach must respect IAM permissions and ensure secure secret management for applications running on EKS. We have several options to consider that balance external secret storage with Kubernetes-native functionality.
### Option 1: External Secrets Operator
Use [External Secrets Operator](https://external-secrets.io/latest/) with AWS SSM Parameter Store.
External Secrets Operator is a Kubernetes operator that manages and stores sensitive information in external secret management systems like AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, HashiCorp Vault, and more. It allows you to use these external secret management systems to securely add secrets in your Kubernetes cluster.
Cloud Posse historically recommends using External Secrets Operator with AWS SSM Parameter Store and has existing Terraform modules to support this solution. See the [eks/external-secrets-operator](/components/library/aws/eks/external-secrets-operator/) component.
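As a sketch of the operator pattern, an `ExternalSecret` references a `SecretStore` and maps an SSM parameter into a native Kubernetes `Secret`. The names and parameter path below are hypothetical:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-credentials
  namespace: default
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: secret-store        # hypothetical SecretStore backed by SSM
    kind: SecretStore
  target:
    name: app-credentials     # the Kubernetes Secret to create and sync
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: /app/dev/db_password   # hypothetical SSM parameter path
```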
### Option 2: AWS Secrets Manager secrets with Kubernetes Secrets Store CSI Driver
Use [AWS Secrets and Configuration Provider (ASCP) for the Kubernetes Secrets Store CSI Driver](https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_csi_driver.html). This option allows you to use AWS Secrets Manager secrets as Kubernetes secrets that can be accessed by Pods as environment variables or files mounted in the pods. The ASCP also works with [Parameter Store parameters](https://docs.aws.amazon.com/systems-manager/latest/userguide/integrating_csi_driver.html).
However, Cloud Posse does not have existing Terraform modules for this solution. We would need to build this support.
### Option 3: SOPS Operator
Use [SOPS Operator](https://github.com/isindir/sops-secrets-operator) to manage secrets in Kubernetes. SOPS Operator is a Kubernetes operator that builds on the `sops` project by Mozilla to encrypt the sensitive portions of a `Secrets` manifest into a `SopsSecret` resource, and then decrypt and provision `Secrets` in the Kubernetes cluster.
1. **Mozilla SOPS Encryption**: Mozilla SOPS (Secrets OPerationS) is a tool that encrypts Kubernetes secret manifests, allowing them to be stored securely in Git repositories. SOPS supports encryption using a variety of key management services. Most importantly, it supports AWS KMS which enables IAM capabilities for native integration with AWS.
2. **GitOps-Compatible Secret Management**: In a GitOps setup, storing plain-text secrets in Git poses security risks. Using SOPS, we can encrypt sensitive data in Kubernetes secret manifests while keeping the rest of the manifest in clear text. This allows us to store encrypted secrets in Git, track changes with diffs, and maintain security while benefiting from GitOps practices like version control, auditability, and CI/CD pipelines.
3. **AWS KMS Integration**: SOPS uses AWS KMS to encrypt secrets with customer-managed keys (CMKs), ensuring only authorized users—based on IAM policies—can decrypt them. The encrypted secret manifests can be safely committed to Git, with AWS securely managing the keys. Since it's IAM-based, it integrates seamlessly with STS tokens, allowing secrets to be decrypted inside the cluster without hardcoded credentials.
4. **Kubernetes Operator**: The [SOPS Secrets Operator](https://github.com/isindir/sops-secrets-operator) automates the decryption and management of Kubernetes secrets. It monitors a `SopsSecret` resource containing encrypted secrets. When a change is detected, the operator decrypts the secrets using AWS KMS and generates a native Kubernetes `Secret`, making them available to applications in the cluster. AWS KMS uses envelope encryption to manage the encryption keys, ensuring that secrets remain securely encrypted at rest.
5. **Improved Disaster Recovery and Security**: By storing the source of truth for secrets outside of Kubernetes (e.g., in Git), this setup enhances disaster recovery, ensuring secrets remain accessible even if the cluster is compromised or destroyed. While secrets are duplicated across multiple locations, security is maintained by using IAM for encryption and decryption outside Kubernetes, and Kubernetes' native Role-Based Access Control (RBAC) model for managing access within the cluster. This ensures that only authorized entities, both external and internal to Kubernetes, can access the secrets.
The SOPS Operator combines the strengths of Mozilla SOPS and AWS KMS, allowing you to:
- Encrypt secrets using KMS keys.
- Store encrypted secrets in Git repositories.
- Automatically decrypt and manage secrets in Kubernetes using the SOPS Operator.
This solution is ideal for teams following GitOps principles, offering secure, external management of sensitive information while utilizing Kubernetes' secret management capabilities. However, the redeployment required for secret rotation can be heavy-handed, potentially leading to a period where services are still using outdated or invalid secrets. This could cause services to fail until the new secrets are fully rolled out.
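As a sketch, SOPS is typically configured with a `.sops.yaml` at the repository root so that only the sensitive fields of a `Secret` manifest are encrypted while the rest stays diffable; the KMS key ARN below is a placeholder:

```yaml
creation_rules:
  - path_regex: .*\.sops\.yaml$
    # Encrypt only the data/stringData fields of the manifest
    encrypted_regex: ^(data|stringData)$
    kms: arn:aws:kms:us-east-1:111111111111:key/00000000-0000-0000-0000-000000000000
```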
## Recommendation
We recommend using the External Secrets Operator with AWS SSM Parameter Store. This is a well-tested solution that we have used in the past. We have existing Terraform modules to support this solution.
However, we are in the process of evaluating the AWS Secrets Manager secrets with Kubernetes Secrets Store CSI Driver solution. This is the AWS supported option and may be a better long-term solution. We will build the required Terraform component to support this solution.
## Consequences
We will develop the `eks/secrets-store-csi-driver` component using the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/installation)
---
## Review Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions for EKS. These decisions relate to how you
will provision your Kubernetes clusters on AWS.
---
## EKS Foundational Platform
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Amazon EKS is a managed Kubernetes service that allows you to run Kubernetes in AWS cloud and on-premises data centers. AWS handles the availability and scalability of the Kubernetes control plane, which oversees tasks such as scheduling containers, managing application availability, and storing cluster data. While AWS manages control plane upgrades, users are responsible for the worker nodes and the workloads running on them, including operators, controllers, and applications. We use Karpenter for managing node pools and support spot instances to optimize costs. Be aware that you'll need to upgrade the cluster quarterly due to the significant pace of Kubernetes innovation. Although EKS has a steeper learning curve compared to ECS, it offers greater flexibility and control, making it ideal for organizations already utilizing Kubernetes.
## The Problem
Although Amazon EKS is a managed service, there is still much that is needed to set up any given cluster. First of all,
we must decide how we want to deploy Nodes for the cluster. EC2 instance backed nodes, Amazon Fargate, or Karpenter all
provide solutions for the foundation of a cluster. Next we must provide a method to authenticate with the cluster.
Amazon IAM roles can grant API access to the EKS service but do not grant control within Kubernetes. Kubernetes system
roles are native to the cluster, but we need to be able to scope finer access of users and resources than what is
provided natively. Furthermore, we need to connect each cluster to our network and DNS architecture. Clusters must be
secure and protected from the public internet, yet developers still need to be able to connect and manage cluster
resources. And finally, we need a place to store application data.
## Our Solution
Cloud Posse deploys EKS through a number of components. Each component has a specific responsibility and works in
harmony with the rest. We first deploy a nodeless EKS cluster and create an AWS Auth config mapping. This `ConfigMap`
connects our existing AWS Teams architecture to the cluster and allows us to assign Kubernetes roles to a given Team
Role. Next we use Karpenter to manage nodes on the cluster. Karpenter automatically launches compute resources to handle
cluster applications and provides fast and simple compute provisioning for Kubernetes clusters. We then deploy a set of
controllers and operators for the cluster. These controllers will automatically connect the cluster to our network and
DNS architecture via annotations and manage storage within the cluster. Simply adding the relevant annotation to a given
resource triggers the creation and management of Load Balancers in AWS, adds routing to the relevant Route 53 Hosted
Zone, provisions certificates, and more. These resources set the foundation for any application platform. From this
foundation, your application will be fully secure, scalable, and resilient.
## References
- [Decide on EKS Node Pool Architecture](/layers/eks/design-decisions/decide-on-eks-node-pool-architecture)
- [Decide on Kubernetes Ingress Controller(s)](/layers/eks/design-decisions/decide-on-kubernetes-ingress-controller-s)
- [How to Load Test in AWS](/learn/maintenance/tutorials/how-to-load-test-in-aws)
- [How to Keep Everything Up to Date](/learn/maintenance/upgrades/how-to-keep-everything-up-to-date)
- [How to Upgrade EKS Cluster Addons](/learn/maintenance/upgrades/how-to-upgrade-eks-cluster-addons)
- [How to Upgrade EKS](/learn/maintenance/upgrades/how-to-upgrade-eks)
---
## FAQ (EKS)
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
Frequently asked questions about EKS with Cloud Posse's reference architecture.
## How can I create secrets for an EKS cluster?
Consider deploying the [`external-secrets-operator` component](/components/library/aws/eks/external-secrets-operator).
This component creates an external SecretStore configured to synchronize secrets from AWS SSM Parameter store as
Kubernetes Secrets within the cluster. Per the operator pattern, the `external-secrets-operator` pods will watch for any
`ExternalSecret` resources which reference the `SecretStore` to pull secrets from.
## How does the `alb-controller-ingress-group` determine the name of the ALB?
1. First the component uses the [null-label](/modules/library/null/label) module to generate our intended name. We do this to meet the character length restrictions on ALB names. [ref](https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks/alb-controller-ingress-group/main.tf#L75-L83)
1. Then we pass that output to the Kubernetes Ingress resource with an annotation intended to define the ALB's name. [ref](https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks/alb-controller-ingress-group/main.tf#L98)
1. Now the Ingress is created and `alb-controller` creates an ALB using the annotations on that `Ingress`. This ALB name will have a dynamic character sequence at the end of it, so we cannot know what the name will be ahead of time.
1. Finally, we grab the actual name that is given to the created ALB with the `data.aws_lb` resources. [ref](https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks/alb-controller-ingress-group/main.tf#L169)
1. Then output that name for future reference. [ref](https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks/alb-controller-ingress-group/main.tf#L36)
## How can we create Self-Hosted Runners for GitHub with EKS?
Self-Hosted Runners are a great way to save cost and add customizations with GitHub Actions. Since we've already
implemented EKS for our platform, we can build off that foundation to create another cluster to manage Self-Hosted
runners in GitHub. We deploy that new EKS cluster to `core-auto` and install the
[Actions Runner Controller (ARC) chart](https://github.com/actions/actions-runner-controller). This controller will
launch and scale runners for GitHub automatically.
For self-hosted runners, we now recommend [Runs On](/layers/github-actions/runs-on/), which doesn't require Kubernetes. If you prefer running runners on EKS, see the [Actions Runner Controller (ARC) documentation](/layers/github-actions/tutorials/eks-github-actions-controller/).
## Common Connectivity Issues and Solutions
If you're having trouble connecting to your EKS cluster, follow these comprehensive steps to diagnose and resolve the issue:
**1. Test Basic Connectivity**
First, test basic connectivity to your cluster endpoint. This helps isolate whether the issue is with basic network connectivity or something more specific:
```bash
curl -fsSk --max-time 5 "https://CLUSTER_ENDPOINT/healthz"
```
If these tests fail, it indicates a fundamental connectivity issue that needs to be addressed before proceeding to more specific troubleshooting.
**2. Check Node Communication**
If worker nodes aren't joining the cluster, follow these detailed steps:
- Verify that the addon stack file (e.g., `stacks/catalog/eks/mixins/k8s-1-29.yaml`) is imported into your stack.
- Verify cluster add-ons are properly configured for your EKS version.
- Check CoreDNS is running
- Verify kube-proxy is deployed
- Ensure VPC CNI is correctly configured
- Confirm the rendered component stack configuration.
```bash
atmos describe component eks/cluster -s <stack>
```
**3. Verify Network Configuration**
- Security Groups:
- Control plane security group must allow port 443 inbound from worker nodes
- Worker node security group must allow all traffic between nodes
- Verify outbound internet access for pulling container images
- Subnet Routes:
- Verify route tables have paths to all required destinations
- Check for conflicting or overlapping CIDR ranges
- Ensure NAT Gateway is properly configured for private subnets
- Transit Gateway:
- Verify TGW attachments are active and associated
- Check TGW route tables for correct propagation
- Confirm cross-account routing if applicable
- Private Subnets Configuration:
- Set `cluster_private_subnets_only: true` in your configuration
- Ensure private subnets have proper NAT Gateway routing
**4. VPN Connectivity**
When accessing via AWS Client VPN, verify these configurations:
- VPN Routes:
- Check route table entries for EKS VPC CIDR
- Verify routes are active and not in pending state
- Confirm no conflicting routes exist
- Subnet Associations:
- Ensure VPN endpoint is associated with correct subnets
- Verify subnet route tables include VPN CIDR range
- Authorization Rules:
- Check network ACLs allow VPN CIDR range
- Verify security group rules permit VPN traffic
- Confirm IAM roles have necessary permissions
After making any changes, have clients disconnect and reconnect to receive updated routes.
**5. Advanced Diagnostics**
- AWS Reachability Analyzer:
- Enable cross-account analysis for VPC peering or TGW connections
- Test from VPN ENI to cluster endpoint
- Test return path from cluster to VPN ENI
---
## EKS as a Foundational Platform
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
We first deploy the foundation for the cluster. The `eks/cluster` component deploys the initial EKS resources to AWS,
including Auth Config mapping. We do not deploy any nodes with the cluster initially. Then once EKS is available, we
connect to the cluster and start deploying resources. First is Karpenter. We deploy the Karpenter chart on a Fargate
node and the IAM service role to allow Karpenter to purchase Spot Instances. Karpenter is the only resource that will
be deployed to Fargate. Then we deploy Karpenter Node Pools using the CRD created by the initial Karpenter component.
These provisioners will automatically launch and scale the cluster to meet our demands. Next we deploy `idp-roles` to
manage custom roles for the cluster, and deploy `metrics-server` to provide access to resource metrics.
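As a sketch, a Karpenter node pool is declared via CRD; the exact API version and available fields depend on your Karpenter release, and the names below are illustrative:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        # Allow Karpenter to choose Spot first, falling back to On-Demand
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
  limits:
    cpu: "100"   # cap total CPU this pool may provision
```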
Then we connect the cluster to our network. First we must deploy the `cert-manager` component to provision X.509
certificates on the cluster. Then we deploy the `alb-controller` component to provision and associate ALBs or NLBs based
on `Ingress` annotations that route traffic to the cluster. Then we deploy the `alb-controller-ingress-group` to
actually create that ALB. Next, we deploy `external-dns` which will look for annotations to make services discoverable,
and then create records in our Route 53 Hosted Zones mapping to the cluster. Finally we deploy `echo-server` to validate
the complete setup.
:::info
Connecting to an EKS cluster requires a VPN connection! See [ec2-client-vpn](/components/library/aws/ec2-client-vpn/)
for details.
:::
Depending on your application requirements, we can also deploy a number of operators. The most common is the
`efs-controller`, which we use to provide encrypted file storage that is not zone-locked. Other operators are
optional but often include the `external-secrets-operator` to automatically sync secrets from AWS SSM Parameter Store.
Monitoring and release engineering are handled separately from the components mentioned here, and we will expand on
those implementations in follow-up topics. For details, see the
[Monitoring](/layers/monitoring/) and
[Release Engineering](/layers/software-delivery/fundamentals/) quick start documents.
#### Foundation
- [`eks/cluster`](/components/library/aws/eks/cluster/): This component is responsible for provisioning an end-to-end
EKS Cluster, including IAM role to Kubernetes Auth Config mapping.
- [`eks/karpenter`](/components/library/aws/eks/karpenter-controller/): Installs the Karpenter chart on the EKS cluster and
prepares the environment for provisioners.
- [`eks/karpenter-provisioner`](/components/library/aws/eks/karpenter-node-pool/): Deploys Karpenter Node Pools
using CRDs made available by `eks/karpenter`
- [`iam-service-linked-roles`](/components/library/aws/iam-service-linked-roles/): Provisions
[IAM Service-Linked](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) roles. These
are required for Karpenter to purchase Spot instances.
- [`idp-roles`](/components/library/aws/eks/idp-roles): These identity provider roles specify several pre-determined
permission levels for cluster users and come with bindings that make them easy to assign to Users and Groups. Use this
component to define custom permission within EKS.
- [`metrics-server`](/components/library/aws/eks/metrics-server): A Kubernetes addon that provides resource usage
metrics used in particular by other addons such as the Horizontal Pod Autoscaler. For more, see
[metrics-server](https://github.com/kubernetes-sigs/metrics-server).
- [`reloader`](/components/library/aws/eks/reloader): Installs the
[Stakater Reloader](https://github.com/stakater/Reloader) for EKS clusters. `reloader` can watch `ConfigMaps` and
`Secrets` for changes and use these to trigger rolling upgrades on pods and their associated `DeploymentConfigs`,
`Deployments`, `DaemonSets`, `StatefulSets`, and `Rollouts`.
#### Network
- [`cert-manager`](/components/library/aws/eks/cert-manager): A Kubernetes addon that provisions X.509 certificates.
- [`alb-controller`](/components/library/aws/eks/alb-controller): A Kubernetes addon that, in the context of AWS,
provisions and manages ALBs and NLBs based on `Service` and `Ingress` annotations. This module also provisions a
default `IngressClass`.
- [`alb-controller-ingress-group`](/components/library/aws/eks/alb-controller-ingress-group): A Kubernetes Service
that creates an ALB for a specific `IngressGroup`. An `IngressGroup` is a feature of the `alb-controller` which
allows multiple Kubernetes Ingresses to share the same Application Load Balancer.
- [`external-dns`](/components/library/aws/eks/external-dns): A Kubernetes addon that configures public DNS servers with
information about exposed Kubernetes services to make them discoverable. This component is responsible for adding DNS
records to your Route 53 Hosted Zones.
- [`echo-server`](/components/library/aws/eks/echo-server): The echo server sends back to the client a JSON
  representation of all the data the server received. We use this component to validate a cluster deployment.
#### Storage
- [`efs`](/components/library/aws/efs/): Deploys an [EFS](https://aws.amazon.com/efs/) Network File System with KMS
encryption-at-rest. EFS is an excellent choice as the default shared storage for EKS clusters so that volumes are not
zone-locked.
- [`eks/efs-controller`](/components/library/aws/eks/storage-class/): Deploys
[the Amazon Elastic File System Container Storage Interface (CSI) Driver controller](https://github.com/kubernetes-sigs/aws-efs-csi-driver)
to EKS. The Amazon EFS CSI Driver implements the CSI specification for container orchestrators to manage the
lifecycle of Amazon EFS file systems.
#### Additional Operators
- [`external-secrets-operator`](/components/library/aws/eks/external-secrets-operator/): This component (ESO) is used to
create an external `SecretStore` configured to synchronize secrets from AWS SSM Parameter store as Kubernetes Secrets
within the cluster.
---
## How to Setup Vanity Domains with an ALB on EKS
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
Learn how to setup vanity domains on an existing ALB together with Service Discovery Domains.
## Prerequisites
- [Understand the differences between Vanity Domains and Service Discovery Domains](/resources/legacy/learning-resources/#the-difference-between-vanity-domains-and-service-discovery-domains)
- Assumes our standard [Network Architecture](/layers/network/)
- Requires that `dns-primary` and `dns-delegated` are already deployed.
## Context
After setting up your [Network Architecture](/layers/network) you will have 2 hosted zones in each platform account.
In `dev` for example, you will have Hosted Zones for `dev-acme.com` and `dev.platform.acme.com`.
You should also have an ACM certificate that registers `*.dev-acme.com` and `*.dev.platform.acme.com`.
You should also have deployed applications to your EKS cluster and have an ALB for service discovery, for example the
[`echo-server`](/components/library/aws/eks/echo-server) component.
Now we want to set up a vanity subdomain for `dev-acme.com` that will point to the ALB used for service discovery. This
saves us money by not requiring a new ALB for each vanity domain.
## Implementation
This is fairly simple to implement. All we need to do is set up our Kubernetes ingresses and ensure ACM doesn't have
duplicate certs for domains.
## Setup Ingresses
Ingresses for your applications can use several different `.spec.rules` to provide access to the application via many
different URLs.
#### Example
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/group.name: alb-controller-ingress-group
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/target-type: ip
    external-dns.alpha.kubernetes.io/hostname: my-app-api.dev.plat.acme-svc.com
    kubernetes.io/ingress.class: alb
    outputs.platform.cloudposse.com/webapp-url: https://my-app-api.dev.plat.acme-svc.com
  name: my-app-api
  namespace: dev
spec:
  rules:
    # new Vanity Domain
    - host: api.dev-acme.com
      http:
        paths:
          - backend:
              service:
                name: my-app-api
                port:
                  number: 8081
            path: /api/*
            pathType: ImplementationSpecific
    # Existing Service discovery domain
    - host: my-app-api.dev.plat.acme-svc.com
      http:
        paths:
          - backend:
              service:
                name: my-app-api
                port:
                  number: 8081
            path: /*
            pathType: ImplementationSpecific
```
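Because both rules share the same `alb.ingress.kubernetes.io/group.name` annotation, the AWS Load Balancer Controller merges them onto a single shared ALB. Conceptually (a hypothetical Python sketch, not controller code), the grouping behaves like this:

```python
from collections import defaultdict

def group_ingresses(ingresses):
    """Group Ingress specs by their ALB ingress-group annotation;
    every Ingress in a group shares one Application Load Balancer."""
    albs = defaultdict(list)
    for ing in ingresses:
        # Ingresses without an explicit group get an implicit group of their own.
        group = ing["annotations"].get(
            "alb.ingress.kubernetes.io/group.name", ing["name"])
        albs[group].extend(ing["hosts"])
    return dict(albs)

albs = group_ingresses([
    {"name": "my-app-api",
     "annotations": {"alb.ingress.kubernetes.io/group.name":
                     "alb-controller-ingress-group"},
     "hosts": ["api.dev-acme.com", "my-app-api.dev.plat.acme-svc.com"]},
])
# Both the vanity host and the service discovery host land on the
# single ALB owned by "alb-controller-ingress-group".
```

This is why adding the vanity host costs nothing extra: it is just another rule on the ALB the ingress group already owns.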
**_helpers.tpl**
```yaml
{{/*
Expand the name of the chart.
*/}}
{{- define "this.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "this.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "this.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
helm.sh/chart: {{ include "this.chart" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
*/}}
{{- define "this.labels" -}}
{{ include "this.selectorLabels" . }}
{{- end }}
{{/*
Selector labels
app.kubernetes.io/name: {{ include "this.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
*/}}
{{- define "this.selectorLabels" -}}
app: {{ include "this.fullname" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "this.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "this.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
```
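The naming helpers above follow standard Helm chart conventions: names are truncated to 63 characters (the DNS label limit) and a trailing `-` is trimmed. As a rough illustration, the `this.fullname` logic behaves like the following hypothetical Python (not part of the chart):

```python
def trunc_63(name: str) -> str:
    """Mirror `trunc 63 | trimSuffix "-"` from the Helm helpers."""
    return name[:63].removesuffix("-")

def fullname(release_name: str, chart_name: str,
             name_override: str = "", fullname_override: str = "") -> str:
    """Mirror the `this.fullname` template: an explicit override wins,
    then the release name alone if it already contains the chart name,
    otherwise "<release>-<chart>"."""
    if fullname_override:
        return trunc_63(fullname_override)
    name = name_override or chart_name
    if name in release_name:
        return trunc_63(release_name)
    return trunc_63(f"{release_name}-{name}")

# e.g. fullname("my-app", "my-app") -> "my-app"
#      fullname("rel", "chart")     -> "rel-chart"
```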
**Ingress.yaml**
```yaml
{{- if or (eq (printf "%v" .Values.ingress.nginx.enabled) "true") (eq (printf "%v" .Values.ingress.alb.enabled) "true") -}}
{{- $fullName := include "this.fullname" . -}}
{{- $svcName := include "this.name" . -}}
{{- $svcPort := .Values.service.port -}}
{{- $nginxTlsEnabled := and (eq (printf "%v" .Values.ingress.nginx.enabled) "true") (eq (printf "%v" .Values.tlsEnabled) "true") }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  annotations:
    {{- if eq (printf "%v" .Values.ingress.nginx.enabled) "true" }}
    kubernetes.io/ingress.class: {{ .Values.ingress.nginx.class }}
    {{- if (index .Values.ingress.nginx "tls_certificate_cluster_issuer") }}
    cert-manager.io/cluster-issuer: {{ .Values.ingress.nginx.tls_certificate_cluster_issuer }}
    {{- end }}
    {{- else if eq (printf "%v" .Values.ingress.alb.enabled) "true" }}
    kubernetes.io/ingress.class: {{ .Values.ingress.alb.class }}
    alb.ingress.kubernetes.io/group.name: {{ .Values.default_alb_ingress_group | default "alb-controller-ingress-group" }}
    alb.ingress.kubernetes.io/scheme: internet-facing
    {{- if .Values.ingress.alb.access_logs.enabled }}
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket={{ .Values.ingress.alb.access_logs.s3_bucket_name }},access_logs.s3.prefix={{ .Values.ingress.alb.access_logs.s3_bucket_prefix }}
    {{- end }}
    alb.ingress.kubernetes.io/target-type: 'ip'
    {{- if eq (printf "%v" .Values.ingress.alb.ssl_redirect.enabled) "true" }}
    alb.ingress.kubernetes.io/ssl-redirect: '{{ .Values.ingress.alb.ssl_redirect.port }}'
    {{- end }}
    {{- if eq (printf "%v" .Values.tlsEnabled) "true" }}
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS":443}]'
    {{- else }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    {{- end }}
    {{- if eq .Values.environment "preview" }}
    external-dns.alpha.kubernetes.io/hostname: {{ $svcName }}-{{ .Release.Namespace }}.{{ .Values.platform.default_ingress_domain }}
    outputs.platform.cloudposse.com/webapp-url: "https://{{ $svcName }}-{{ .Release.Namespace }}.{{ .Values.platform.default_ingress_domain }}"
    {{- else }}
    external-dns.alpha.kubernetes.io/hostname: {{ $svcName }}.{{ .Values.platform.default_ingress_domain }}
    outputs.platform.cloudposse.com/webapp-url: "https://{{ $svcName }}.{{ .Values.platform.default_ingress_domain }}"
    {{- end }}
    {{- end }}
  labels:
    {{- include "this.labels" . | nindent 4 }}
spec:
  {{- if $nginxTlsEnabled }}
  tls: # < placing a host in the TLS config will indicate a certificate should be created
    - hosts:
        - {{ .Values.ingress.hostname }}
      secretName: {{ $svcName }}-cert # < cert-manager will store the created certificate in this secret.
  {{- end }}
  rules:
    {{- if eq .Values.environment "preview" }}
    - host: "{{ $svcName }}-{{ .Release.Namespace }}.{{ .Values.platform.default_ingress_domain }}"
    {{- else }}
    {{- range .Values.ingress.vanity_domains }}
    - host: "{{ .prefix | default "api" }}.{{ $.Values.platform.default_vanity_domain }}"
      http:
        paths:
          - path: /{{ .path | default "*" }}
            pathType: ImplementationSpecific
            backend:
              service:
                name: {{ $svcName }}
                port:
                  number: {{ $svcPort }}
    {{- end }}
    - host: "{{ $svcName }}.{{ .Values.platform.default_ingress_domain }}"
    {{- end }}
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: {{ $svcName }}
                port:
                  number: {{ $svcPort }}
{{- end }}
```
**values.yaml**
```yaml
---
ingress:
  vanity_domains:
    # api.dev-acme.com, path: /*
    - prefix: "api"
    # api.dev-acme.com, path: /v2/*
    - prefix: "api"
      path: "v2/*"
  nginx:
    # ingress.nginx.enabled -- Enable NGINX ingress
    enabled: false
    # annotation values
    ## kubernetes.io/ingress.class:
    class: "nginx"
    ## cert-manager.io/cluster-issuer:
    tls_certificate_cluster_issuer: "letsencrypt-prod"
  alb:
    enabled: true
    # annotation values
    ## kubernetes.io/ingress.class:
    class: "alb"
    ## alb.ingress.kubernetes.io/load-balancer-name:
    ### load_balancer_name: "k8s-common"
    ## alb.ingress.kubernetes.io/group.name:
    ### group_name: "common"
    ssl_redirect:
      enabled: true
      ## alb.ingress.kubernetes.io/ssl-redirect:
      port: 443
    access_logs:
      enabled: false
      ## s3_bucket_name: "acme-ue2-prod-eks-cluster-alb-access-logs"
      s3_bucket_prefix: ""
```
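To make the template's defaulting concrete, here is a hypothetical Python sketch (not part of the chart) of how `ingress.vanity_domains` expands into hosts and paths, assuming a default vanity domain of `dev-acme.com`:

```python
def expand_vanity_domains(vanity_domains, default_vanity_domain):
    """Mirror the Ingress template's defaulting: `prefix` defaults to
    "api", `path` defaults to "*", and each path is rendered as /<path>."""
    rules = []
    for d in vanity_domains:
        host = f'{d.get("prefix", "api")}.{default_vanity_domain}'
        path = f'/{d.get("path", "*")}'
        rules.append((host, path))
    return rules

# The two entries from values.yaml above:
rules = expand_vanity_domains(
    [{"prefix": "api"}, {"prefix": "api", "path": "v2/*"}],
    "dev-acme.com",
)
# rules == [("api.dev-acme.com", "/*"), ("api.dev-acme.com", "/v2/*")]
```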
## Setup ACM Certs
By default, our `dns-primary` component and `dns-delegated` component will create ACM certs for each Hosted Zone in the
platform account, along with an **additional** cert for `*.dev-acme.com`. Depending on the level of subdomains you want,
you may need to disable this with the variable `request_acm_certificate: false`.
If a single subdomain is sufficient, e.g. `api.dev-acme.com`, then you can leave this enabled.
The important thing to note is that you **cannot** have duplicate certs in ACM. So if you want to add a new subdomain,
you will need to delete the existing cert for `*.dev-acme.com` and create a new one with the new subdomain. This can
lead to issues when trying to delete certificates, as they are in use by the ALB. You will need to delete the ALB first,
then delete the certificate.
See the troubleshooting section if you run into issues with recreating resources.
## How It Works
With a single valid ACM cert for your domains, the `alb-controller` is able to register your domain to the ALB. The ALB
is able to do this by recognizing the valid certificate in ACM, which is why we need to ensure we have a valid
certificate for our domains.
You can validate that your cert is picked up by the ALB by checking the ALB's HTTPS listener. You should see the
certificate listed under the `Certificates` tab.
## Troubleshooting
The problem comes when you need to remove a subdomain or ACM certificate. By running
`atmos terraform deploy dns-delegated -s plat--dev` with `request_acm_certificate: false`, you are trying to
destroy a single ACM certificate in an account. While this is a small-scope deletion, the ACM certificate is in use by
the ALB, and the ALB has many different targets. Thus Terraform will stall out.
You need to:
1. Delete the listeners and targets of the ALB that are using the certificate
2. Delete the ALB
3. Terraform will then successfully delete the ACM certificate.
You will notice:
1. The ALB will be recreated
2. Ingresses should reconcile for service discovery domains
3. ALB Targets should be recreated pointing at service discovery domains.
Once you recreate the correct ACM certificates and have valid ingresses you should be able to access your applications
via the vanity domain.
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These are some additional tutorials that will help you along with the associated EKS components.
---
## Build Your Foundation
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import StepNumber from '@site/src/components/StepNumber';
import Step from '@site/src/components/Step';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import SecondaryCTA from '@site/src/components/SecondaryCTA';
import ReactPlayer from 'react-player';
To build a reliable infrastructure, we must start with a solid foundation. Our reference architecture is designed with best practices and consistent conventions to ensure it is well-architected from the ground up. As part of this process, you’ll make critical design decisions that will shape your infrastructure. Next, you’ll initialize your infrastructure repository and then begin by provisioning your AWS Organizations, accounts, networks, DNS, and fine-grained IAM roles and policies. Once your foundation is complete, you’ll be ready to build a platform to deliver your applications.
## Set up your project
1. Create a GitHub repository to host your infrastructure toolchain and configurations.
2. Configure repository settings, enable branch protection, and add collaborators.
3. Then import the Cloud Posse reference architecture and prepare the Geodesic toolbox image to get ready to provision your infrastructure.
## Provision New AWS Organization and Accounts
1. Review how Cloud Posse designs and manages AWS Account architectures using Atmos and Terraform, aligning with the AWS Well-Architected Framework.
2. Begin by provisioning the Terraform state backend, which is essential before provisioning and managing infrastructure with Terraform.
3. Then proceed to organize the accounts into Organizational Units (OUs), apply Service Control Policies (SCPs), and configure account-level settings.
## Rollout Identity & Authentication
Learn how Cloud Posse sets up fine-grained access control for an entire organization using Permission Sets, IAM roles, and AWS IAM Identity Center (SSO). It addresses the challenges of managing access across multiple AWS accounts with a solution that ensures precise control, easy role switching, and compatibility with different identity providers. This approach provides seamless authentication via Atmos Auth for CLI access, programmatic access for GitHub Actions via OIDC, and a user-friendly login experience with AWS Identity Center.
## Deploy VPCs & DNS
Finally, understand Cloud Posse’s approach to designing robust and scalable Network and DNS architectures on AWS, with a focus on symmetry, account-level isolation, security, and reusability. We cover essential topics such as account isolation, connecting multiple accounts together using Transit Gateways, deploying AWS Client VPN for remote network access by developers, and differentiating between DNS service discovery and branded vanity domains used by customers. The solution includes reusable network building blocks, ensuring consistent deployment of VPCs and subnets, accommodating multi-region global networks, and addressing special network design considerations depending on whether you'll use ECS or EKS.
When you're done with your foundation, our attention will shift to how you [set up your platform](/layers/platform) to deliver your apps.
---
## Decide on Self-Hosted Runner Architecture
import Intro from "@site/src/components/Intro";
import Note from "@site/src/components/Note";
Decide on how to operate self-hosted runners that are used to run GitHub
Actions workflows. These runners can be set up in various ways and allow us to
avoid platform fees while running CI jobs in private infrastructure, enabling
access to VPC resources. This approach is ideal for private repositories,
providing control over instance size, architecture, and control costs by
leveraging spot instances. The right choice depends on your platform, whether
you’re using predominantly EKS, ECS, or Lambda.
## Problem
When using GitHub Actions, you can opt for both GitHub Cloud-hosted and self-hosted runners, and they can complement each other. In some cases, self-hosted runners are essential—particularly for accessing resources within a VPC, such as databases, Kubernetes API endpoints, or Kafka servers, which is common in GitOps workflows.
However, while self-hosted runners are ideal for private infrastructure, they pose risks in public or open-source repositories due to potential exposure of sensitive resources. If your organization maintains open-source projects, this should be a critical consideration, and we recommend using cloud-hosted runners for those tasks.
The hosting approach for self-hosted runners should align with your infrastructure. If you use Kubernetes, it's generally best to run your runners on Kubernetes. Conversely, if your infrastructure relies on ECS or Lambdas, you may want to avoid unnecessary Kubernetes dependencies and opt for alternative hosting methods.
In Kubernetes-based setups, configuring node pools with Karpenter is key to maintaining stability and ensuring effective auto-scaling with a mix of spot and on-demand instances. However, tuning this setup can be challenging, especially with recent changes to ARC, where the [newer version does not support multiple labels for runner groups](https://github.com/actions/actions-runner-controller/issues/2445), leading to community disagreement over trade-offs. We provide multiple deployment options for self-hosted runners, including EKS, Philips Labs' solution, and Auto Scaling Groups (ASG), tailored to your specific runner management needs.
## Considered Options
### Option 1: EC2 Instances in an Auto Scaling Group (`github-runners`)
The first option is to deploy EC2 instances in an Auto Scaling Group. This is the simplest option. We can use the
`github-runners` component to deploy the runners. However, this option is not as scalable as the other options.
### Option 2: Actions Runner Controller on EKS (`eks/actions-runner-controller`)
The second option is to deploy the Actions Runner Controller on EKS. Since many implementations already have EKS, this
option is a good choice to reuse existing infrastructure.
We can use the `eks/actions-runner-controller` component to deploy the runners, which is built with the
[Actions Runner Controller helm chart](https://github.com/actions/actions-runner-controller).
### Option 3: GitHub Actions Runner on EKS (`eks/github-actions-runner`)
Alternatively, we can deploy the GitHub Actions Runner on EKS. This option is similar to the previous one, but it uses
the GitHub Actions Runner instead of the Actions Runner Controller.
This component deploys self-hosted GitHub Actions Runners and a
[Controller](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller#introduction)
on an EKS cluster, using
"[runner scale sets](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#runner-scale-set)".
This solution is supported by GitHub and supersedes the
[actions-runner-controller](https://github.com/actions/actions-runner-controller/blob/master/docs/about-arc.md)
developed by Summerwind and deployed by Cloud Posse's
[actions-runner-controller](https://docs.cloudposse.com/components/library/aws/eks/actions-runner-controller/)
component.
However, there are some limitations to the official Runner Sets implementation:
- #### Limited set of packages
The runner image used by Runner Sets contains [no more packages than are necessary](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#about-the-runner-container-image) to run the runner. This is in contrast to the Summerwind implementation, which contains some commonly needed packages like `build-essential`, `curl`, `wget`, `git`, and `jq`, and the GitHub hosted images which contain a robust set of tools. (This is a limitation of the official Runner Sets implementation, not this component per se.) You will need to install any tools you need in your workflows, either as part of your workflow (recommended), by maintaining a [custom runner image](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#creating-your-own-runner-image), or by running such steps in a [separate container](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) that has the tools pre-installed. Many tools have publicly available actions to install them, such as `actions/setup-node` to install NodeJS or `dcarbone/install-jq-action` to install `jq`. You can also install packages using `awalsh128/cache-apt-pkgs-action`, which has the advantage of being able to skip the installation if the package is already installed, so you can more efficiently run the same workflow on GitHub hosted as well as self-hosted runners.
There are (as of this writing) open feature requests to add some commonly
needed packages to the official Runner Sets runner image. You can upvote
these requests
[here](https://github.com/actions/actions-runner-controller/discussions/3168)
and [here](https://github.com/orgs/community/discussions/80868) to help get
them implemented.
- #### Docker in Docker (dind) mode only
In the current version of this component, only "dind" (Docker in Docker) mode has been tested. Support for "kubernetes" mode is provided, but has not been validated.
- #### Limited configuration options
Many elements in the Controller chart are not directly configurable by named inputs. To configure them, you can use the `controller.chart_values` input or create a `resources/values-controller.yaml` file in the component to supply values.
Almost all the features of the Runner Scale Set chart are configurable by named inputs. The exceptions are:
- There is no specific input for specifying an outbound HTTP proxy.
- There is no specific input for supplying a [custom certificate authority (CA) certificate](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#custom-tls-certificates) to use when connecting to GitHub Enterprise Server.
You can specify these values by creating a `resources/values-runner.yaml` file in the component and setting values as shown by the default Helm [values.yaml](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml), and they will be applied to all runners.
- #### Component limitations
Furthermore, the Cloud Posse component has some additional limitations. In particular:
- The controller and all runners and listeners share the Image Pull Secrets. You cannot use different ones for different
runners.
- All the runners use the same GitHub secret (app or PAT). Using a GitHub app is preferred anyway, and the single GitHub
app serves the entire organization.
- Only one controller is supported per cluster, though it can have multiple replicas.
These limitations could be addressed if there is demand. Contact [Cloud Posse Professional Services](https://cloudposse.com/professional-services/) if you would be interested in sponsoring the development of any of these features.
### Option 4: Philips Labs Runners (`philips-labs-github-runners`)
If we are not deploying EKS, it's not worth the additional effort to set up Self-Hosted runners on EKS. Instead, we deploy Self-Hosted runners on EC2 instances. These are managed by an API Gateway and Lambda function that will automatically scale the number of runners based on the number of pending jobs in the queue. The queue is written to by the API Gateway from GitHub Events.
For more on this option, see the [Philips Labs GitHub Runner](https://philips-labs.github.io/terraform-aws-github-runner/) documentation.
### Option 5: RunsOn (Recommended)
[RunsOn](https://runs-on.com/) is a managed self-hosted runner solution that provides the benefits of self-hosted runners without the operational overhead. This is our latest preferred approach for most deployments.
**Key Benefits:**
- **Zero Infrastructure Management**: No EC2 instances, Lambda functions, or Kubernetes clusters to maintain
- **Simple Setup**: Deploy a single Terraform component, install a GitHub App, and start using immediately
- **Cost Effective**: Pay only for compute time with automatic spot instance pricing—no idle infrastructure costs
- **No Kubernetes Required**: Works without EKS, making it ideal for organizations that don't need Kubernetes
- **VPC Access**: Runners operate within your AWS account and can access private VPC resources
- **Organization-Wide Configuration**: Define runner configurations once in a central `.github` repository
For setup instructions, see [Setup RunsOn](/layers/github-actions/runs-on/).
### Option 6: Other Managed Runners
There are a number of third-party services that offer managed runners. These still have the advantage over GitHub Cloud hosted runners as they can be deployed within your private VPCs.
## Recommendation
**Cloud Posse recommends [RunsOn](/layers/github-actions/runs-on/)** for most deployments. It provides the simplest setup with the lowest operational overhead while still offering VPC access and cost savings through spot instances.
For organizations with existing investments in Kubernetes-based runners or specific requirements not met by RunsOn, the legacy options (Actions Runner Controller, Philips Labs) remain available in our [Additional Tutorials](/layers/github-actions/tutorials/).
---
## Decide on Self-Hosted Runner Placement
import Intro from "@site/src/components/Intro";
Self-hosted runners are custom runners that we use to run GitHub Actions
workflows. We can use these runners to access resources in our private
networks and reduce costs by using our own infrastructure. We need to decide
where to place these runners in your AWS organization.
## Problem
We need to decide where to place self-hosted runners in your AWS organization.
We support multiple options for deploying self-hosted runners. We can deploy runners with EKS, Philips Labs, or with an ASG. For this decision, we will focus on the placement of the runners themselves.
## Considered Options
### Option 1: Deploy the runners in an `auto` account
The first option is to deploy the controller in the `auto` (Automation) account. This account would be dedicated to automation tasks and would have access to all other accounts. We can use this account to deploy the controller and manage the runners in a centralized location.
However, compliance is complicated because the `auto` cluster would have access to all environments.
### Option 2: Deploy the runners in each account
The second option is to deploy the controller in each account. This option sounds great from a compliance standpoint. Jobs running in each account are scoped to that account, each account has its own controller, and we can manage the runners independently.
This might seem like a simplification from a compliance standpoint, but it creates complexity from an implementation standpoint. We would need to carefully consider the following:
1. Scaling runners can inadvertently impact IP space available to production workloads
2. Many accounts do not have a VPC or EKS Cluster (for EKS/ARC solutions). So, we would need to decide how to manage those accounts.
3. We would need to manage the complexity of dynamically selecting the right runner pool when a workflow starts. While this might seem straightforward, it can get tricky in cases like promoting an ECR image from staging to production, where it’s not always clear-cut which runners should be used.
## Recommendation
_Option 1: Deploy the runners in an `auto` account_
We will deploy the runners in an `auto` account. This account will be connected to the private network and will have access to all other accounts where necessary. This will simplify the management of the runners and ensure that they are available when needed.
## Consequences
We will create an `auto` account and deploy the runners there.
---
## Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions of the GitHub Action Layer. These decisions
relate to how you will manage self-hosted runners for your GitHub Action
workflows.
---
## Setup GitHub Actions
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
GitHub Actions (GHA) are one of the cornerstones of your platform, automating everything from Terraform with Atmos to application build, test and deployment, fully integrated into AWS without any hardcoded, static credentials.
GitHub Actions offer a convenient way to achieve CI/CD automation directly on GitHub, without additional third-party services (e.g. CircleCI or Jenkins). GitHub doesn't charge extra for self-hosting runners, unlike many other platforms, making them an ideal choice for automation. Using self-hosted runners allows them to reside within your private networks, enabling you to manage resources like databases and Kubernetes clusters in private VPCs without exposing them publicly.
## Recommended: RunsOn
We recommend **[RunsOn](/layers/github-actions/runs-on/)** for self-hosted GitHub runners. RunsOn provides:
- **Zero Infrastructure Management**: No EC2 instances, Lambda functions, or Kubernetes clusters to maintain. No patching, scaling, or monitoring required.
- **Simple Setup**: Deploy a single Terraform component, install a GitHub App, and start using immediately.
- **Cost Effective**: Pay only for what you use with automatic spot instance pricing. No idle infrastructure costs.
- **Works Everywhere**: No Kubernetes required. Works with any GitHub repository and supports organization-wide configuration.
## GitHub OIDC
GitHub OIDC allows your GitHub Actions workflows to assume AWS IAM roles without static credentials. The GitHub OIDC Provider is deployed as part of the [Identity layer](/layers/identity/deploy/).
For a detailed explanation of how GitHub OIDC works with AWS, see [GitHub OIDC with AWS](/layers/github-actions/github-oidc-with-aws/).
## Additional Resources
- [Design Decisions](/layers/github-actions/design-decisions/) - Architecture decisions for self-hosted runners
- [Additional Tutorials](/layers/github-actions/tutorials/) - Previous runner solutions (Philips Labs, Actions Runner Controller)
---
## How GitHub OIDC Works with AWS
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
This guide explains how GitHub OpenID Connect (OIDC) integrates with AWS to enable secure authentication for GitHub Actions without permanent credentials. Understanding OIDC is helpful for troubleshooting and extending your GitHub Actions workflows.
:::info Deployment Note
The GitHub OIDC Provider is deployed as part of the [Identity layer](/layers/identity/deploy/). This page is a reference explaining how OIDC works—you don't need to deploy anything separately.
:::
GitHub OIDC (OpenID Connect) for AWS refers to the integration between GitHub as an OpenID Connect identity provider and AWS services. By configuring GitHub as an OIDC provider in AWS Identity and Access Management (IAM), organizations establish a federated trust relationship between GitHub and AWS. This allows GitHub Actions workflows to exchange short-lived OIDC tokens for temporary AWS credentials, streamlining access management and eliminating the need to store long-lived AWS access keys in GitHub. The trust relationship can be scoped with conditions, such as restricting which repositories and branches may assume a role, providing a centralized way to manage access permissions. For the most accurate and up-to-date information, check the official documentation of GitHub and AWS.
## OpenID Connect
OIDC is short for OpenID Connect and, like SAML, is a way to federate identities. Federating Identities is a fancy way of saying we trust a 3rd party (the OIDC provider) to handle two tasks:
1. Authentication: verify who the user is
2. Authorization: verify what the user has access to (claims)
You can think of this process as similar to arriving at an airport. You present your passport to airport personnel (authentication) so they can identify you along with your boarding pass (authorization claim) indicating you are authorized to pass through security and board a specific flight.
```mermaid
---
title: OIDC Authentication Process
---
sequenceDiagram
participant client as OIDC Client (Relying Party)
participant browser as End-User Browser
participant provider as OIDC Provider
client ->>+ provider: Makes an Authentication Request (AuthN)
browser ->>+ provider: Authenticate and Authorize (e.g. username/password)
provider ->>+ client: AuthN Response with Token (JWT)
client ->>+ client: Validate Token Signature
```
This is similar to how OpenID Connect works with Datadog. We share this here only in case it helps, as the process is conceptually the same for GitHub and GitHub Actions.
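The token returned by the provider is a JWT: three base64url-encoded segments (header, claims, signature). As an illustration only (not part of the reference architecture), the claims segment can be inspected with nothing but the standard library; note that real validation also requires verifying the signature against the provider's published keys, which is the relying party's job:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode a JWT's claims segment WITHOUT verifying the signature.

    Signature verification (done by the relying party) is what makes
    the claims trustworthy; this helper is for inspection only.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def b64url(data: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(data).encode()).rstrip(b"=").decode()

# A fabricated token with claims shaped like those GitHub's OIDC provider issues
token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"sub": "repo:acme/infrastructure:ref:refs/heads/main",
            "aud": "sts.amazonaws.com"}),
    "fake-signature",
])

claims = decode_jwt_claims(token)
print(claims["sub"])  # repo:acme/infrastructure:ref:refs/heads/main
```

The `sub` claim is what AWS later matches against in the role's trust policy, which is why it encodes the repository and ref.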

## GitHub OIDC and AWS
:::tip
The primary reason we want to use GitHub OIDC with AWS is so GitHub Actions can assume various AWS Roles without the need to store permanent credentials (e.g. _without_ hardcoding `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) in GitHub.
:::
```mermaid
---
title: GitHub OIDC and AWS
---
sequenceDiagram
participant aws as AWS API (Relying Party)
participant gh as GitHub Action (Client)
participant oidc as GitHub OIDC Provider
gh ->>+ oidc: Workflow Makes an AuthN Request
oidc ->>+ gh: GitHub Returns a Token
gh ->>+ aws: API Call
aws ->>+ aws: Validate
Note right of aws: AWS validates the token was signed by GitHub before issuing the temporary credentials
aws ->>+ gh: Temporary Credentials
Note right of aws: GitHub Action Workflow calls STS AssumeRoleWithWebIdentity, passing the token obtained from the GitHub OIDC provider, and receives temporary AWS Credentials in return
```
## Creating IAM Roles for GitHub Actions
Once the GitHub OIDC Provider is deployed (via the [Identity layer](/layers/identity/deploy/)), you can create IAM roles that GitHub Actions workflows can assume. There are two approaches:
### Option 1: Configure GitHub OIDC Mixin Role and Policy
Use the mixin to grant GitHub the ability to assume a role for a specific component.
- Add the [GitHub OIDC Mixin](https://github.com/cloudposse/terraform-aws-components/tree/main/mixins/github-actions-iam-role) to any component that needs to generate an IAM Role for GitHub Actions
- Implement a custom IAM Policy with least privilege for the role. See [example policies here](https://github.com/cloudposse/terraform-aws-components/tree/main/mixins/github-actions-iam-policy)
### Option 2: Deploy GitHub OIDC Role Component
Deploy the [GitHub OIDC Role component](/components/library/aws/github-oidc-role/) to create a generalized role for GitHub to access several resources in AWS.
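Whichever option you choose, the resulting IAM role trusts the GitHub OIDC provider through a trust policy that restricts `sts:AssumeRoleWithWebIdentity` to tokens with matching claims. A representative sketch is below; the account ID and the `repo:acme/...` pattern are placeholders, and the exact conditions are managed for you by the components above.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111111111111:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:acme/infrastructure:*"
        }
      }
    }
  ]
}
```

Tightening the `sub` condition (e.g. to a specific branch or environment) is the primary lever for limiting which workflows can assume the role.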
### Configure GitHub Action Workflows
First, give the GitHub Action Workflow the proper permissions:
```yaml
permissions:
  id-token: write # This is required for requesting the JWT
  contents: read # This is required for actions/checkout
```
Then, use the official [aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) action to automatically obtain a token from the GitHub OIDC provider, exchange that token for AWS temporary credentials, and set the proper env vars in your GitHub Action Workflow:
```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-region: us-east-2
    role-to-assume: arn:aws:iam::111111111111:role/my-github-actions-role
    role-session-name: my-github-actions-role-session
```
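Putting the two snippets together, a minimal complete workflow looks like the following; the role ARN and region are placeholders, as above.

```yaml
name: aws-oidc-example
on: push

permissions:
  id-token: write # This is required for requesting the JWT
  contents: read # This is required for actions/checkout

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-2
          role-to-assume: arn:aws:iam::111111111111:role/my-github-actions-role
          role-session-name: my-github-actions-role-session
      # Subsequent steps can call AWS using the temporary credentials
      - run: aws sts get-caller-identity
```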
## FAQ
### Should I use the Mixin or the component to deploy a GitHub OIDC role?
Use the mixin when deploying a role tightly coupled with a specific component. For example, use the mixin with `ecr` to grant GitHub access to push and pull ECR images.
However, sometimes we need a role with access to many components or resources. In this case, we use the `github-oidc-role` component to define a generalized role for GitHub to assume. For example, we use the component for the `gitops` role.
---
## Setup RunsOn
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Note from '@site/src/components/Note';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import File from '@site/src/components/File';
import TaskList from '@site/src/components/TaskList';
RunsOn is our recommended solution for self-hosted GitHub runners. It provides zero-infrastructure runner management with simple setup and cost-effective pay-per-use pricing.
## Why RunsOn?
We evaluated multiple self-hosted runner solutions and recommend RunsOn for most deployments:
- **Zero Infrastructure Management**: No EC2 instances, Lambda functions, or Kubernetes clusters to maintain. RunsOn handles all infrastructure automatically.
- **Simple Setup**: Deploy a single Terraform component, install a GitHub App, and you're ready to go. No complex configurations or ongoing maintenance.
- **Cost Effective**: Pay only for the compute time you use. Automatic spot instance pricing keeps costs low with no idle infrastructure.
- **No Kubernetes Required**: Unlike Actions Runner Controller (ARC), RunsOn works without EKS. Perfect for organizations that don't need Kubernetes.
- **Organization-Wide Configuration**: Define runner configurations once in a central `.github` repository and inherit them across all repos.
- Acquire a RunsOn license
- Vendor the required components
- Deploy the RunsOn component
- Install the GitHub App
- Configure your workflows to use the runners
## Network Requirements
RunsOn runners need network access to deploy resources in your private network, such as EKS clusters, Aurora databases, or any other VPC-based resources.
With the reference architecture, we deploy RunsOn into the existing VPC in the `core-auto` account. This VPC is already configured with Transit Gateway and has access to the private network across all accounts.
If you opt to deploy RunsOn into a separate VPC, you will need to connect that VPC to Transit Gateway in order to access the private network.
## Acquire a RunsOn License
RunsOn requires a license key to operate. We recommend the **Commercial License** for production deployments. A free 15-day trial is available if you need to evaluate RunsOn first.
**License Options:**
| License | Cost | Use Case |
|---------|------|----------|
| Demo | Free (15 days) | Evaluation and testing |
| Commercial (recommended) | $300/year | Production deployments |
| Sponsorship | $1,500/year | Source code access + dedicated support |
Non-profit organizations qualify for free non-commercial licenses.
Once you have your license key, add it to your Atmos stack catalog at `stacks/catalog/runs-on/defaults.yaml` in the `LicenseKey` parameter.
Share your RunsOn license key via 1Password and we'll add it to your stack catalog for you.
For more details, see the [RunsOn Pricing](https://runs-on.com/pricing/) page.
## Vendor RunsOn Component
Vendor the required components using the included Atmos workflow:
## Deploy RunsOn Component
Deploy the RunsOn component using the included Atmos workflow:
## Install GitHub App
After deployment, follow these steps to install the GitHub App:
1. Check the Terraform outputs for `RunsOnEntryPoint`
1. Use the provided URL to install the GitHub App
1. Follow the prompts to complete the installation in your GitHub Organization
1. Ensure you have the necessary permissions in GitHub to install the app
### Configure Workflows
These workflows should already be configured as part of the reference architecture package. You can confirm the following configuration is in place.
Update your GitHub Actions workflow files to use the self-hosted runners:
```yaml
jobs:
  build:
    runs-on:
      - "runs-on=${{ github.run_id }}"
      - "runner=terraform" # Note `terraform` is a runner group name defined by a RunsOn configuration
      ## If no configuration is present, use
      # - "runner=2cpu-linux-x64"
      ## Optional Tags
      # - "tag=${{ inputs.component }}-${{ inputs.stack }}"
    steps:
      - uses: actions/checkout@v3
      # Add your build steps here
```
For more information on available runner types and configurations, check the [RunsOn: Runner Types documentation](https://runs-on.com/runners/linux/).
### (Optional) Setup a RunsOn Repo or Organization Configuration
In your Repository you can add a file to configure RunsOn. This can also extend the configuration for the Organization.
The snippet below is an extremely simplified example. If you want to see what Cloud Posse uses as a starting point, check out our configuration [here](https://github.com/cloudposse/.github/blob/main/.github/runs-on.yml).
Reference architecture users should see an included default configuration in their `.github` folder in the infrastructure repository.
Here's a sample configuration. We recommend storing this in a centralized `.github` repository so you can define a shared `runs-on` configuration that you can use across all repositories, without duplicating it in each one. This is especially useful when managing many repositories.
```yaml
runners:
  terraform:
    image: ubuntu22-full-x64
    disk: default
    spot: price-capacity-optimized
    retry: when-interrupted
    private: false
    ssh: false
    cpu: [2, 32]
    ram: [8, 64]
    tags:
      - "gha-runner:runs-on/terraform"
```
To use your organization's shared configuration in an individual repository, you need to define a local configuration that uses the `_extends` keyword to inherit from the centralized setup — it won't be applied automatically.
```yaml
# See https://runs-on.com/configuration/repo-config/
_extends: .github
runners:
  terraform:
    cpu: [4, 32] # example override
```
## Troubleshooting
### GitHub Action Runner Not Found
First, determine whether the workflow or the runner is the issue; sometimes the workflow doesn't kick off because it is on a feature branch rather than the default branch.
If the workflow kicks off but is waiting on a runner, check out [RunsOn Troubleshooting](https://runs-on.com/guides/troubleshoot/), as they have great docs on figuring out why a runner is not available.
---
## Actions Runner Controller (EKS)
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
:::warning Legacy Approach
This page documents a legacy approach to self-hosted GitHub runners. **For new deployments, we recommend using [RunsOn](/layers/github-actions/runs-on/)**, which provides:
- Zero infrastructure management
- Simple GitHub App installation
- Cost-effective pay-per-use pricing
- No Kubernetes required
This content is preserved for organizations with existing ARC deployments on EKS.
:::
The GitHub Action Runner Controller (ARC) is a Kubernetes operator that automates the management of self-hosted GitHub Actions runners in a Kubernetes cluster, and it works very well together with Karpenter on EKS.
By default, GitHub Actions run in the cloud on GitHub-hosted machines, but we can opt to use "Self-Hosted" GitHub Action Runners instead. Historically, we've deployed an Auto Scaling Group that gives each run a dedicated and customized instance. Now that we've deployed EKS, we can save money by utilizing the `actions-runner-controller` to run GitHub Actions in containers inside of EKS. These runners are fully customizable, scale automatically, and are cheaper than both GitHub-hosted runners and ASG instances.
## Quick Start
| Steps | Example |
| :---------------------------------------------------- | :------------------------------------------------------------------------------------ |
| 1. Generate GitHub Private Key | `ssm_github_secret_path: "/github_runners/controller_github_app_secret"` |
| 2. Generate GitHub Webhook Secret Token | `ssm_github_webhook_secret_token_path: "/github_runners/github_webhook_secret_token"` |
| 3. Connect to the VPN | |
| 4. Deploy cluster and resources into the `auto` stack | See deployment steps below |
| 5. Set up Webhook Driven Scaling | Click Ops |
## Requirements
In order to deploy Self-Hosted GitHub Runners on EKS, follow the steps outlined in the [EKS setup doc](/layers/eks). Those steps will complete the EKS requirements.
- We'll begin by generating the required secrets, which is a manual process.
- AWS SSM will be used to store and retrieve secrets.
- Then we need to decide on the SSM path for the GitHub secret (Application private key) and GitHub webhook secret.
### GitHub Application Private Key
Since the secret is automatically scoped by AWS to the account and region where it is stored, a simple path such as the one configured below is sufficient.
`stacks/catalog/eks/actions-runner-controller.yaml`:
```yaml
ssm_github_secret_path: "/github_runners/controller_github_app_secret"
```
The preferred way to authenticate is by _creating_ and _installing_ a GitHub App. This is the recommended approach as it allows for much more restricted access than using a personal access token, at least until
[fine-grained personal access token permissions](https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/)
are generally available. Follow the instructions
[here](https://github.com/actions/actions-runner-controller/blob/master/docs/authenticating-to-the-github-api.md) to
create and install the GitHub App.
At the creation stage, you will be asked to generate a private key. This is the private key that will be used to
authenticate the Action Runner Controller. Download the file and store the contents in SSM using the following command,
adjusting the profile and file name. The profile should be the `admin` role in the account to which you are deploying
the runner controller. The file name should be the name of the private key file you downloaded.
```bash
AWS_PROFILE=acme-core-use1-auto-admin chamber write github_runners controller_github_app_secret -- "$(cat APP_NAME.DATE.private-key.pem)"
```
You can verify the file was correctly written to SSM by matching the private key fingerprint reported by GitHub with:
```bash
AWS_PROFILE=acme-core-use1-auto-admin chamber read -q github_runners controller_github_app_secret | openssl rsa -in - -pubout -outform DER | openssl sha256 -binary | openssl base64
```
At this stage, record the Application ID and the private key fingerprint in your secrets manager (e.g. 1Password). You will need the Application ID to configure the runner controller and the fingerprint to verify the private key.
Proceed to install the GitHub App in the organization or repository you want to use the runner controller for, and
record the Installation ID (the final numeric part of the URL, as explained in the instructions linked above) in your
secrets manager. You will need the Installation ID to configure the runner controller.
In your stack configuration, set the following variables, making sure to quote the values so they are treated as
strings, not numbers.
```yaml
github_app_id: "12345"
github_app_installation_id: "12345"
```
### GitHub Webhook Secret Token
If using the Webhook Driven autoscaling (recommended), generate a random string to use as the Secret when creating the
webhook in GitHub.
Generate the string using 1Password (no special characters, length 45) or by running
```bash
dd if=/dev/random bs=1 count=33 2>/dev/null | base64
```
Store this key in AWS SSM under the same path specified by `ssm_github_webhook_secret_token_path`
`stacks/catalog/eks/actions-runner-controller.yaml`:
```yaml
ssm_github_webhook_secret_token_path: "/github_runners/github_webhook_secret_token"
```
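For context, this secret is what lets the webhook server reject forged deliveries: GitHub sends an HMAC-SHA256 of each payload in the `X-Hub-Signature-256` header, and the server recomputes it with the shared secret. A minimal sketch of that check (the secret value here is a stand-in, not a real one):

```python
import hashlib
import hmac

def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the payload and compare it to the header value."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, signature_header)

secret = "example-secret"  # stand-in; the real value is the random string stored in SSM
payload = b'{"action": "queued"}'

# What GitHub would send in the X-Hub-Signature-256 header for this payload
good = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()

print(verify_github_signature(secret, payload, good))          # True
print(verify_github_signature(secret, payload, "sha256=bad"))  # False
```

The webhook server deployed by the component performs this validation for you; this sketch only illustrates why the secret must match on both sides.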
## Deploy
Automation has a unique set of components distinct from the `plat` clusters and therefore has its own Atmos workflow. Notably, `auto` includes the `eks/actions-runner-controller` component, which is used to create the `self-hosted` runners for the GitHub repository or organization.
The deployment steps below cover the complete process for setting up ARC on EKS.
### `iam-service-linked-roles` Component
At this point we assume that the `iam-service-linked-roles` component is already deployed for `core-auto`. If not,
deploy this component now with the following command:
```bash
atmos terraform apply iam-service-linked-roles -s core-gbl-auto
```
### Deploy Automation Cluster and Resources
Deploy the cluster with the same commands as `plat` cluster deployments. See the [EKS layer documentation](/layers/eks/) for detailed deployment instructions.
Validate the `core-auto` deployment using Echo Server. For example: https://echo.use1.auto.core.acme-svc.com/
### Deploy the Actions Runner Controller
Finally, deploy the `actions-runner-controller` component with the following command:
```bash
atmos terraform deploy eks/actions-runner-controller -s core-use1-auto
```
### Using Webhook Driven Autoscaling (Click Ops)
To use the Webhook Driven autoscaling, you must also install the GitHub organization-level webhook after deploying the
component (specifically, the webhook server). The URL for the webhook is determined by the `webhook.hostname_template`
and where it is deployed. Recommended URL is
`https://gha-webhook.[environment].[stage].[tenant].[service-discovery-domain]`, which for this organization would be
`https://gha-webhook.use1.auto.core.acme-svc.com`
As a GitHub organization admin, go to
`https://github.com/organizations/acme/settings/hooks`, and then:
- Click "Add webhook" and create a new webhook with the following settings:
- Payload URL: copy from Terraform output `webhook_payload_url`
- Content type: `application/json`
- Secret: whatever you configured in the secret above
- Which events would you like to trigger this webhook:
- Select "Let me select individual events"
- Uncheck everything ("Pushes" is likely the only thing already selected)
- Check "Workflow jobs"
- Ensure that "Active" is checked (should be checked by default)
- Click "Add webhook" at the bottom of the settings page
After the webhook is created, select "edit" for the webhook and go to the "Recent Deliveries" tab and verify that there
is a delivery (of a "ping" event) with a green check mark. If not, verify all the settings and consult the logs of the
`actions-runner-controller-github-webhook-server` pod.
## Related Topics
- [EKS Documentation](/layers/eks/)
- [Decision on Self Hosted GitHub Runner Strategy](/layers/software-delivery/design-decisions/decide-on-self-hosted-github-runner-strategy#self-hosted-runners-on-kubernetes)
- [Karpenter Documentation](https://karpenter.sh/)
---
## Philips Labs GitHub Action Runners
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import Steps from "@site/src/components/Steps";
import Step from "@site/src/components/Step";
import StepNumber from "@site/src/components/StepNumber";
import TaskList from "@site/src/components/TaskList";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
:::warning Legacy Approach
This page documents a legacy approach to self-hosted GitHub runners. **For new deployments, we recommend using [RunsOn](/layers/github-actions/runs-on/)**, which provides:
- Zero infrastructure management
- Simple GitHub App installation
- Cost-effective pay-per-use pricing
This content is preserved for organizations with existing Philips Labs runner deployments.
:::
The Philips Labs approach deploys self-hosted runners on EC2 instances managed by an API Gateway and Lambda function that automatically scales based on pending jobs in the queue.
## Quick Start
| Steps | Actions |
| :------------------------------------------------- | :-------------------------------------------------------------------------------------- |
| 1. Create GitHub App | ClickOps |
| 2. Upload GitHub App ID and Private Key to AWS SSM | Set SSM Param `"/pl-github-runners/id"` and `"/pl-github-runners/key"` (base64 encoded) |
| 3. Deploy GitHub OIDC Provider | Deploy GitHub OIDC to every needed account |
| 4. Deploy GitHub Runners | `atmos terraform deploy philips-labs-github-runners -s core-use1-auto` |
| 5. Update Webhook (if changed or redeployed) | ClickOps |
## Deploy
The setup for the Philips Labs GitHub Action Runners requires first creating the GitHub App, then deploying the
`philips-labs-github-runners` component, and then finalizing the GitHub App webhook. Cloud Posse typically does not have
access to the customer's GitHub Organization settings, so the customer will need to create the initial GitHub App, then
hand the setup back to Cloud Posse. Then Cloud Posse can deploy the component and generate the webhook. Finally, the
customer will then need to add the webhook to the GitHub App and ensure the App is installed to all relevant GitHub
repositories.
Follow the guide with the upstream module,
[philips-labs/terraform-aws-github-runner](https://github.com/philips-labs/terraform-aws-github-runner#setup-github-app-part-1),
or follow the steps below.
### Vendor Components
Vendor in the necessary components:
```bash
atmos vendor pull --component philips-labs-github-runners
```
### Create the GitHub App
:::info Customer Requirement
This step requires access to the GitHub Organization. Customers will need to create this GitHub App in Jumpstart
engagements.
:::
1. Create a new GitHub App
1. Choose a name
1. Choose a website (GitHub requires one, but the module does not use it).
1. Disable the webhook for now (we will configure this later or create an alternative webhook).
1. Add the following permissions for your chosen runner scope:
#### Repository Runners
Repository permissions:
- Actions: Read-only (check for queued jobs)
- Checks: Read-only (receive events for new builds)
- Metadata: Read-only (default/required)
- Administration: Read & write (to register runner)
#### Organization Runners
Repository permissions:
- Actions: Read-only (check for queued jobs)
- Checks: Read-only (receive events for new builds)
- Metadata: Read-only (default/required)
Organization permissions:
- Self-hosted runners: Read & write (to register runner)
1. Generate a Private Key
1. If you are working with Cloud Posse, upload this Private Key and GitHub App ID to 1Password and inform Cloud Posse. Otherwise, continue to the next step.
### Upload AWS SSM Parameters
:::tip
This step does _not_ require access to the GitHub Organization. Cloud Posse will run this deployment for Jumpstart
engagements.
:::
Now that the GitHub App has been created, upload the Private Key and GitHub App ID to AWS SSM Parameter Store in `core-use1-auto` (or your chosen region).
1. Upload the PEM file key to the specified ssm path, `/pl-github-runners/key`, in `core-use1-auto` as a base64 encoded string.
2. Upload the GitHub App ID to the specified ssm path, `/pl-github-runners/id`, in `core-use1-auto`.
Alternatively, use the AWS CLI or console to set these SSM parameters directly.
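A common stumbling block is the "base64 encoded" requirement: the parameter value must be the base64 encoding of the PEM file's raw bytes, not the PEM text itself. A quick illustration, using a truncated stand-in for the real key file:

```python
import base64

# Truncated stand-in for the downloaded GitHub App private key file's contents
pem = b"-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKC...\n-----END RSA PRIVATE KEY-----\n"

encoded = base64.b64encode(pem).decode("ascii")  # this string is the SSM parameter value

# Decoding the stored value must round-trip back to the original file bytes
assert base64.b64decode(encoded) == pem
print(encoded.startswith("LS0tLS1C"))  # True: base64 of the leading "-----B"
```

If the module fails to authenticate after deployment, checking that the stored parameter decodes back to a valid PEM is a quick sanity test.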
### Deploy GitHub OIDC Providers
The GitHub OIDC provider should already be deployed as part of the [Identity layer](/layers/identity/deploy/). If not, deploy it to all accounts where GitHub Actions need to assume roles.
### Deploy the Philips Labs GitHub Runners
Now that the GitHub App has been created and the SSM parameters have been uploaded, deploy the `philips-labs-github-runners` component:
```bash
atmos terraform deploy philips-labs-github-runners -s core-use1-auto
```
### Add the Webhook to the GitHub App
:::info Customer Requirement
This step requires access to the GitHub Organization. Customers will need to finalize the GitHub App in Jumpstart
engagements.
:::
Now that the component is deployed and the webhook has been created, add that webhook to the GitHub App. Both the
webhook URL and secret should now be stored in 1Password. If not, you can retrieve these values from the output of the
`philips-labs-github-runners` component in `core-use1-auto` as described in the previous step.
1. Open the GitHub App created in
[Create the GitHub App above](#-create-the-github-app)
1. Enable the webhook.
1. Provide the webhook URL, which is part of the Terraform output.
1. Provide the webhook secret, also available from the Terraform output.
1. In the _"Permissions & Events"_ section and then _"Subscribe to Events"_ subsection, check _"Workflow Job"_.
1. Ensure the webhook for the GitHub app is enabled and pointing to the output of the module. The endpoint can be found by running `atmos terraform output philips-labs-github-runners -s core-use1-auto 'webhook'`
## Usage
Once you've deployed Self-Hosted runners, select the appropriate runner set with the `runs-on` configuration in any GitHub Actions workflow. For example, we can use the default runner set as such:
```yaml
runs-on: ["self-hosted", "default"]
```
However, it's very likely you will have resource-intensive jobs that the default runner size may not satisfy. We recommend deploying additional runner sets for each tier of workflow resource requirements. For example, in our internal GitHub Organization, we have `default`, `medium`, and `large` runners.
### Using the `terraform` Label
By default, we configure the Atmos Terraform GitHub Actions to use the `terraform` labeled Self-Hosted runners.
```yaml
runs-on: ["self-hosted", "terraform"]
```
However, by default we only have the single runner set. We recommend deploying a second runner set with a larger
resource allocation for these specific jobs.
Remove the `terraform` label from the default runner set and add the `terraform` label to your new, larger runner set.
Since the workflows are all labeled with `terraform` already, they will automatically select the new runner set on their
next run.
## FAQ
### I cannot assume the role from GitHub Actions after deploying
The following error is very common if the GitHub workflow is missing the proper permissions.
```bash
Error: User: arn:aws:sts::***:assumed-role/acme-core-use1-auto-actions-runner@actions-runner-system/token-file-web-identity is not authorized to perform: sts:TagSession on resource: arn:aws:iam::999999999999:role/acme-plat-use1-dev-gha
```
In order to use a web identity, GitHub Action pipelines must have the following permission. See
[GitHub Action documentation for more](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services#adding-permissions-settings).
```yaml
permissions:
id-token: write # This is required for requesting the JWT
contents: read # This is required for actions/checkout
```
---
## Additional Tutorials
import Intro from '@site/src/components/Intro';
import DocCardList from '@theme/DocCardList';
These pages document legacy approaches to self-hosted GitHub runners that we previously recommended. For new deployments, we recommend using [RunsOn](/layers/github-actions/runs-on/).
:::info Current Recommendation
**[RunsOn](/layers/github-actions/runs-on/)** is now our recommended approach for self-hosted GitHub runners. It provides:
- Zero infrastructure management
- Simple GitHub App installation
- Cost-effective pay-per-use pricing
- No Kubernetes required
The approaches documented here are preserved for organizations with existing deployments.
:::
---
## Example Workflows
import Intro from '@site/src/components/Intro';
import CodeBlock from '@theme/CodeBlock';
import CollapsibleText from '@site/src/components/CollapsibleText';
import PartialAtmosTerraformPlan from '@site/examples/legacy/snippets/.github/workflows/atmos-terraform-plan.yaml';
import PartialAtmosTerraformApply from '@site/examples/legacy/snippets/.github/workflows/atmos-terraform-apply.yaml';
import PartialAtmosTerraformDispatch from '@site/examples/legacy/snippets/.github/workflows/atmos-terraform-dispatch.yaml';
import PartialAtmosTerraformDriftDetection from '@site/examples/legacy/snippets/.github/workflows/atmos-terraform-drift-detection.yaml';
import PartialAtmosTerraformDriftRemediation from '@site/examples/legacy/snippets/.github/workflows/atmos-terraform-drift-remediation.yaml';
import PartialAtmosTerraformPlanMatrix from '@site/examples/legacy/snippets/.github/workflows/atmos-terraform-plan-matrix.yaml';
import PartialAtmosTerraformApplyMatrix from '@site/examples/legacy/snippets/.github/workflows/atmos-terraform-apply-matrix.yaml';
:::warning Deprecated
These example workflows are for the legacy GitHub Actions GitOps approach.
**The recommended approach now uses [Atmos Pro](/layers/atmos-pro/)**, which provides these workflows out-of-the-box with no custom configuration required.
This content is preserved for users with existing GitHub Actions GitOps deployments.
:::
Using GitHub Actions with Atmos and Terraform is fantastic because it gives you full control over the workflow. While we offer some opinionated implementations below, you are free to customize them entirely to suit your needs.
The following GitHub Workflows should be used as examples. These are created in a given infrastructure repository and can be modified however best suits your needs. For example, the labels we've chosen for triggering or skipping workflows are noted here as "Conventions" but can be changed however you would prefer.
### Atmos Terraform Plan
:::info Conventions
Use the `no-plan` label on a Pull Request to skip this workflow.
:::
The Atmos Terraform Plan workflow is triggered for every affected component from the Atmos Describe Affected workflow. This workflow takes a matrix of components and stacks and creates a plan for each, using the [Atmos Terraform Plan composite action](https://github.com/cloudposse/github-action-atmos-terraform-plan). For more on the Atmos Terraform Plan composite action, see [the official atmos.tools documentation](https://atmos.tools/integrations/github-actions/atmos-terraform-plan).
If an affected component is disabled with `terraform.settings.github.actions_enabled`, the component will show up as affected but all Terraform steps will be skipped. See [Enabling or disabling components](#enabling-or-disabling-components).
{PartialAtmosTerraformPlan}
### Atmos Terraform Apply
:::info Conventions
1. Use the `auto-apply` label on a Pull Request to apply all plans on merge
1. Use the `no-apply` label on a Pull Request to skip _all workflows_ on merge
1. If a Pull Request has neither label, run drift detection for only the affected components and stacks.
:::
The Atmos Terraform Apply workflow runs on merges into main. There are two different workflows that can be triggered based on the given labels.
If you attach the Apply label (typically `auto-apply`), this workflow will trigger the [Atmos Terraform Apply composite action](https://github.com/cloudposse/github-action-atmos-terraform-apply) for every affected component in this Pull Request. For more on the Atmos Terraform Apply composite action, see [the official atmos.tools documentation](https://atmos.tools/integrations/github-actions/atmos-terraform-apply).
Alternatively, you can choose to merge the Pull Request _without_ labels. If the "apply" label and the "skip" label are not added, this workflow will trigger the [Atmos Drift Detection composite action](https://github.com/cloudposse/github-action-atmos-terraform-drift-detection) for only the affected components in this Pull Request. That action will create a GitHub Issue for every affected component that has unapplied changes.
{PartialAtmosTerraformApply}
### Atmos Terraform Drift Detection
:::info Max Opened Issues
Drift detection is configured to open a set number of Issues at a time. See `max-opened-issues` for the `cloudposse/github-action-atmos-terraform-drift-detection` composite action.
:::
The Atmos Terraform Drift Detection workflow runs on a schedule. This workflow will gather _every component in every stack_ and run the [Atmos Drift Detection composite action](https://github.com/cloudposse/github-action-atmos-terraform-drift-detection) for each.
For every stack and component included with drift detection, the workflow first triggers an Atmos Terraform Plan.
1. If there are changes, the workflow will then create or update a GitHub Issue for the given component and stack.
2. If there are no changes, the workflow will check if there's an existing Issue. If there's an existing issue, the workflow will then mark that Issue as resolved.
{PartialAtmosTerraformDriftDetection}
### Atmos Terraform Drift Remediation
:::info Conventions
Use the `apply` label to apply the plan for the given stack and component.
:::
The Atmos Terraform Drift Remediation workflow is triggered from an open Github Issue when the remediation label is added to the Issue. This workflow will run the [Atmos Terraform Drift Remediation composite action](https://github.com/cloudposse/github-action-atmos-terraform-drift-remediation) for the given component and stack in the Issue. This composite action will apply Terraform using the [Atmos Terraform Apply composite action](https://github.com/cloudposse/github-action-atmos-terraform-apply) and close out the Issue if the changes are applied successfully.
The `drift` and `remediated` labels are added to Issues by the composite action directly. The `drift` label is added to all Issues created by Atmos Terraform Drift Detection; remediation will only run on Issues that have this label. The `remediated` label is added to any Issue that has been resolved by Atmos Terraform Drift Remediation.
{PartialAtmosTerraformDriftRemediation}
### Atmos Terraform Dispatch
The Atmos Terraform Dispatch workflow is optionally included and is not required for any other workflow. This workflow can be triggered by workflow dispatch, will take a single stack and single component as arguments, and will run Atmos Terraform workflows for planning and applying for only the given target.
This workflow includes a boolean option for both "Terraform Plan" and "Terraform Apply":
1. If only "Terraform Plan" is selected, the workflow will call the Atmos Terraform Plan Worker (`./.github/workflows/atmos-terraform-plan-matrix.yaml`) workflow to create a new planfile
1. If only "Terraform Apply" is selected, the workflow will call the Atmos Terraform Apply Worker (`./.github/workflows/atmos-terraform-apply-matrix.yaml`) for the given branch. This action will take the latest planfile for the given stack, component, and branch and apply it.
1. If both are selected, the workflow will run both actions. This means it will create a new planfile and then immediately apply it.
1. If neither is selected, the workflow does nothing.
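The two boolean options above map naturally onto `workflow_dispatch` inputs; a hedged sketch (input names are illustrative):

```yaml
# Hypothetical excerpt from .github/workflows/atmos-terraform-dispatch.yaml
on:
  workflow_dispatch:
    inputs:
      component:
        description: "Atmos component to target"
        required: true
      stack:
        description: "Atmos stack to target"
        required: true
      terraform-plan:
        description: "Terraform Plan"
        type: boolean
        default: true
      terraform-apply:
        description: "Terraform Apply"
        type: boolean
        default: false
```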
{PartialAtmosTerraformDispatch}
### Atmos Terraform Plan Matrix (Reusable)
The Atmos Terraform Plan Matrix is a reusable workflow called by other workflows to create a Terraform plan.
{PartialAtmosTerraformPlanMatrix}
### Atmos Terraform Apply Matrix (Reusable)
The Atmos Terraform Apply Matrix is a reusable workflow called by other workflows to apply an existing planfile.
{PartialAtmosTerraformApplyMatrix}
---
## Frequently Asked Questions
:::warning Deprecated
This FAQ is for the legacy GitHub Actions GitOps approach.
**The recommended approach now uses [Atmos Pro](/layers/atmos-pro/)**.
This content is preserved for users with existing GitHub Actions GitOps deployments.
:::
### What are the included labels?
By default, Cloud Posse includes a few labels for common use-cases.
#### Pull Request Labels
- `auto-apply` - If added, the Atmos Terraform Apply workflow will be triggered for all affected components when the Pull Request is merged.
- `no-plan` - If added, the Atmos Terraform Plan workflow will be skipped on commits to the Pull Request, and the Atmos Apply workflow will be skipped when the Pull Request is merged.
#### Issue Labels
- `apply` - Triggers the Atmos Terraform Drift Remediation workflow for a specific component and stack
- `discarded` - Issue was closed by the drift detection workflow
- `drift` - Indicates that drift was detected by the drift detection workflow
- `drift-recovered` - Indicates that an issue is no longer experiencing drift
- `error` - Indicates an error occurred during planning for a specific component and stack
- `error-recovered` - Indicates that an error state has been resolved
- `remediated` - Issue was successfully remediated by the drift remediation action
- `removed` - Issue is closed because the component no longer exists (code was deleted)
### Enabling or disabling components
Components are included in the Atmos GitHub Action workflows only if they have actions enabled with the `terraform.settings.github.actions_enabled` option.
If they do not have this setting or the value is `false`, the component may still appear in the list of affected components, but Terraform will not be run against the given component and stack.
:::info Global Defaults
Typically Cloud Posse sets the default value to `true` for all components and disables individual components on a case-by-case basis.
For example in an `acme` organization, the default value could be set with `stacks/orgs/acme/_defaults.yaml`:
```yaml
terraform:
  # These settings are applied to ALL components by default but can be overwritten
  settings:
    github:
      actions_enabled: true
```
And the `account` component could be disabled with `stacks/catalog/account.yaml`:
```yaml
components:
  terraform:
    account:
      settings:
        github:
          actions_enabled: false
```
:::
### I cannot assume the `gitops` role from GitHub Workflows
The following error commonly occurs when setting up `gitops` roles and permission:
```
Error: Could not assume role with OIDC: Not authorized to perform sts:AssumeRoleWithWebIdentity
```
To resolve this error, thoroughly read through each of the [Authentication Prerequisites](/layers/gitops/setup#authentication-prerequisites) for GitOps setup. In particular, check the capitalization of `trusted_github_repos` within `aws-teams` and check the `permissions` for the workflow in GitHub.
### How does GitHub OIDC work with AWS?
Please see [How to use GitHub OIDC with AWS](/layers/github-actions/github-oidc-with-aws)
---
## Quick Start
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import ReactPlayer from 'react-player';
import CodeBlock from '@theme/CodeBlock';
:::warning Deprecated
This documentation describes the legacy GitHub Actions GitOps approach for Terraform automation.
**The recommended approach now uses [Atmos Pro](/layers/atmos-pro/)**, which provides:
- Integrated plan/apply workflows with no custom configuration
- Built-in drift detection and remediation
- Seamless integration with AWS SSO via `iam-role` component
This content is preserved for users with existing GitHub Actions GitOps deployments.
:::
GitOps is a cloud-native continuous deployment methodology that uses Git as the single source of truth for declarative infrastructure and applications. Changes to infrastructure or applications are made through Git commits, and the actual state is automatically adjusted to match the desired state expressed in the Git repository. This approach provides an audit trail for changes, simplifies rollback, and enhances collaboration and visibility across teams.
## The Problem
Collaboration with Terraform in team environments is more difficult than traditional [release-engineering](/layers/software-delivery) for web applications. Unlike containerized deployments, Terraform deployments are constantly modifying the state of infrastructure and behave a lot more like database migrations, except there are no transactions and therefore no practical way to do automated rollbacks. This means we have to be extra cautious.
When teams start collaborating on infrastructure, the rate of change increases, and so does the likelihood of collisions. We need approval gates to control what changes and when, plus the ability to review changes prior to deployment to make sure nothing catastrophic happens (e.g. a database destroyed). Pull Requests alone are not enough to restrict what changes, since every merged Pull Request changes the graph and therefore requires all other open Pull Requests to be re-validated. There's also a need to reconcile the desired state of infrastructure in Git with what is deployed; in busy team environments, a change can accidentally go undeployed, or ClickOps can cause drift between what's running and what's in code.
Multiple platforms have emerged that solve this problem, under the general category of "Terraform Automation and Collaboration Software" or TACOS for short. Examples include Terraform Cloud, Spacelift, Env0, Scalr, and that's just a start. TACOs can easily cost tens of thousands of dollars a year and can be cost-prohibitive for certain companies.
## Our Solution
We've implemented [GitHub Actions](https://atmos.tools/category/github-actions) designed around our architecture and toolset. These actions run Atmos commands to generate a Terraform planfile, store the planfile in an S3 bucket with metadata in DynamoDB, and generate a plan summary on all pull requests. Then, once the pull request is merged, a second workflow pulls that same planfile, applies it with Atmos commands, and generates an apply summary.
While this solution does not offer some of the more fine-grained policy controls of TACOs nor provide a centralized UI, it does provide many of the other benefits that the other solutions offer. But the overwhelming benefit is it's much cheaper and fully integrated with Cloud Posse's architecture and design.
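The plan side of this flow might be wired up roughly as follows; the component and stack values are placeholders, and the step layout is a sketch rather than the shipped workflow:

```yaml
# Hypothetical excerpt: plan affected components on pull requests
on:
  pull_request:

permissions:
  id-token: write # required to assume an AWS role via GitHub OIDC
  contents: read

jobs:
  atmos-plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cloudposse/github-action-atmos-terraform-plan@v2
        with:
          component: vpc       # placeholder component
          stack: plat-use1-dev # placeholder stack
```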
### Features
* **Implements Native GitOps** with Atmos and Terraform
* **GitHub Actions** can be integrated seamlessly anywhere you need to run Terraform
* **No hardcoded credentials.** Use GitHub OIDC to assume roles.
* **Compatible with GitHub Cloud & Self-hosted Runners** for maximum flexibility.
* **Beautiful Job Summaries** don't clutter up pull requests with noisy GitHub comments
* **100% Open Source with No Platform Fees** means you can leverage your existing GitHub runners to provision infrastructure
Expect these actions to constantly evolve as we build out these workflows.
### Implementation
Once the required S3 Bucket, DynamoDB table, and two separate roles (one to access Terraform planfiles and one to plan/apply Terraform) are deployed, simply add your chosen [workflows](#workflows). Read the [Setup Documentation](/layers/gitops/setup) for details on deploying the requirements.
## Workflows
## References
- [Setup Documentation](/layers/gitops/setup)
- [Atmos integration documentation](https://atmos.tools/category/integrations/github-actions).
- [GitHub OIDC Integration with AWS](/layers/github-actions/github-oidc-with-aws)
- [`cloudposse/github-action-atmos-terraform-plan`](https://github.com/cloudposse/github-action-atmos-terraform-plan)
- [`cloudposse/github-action-atmos-terraform-apply`](https://github.com/cloudposse/github-action-atmos-terraform-apply)
- [`cloudposse/github-action-atmos-terraform-drift-detection`](https://github.com/cloudposse/github-action-atmos-terraform-drift-detection)
- [`cloudposse/github-action-atmos-terraform-drift-remediation`](https://github.com/cloudposse/github-action-atmos-terraform-drift-remediation)
- [`gitops/s3-bucket`](/components/library/aws/s3-bucket/): Deploy an S3 Bucket using the `s3-bucket` component. This bucket holds Terraform planfiles.
- [`gitops/dynamodb`](/components/library/aws/dynamodb/): Deploy a DynamoDB table using the `dynamodb` component. This table holds metadata for Terraform plans.
- [`github-oidc-role`](/components/library/aws/github-oidc-role/): Deploys an IAM Role that GitHub is able to assume via GitHub OIDC. This role has access to the bucket and table for planfiles.
---
## Setup GitOps with GitHub Actions
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
import Admonition from '@theme/Admonition';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import CodeBlock from '@theme/CodeBlock';
:::warning Deprecated
This documentation describes the legacy GitHub Actions GitOps setup.
**The recommended approach now uses [Atmos Pro](/layers/atmos-pro/)**, which provides integrated Terraform automation with no custom workflow configuration required.
This content is preserved for users with existing GitHub Actions GitOps deployments.
:::
## Quick Start
| Steps | |
| :------------------------------ | :----------------------------------------------------- |
| 1. Verify Identity requirements | |
| 2. Expand GitHub OIDC | `atmos workflow deploy/github-oidc-provider -f quickstart/foundation/identity` |
| 3. Vendor | `atmos workflow vendor -f quickstart/foundation/atmos-pro` |
| 4. Deploy | `atmos workflow deploy -f quickstart/foundation/atmos-pro` |
## Requirements
### Self-Hosted Runners
Although not required, we recommend deploying Self-Hosted GitHub runners if you need to manage any resources inside of a VPC (e.g. RDS Users). We recommend [Runs On](/layers/github-actions/runs-on/) for self-hosted runners.
If you do not wish to use self-hosted runners, simply change the `runs-on` option for all included workflows in `.github/workflows`.
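For example, switching a workflow between GitHub-hosted and self-hosted runners only requires editing `runs-on` (the labels below are illustrative):

```yaml
jobs:
  atmos-plan:
    # GitHub-hosted runner (default)
    runs-on: ubuntu-latest
    # Self-hosted alternative: use the labels of your runner group, e.g.
    # runs-on: ["self-hosted", "linux", "amd64"]
```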
### Set Up GitHub Variables
The `gitops` stack config depends on the following GitHub variables:
`ATMOS_VERSION`
The version of Atmos to use
`ATMOS_CONFIG_PATH`
The path to the Atmos config file
Please set the following GitHub variables in the repository settings:
1. Open the repository [settings](https://github.com/acme/infra-acme/settings/variables/actions)
2. Set the `ATMOS_VERSION` variable to `1.63.0`
3. Set the `ATMOS_CONFIG_PATH` variable to `./rootfs/usr/local/etc/atmos/`
### Authentication Prerequisites
The GitHub Action workflows expect both the `gitops` and `planners` AWS Teams to be properly set up and connected to GitHub OIDC. Both of these components should already be deployed with `aws-teams`/`aws-team-roles` and `github-oidc-provider` respectively, but `github-oidc-provider` will likely need to be deployed to several additional accounts. Verify the following to complete the authentication prerequisites.
By default in the Reference Architecture, the `trusted_github_repos` input is commented out for `aws-teams`. Now is the time to uncomment those lines. Follow the tasks below, and see `stacks/catalog/aws-teams.yaml`.
- The `gitops` and `planners` Teams are defined and deployed by `aws-teams`.
- Both teams have trusted relationships with the infrastructure repo via `trusted_github_repos`.
_Capitalization matters!_ In the reference architecture, these values are initially commented out and will need to be updated with your specific repository information:
```yaml
components:
  terraform:
    aws-teams:
      vars:
        trusted_github_repos:
          gitops:
            - "acme/infra:main"
          planners:
            - "acme/infra"
```
- The `aws-team-roles` default catalog allows the `gitops` team to assume the `terraform` role, including anywhere
`aws-team-roles` is overwritten (`plat-dev` and `plat-sandbox`)
- Similarly, the `planners` team can assume the `planner` role in `aws-team-roles` to plan Terraform only.
- `tfstate-backend` allows both teams to assume the default access role from the `core-identity` account
- `github-oidc-provider` is deployed to every account that GitHub will be able to access. This should be every account
except `root`.
- The workflows have adequate permission
In order to assume GitHub OIDC roles, a workflow needs the following:
```yaml
permissions:
  id-token: write # This is required for requesting the JWT
  contents: read  # This is required for actions/checkout
```
In order to assume GitHub OIDC roles _and_ manage Github Issues, a workflow needs these permissions:
```yaml
permissions:
  id-token: write # This is required for requesting the JWT
  contents: write # This is required for actions/checkout and updating Issues
  issues: write   # This is required for creating and updating Issues
```
## How To Setup
### Vendor Components
The `gitops` stack config depends on two components that may already exist in your component library (`s3-bucket` and
`dynamodb`) and adds one new component (`gitops`) to manage the GitHub OIDC access. Vendor these components either with
the included Atmos Workflows or using [Atmos Vendoring](https://atmos.tools/core-concepts/components/vendoring).
### Deploy GitOps Prerequisites
Deploy the GitOps prerequisite components, `gitops/s3-bucket`, `gitops/dynamodb`, and `gitops` with the following workflow
### Reapply `aws-teams`
Now we need to reapply `aws-teams` to add the trusted GitHub repositories to `gitops` and `planners`.
Uncomment or add the `trusted_github_repos` input:
```yaml
# stacks/catalog/aws-teams.yaml
components:
  terraform:
    aws-teams:
      vars:
        trusted_github_repos:
          gitops:
            - "acme/infra:main"
          planners:
            - "acme/infra"
```
Run the following command to apply:
`aws-teams` is a sensitive component deployed to the `core-identity` account and therefore needs to be applied with a role or user with access to that account. For example, use the `managers` AWS Team or the SuperAdmin user.
```bash
atmos terraform apply aws-teams -s core-gbl-identity
```
And that's it! Now you can try creating a new pull request. If properly configured, you should see GitHub Actions kick off `Atmos Terraform Plan`.
1. Enable GitHub Actions support for any component by enabling `settings.github.actions_enabled: true` and let the
workflow handle the rest. Keep in mind this setting is likely enabled by default for your organization stack
configuration, `stacks/catalog/acme/_defaults.yaml`
1. The roles created by `aws-teams` or `gitops` should already be included in your workflows. Verify these roles match
the `env` settings in `.github/workflows/atmos-terraform-*`
1. You do not need to create a GitHub App or complete additional steps to trigger these workflows
---
## Atmos Auth
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
Atmos Auth provides seamless authentication to AWS using IAM Identity Center or GitHub OIDC. It automatically handles credential management and session refresh.
## Overview
In the previous steps, we deployed Identity Center with Permission Sets for human users and IAM roles for machine access (GitHub Actions). Now we map those Permission Sets and roles to groups of users using Atmos profiles. These profiles are preconfigured in the reference architecture.
For daily usage, see [Login to AWS](/layers/identity/how-to-log-into-aws/). For detailed commands and the latest documentation, see [Atmos Auth](https://atmos.tools/cli/commands/auth/usage/).
## Profiles
Profiles define role-based access patterns for different user types. Each profile configures which AWS SSO Permission Sets a user can assume across different accounts.
### Available Profiles
| Profile | Description | Root | Core Accounts | Dev/Sandbox | Staging/Prod |
|---------|-------------|------|---------------|-------------|--------------|
| `managers` | Full access to all accounts | Write | Write | Write | Write |
| `devops` | Full access to most accounts | Read | Write | Write | Write |
| `developers` | Limited access | State | State | Write | Read |
| `github-plan` | CI/CD plan operations | State | Read | Read | Read |
| `github-apply` | CI/CD apply operations | State | Write | Write | Write |
**Access Levels:**
1. **Write** — Full access including `AdministratorAccess` or `PowerUserAccess` permission sets, plus `TerraformApplyAccess` roles
1. **Read** — Read-only access via `ReadOnlyAccess` permission sets or `TerraformPlanAccess` roles
1. **State** — Access via `TerraformStateAccess` permission set, used for reading Terraform outputs from components deployed in that account
### Setting Your Profile
Set the `ATMOS_PROFILE` environment variable to the appropriate profile for your group (e.g., `managers`, `devops`, or `developers`):
```bash
export ATMOS_PROFILE=devops
```
Add this to your shell configuration (`~/.zshrc` or `~/.bashrc`) to make it persistent.
## Configuration
Atmos Auth is configured in your profile's `atmos.yaml` file, located at `profiles/<profile>/atmos.yaml`:
```yaml
# profiles/devops/atmos.yaml
auth:
  providers:
    sso:
      kind: aws/iam-identity-center
      region: us-east-1
      start_url: https://your-org.awsapps.com/start
      auto_provision_identities: true
  identities:
    # Terraform identities for each account
    plat-dev/terraform:
      kind: aws/permission-set
      via:
        provider: sso
      principal:
        name: TerraformApplyAccess
      account:
        name: plat-dev
    core-identity/terraform:
      kind: aws/permission-set
      via:
        provider: sso
      principal:
        name: TerraformApplyAccess
      account:
        name: core-identity
```
### Identity Naming Convention
Identities follow the format `<account>/<identity>`:
1. **Static Terraform identities** — Preconfigured identities like `plat-dev/terraform` that map the correct Permission Set or IAM role for Terraform operations (plan, apply, or state access) for each stack. Defined in stack defaults (e.g., `stacks/orgs/acme/plat/dev/_defaults.yaml`)
1. **Auto-provisioned Permission Sets** — When `auto_provision_identities: true` is set, Atmos automatically populates all Permission Sets available to the user (e.g., `plat-dev/ReadOnlyAccess`, `plat-prod/AdministratorAccess`) for console and CLI access
## Next Steps
With Atmos Auth configured and profiles ready, learn how to access AWS and deploy Terraform using Atmos.
Login to AWS
---
## AWS Identity Center (SSO) ClickOps
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import Steps from "@site/src/components/Steps";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import StepNumber from "@site/src/components/StepNumber";
import Step from "@site/src/components/Step";
import ActionCard from "@site/src/components/ActionCard";
import PrimaryCTA from "@site/src/components/PrimaryCTA";
import AtmosWorkflow from "@site/src/components/AtmosWorkflow";
This guide provides an overview of setting up AWS Identity Center (SSO) with
ClickOps, detailing prerequisites and supported external identity providers.
It explains how to integrate AWS SSO with providers like Azure AD, JumpCloud,
Okta, and Google Workspace, including specific steps for configuring each.
## How it Works
AWS Single Sign-On (AWS SSO) is a service that simplifies access management for AWS accounts and applications. It enables users to sign in to AWS once and access multiple AWS accounts and applications without the need to re-enter credentials. To use it with an identity provider (IdP) for AWS SSO, administrators typically need to configure the integration within the AWS Management Console. This involves setting up a new AWS SSO instance, connecting it to the IdP, and specifying the users or groups that should have access to AWS resources. AWS SSO provides logging and auditing capabilities, allowing organizations to track user access to AWS resources and monitor security-related events.
### SAML-Based Authentication
The integration between the IdP and AWS SSO relies on the Security Assertion Markup Language (SAML) for authentication and authorization. SAML enables the exchange of authentication and authorization data between your IdP and AWS, allowing users to log in once to their IdP and gain access to AWS resources without additional logins.
### User Provisioning
AWS SSO can be configured to automatically provision and de-provision user accounts based on changes in the IdP directory. This helps keep user access in sync with changes made in your IdP.
### AWS SSO Permission Sets
AWS SSO allows administrators to define fine-grained access policies, specifying which AWS accounts and services users from the IdP can access.
### Multi-Factor Authentication (MFA)
Organizations using an IdP for authentication with AWS SSO can enhance security by enforcing multi-factor authentication (MFA) for added identity verification.
Once configured, users can experience single sign-on when accessing AWS resources. They log in to their IdP account and seamlessly gain access to AWS without needing to provide credentials again.
It's important to note that the specifics of the integration process may be subject to updates or changes, so it's recommended to refer to the official AWS documentation and your IdP's documentation for the most accurate and up-to-date information.
## Prerequisites
First, enable the AWS IAM Identity Center (successor to AWS Single Sign-On) service in the `core-root` account. This is the account where the `aws-sso` component will be deployed.
1. **Navigate** to the `core-root` account in the AWS Web Console
1. **Select** your primary region
1. **Go to** AWS IAM Identity Center (successor to AWS Single Sign-On)
1. **Enable** the service
## Configure your Identity Provider
These are the instructions for the most common Identity Providers. Alternatives are available, but the steps may vary depending on the provider.
It's important to note that the specifics of the integration process may be subject to updates or changes, so it's recommended to refer to the official AWS documentation and respective IdP documentation for the most accurate and up-to-date information based on your current date.
For providers not included in the following section, please [follow the AWS documentation for setting up an IdP integration with AWS](https://docs.aws.amazon.com/singlesignon/latest/userguide/supported-idps.html). This list includes Azure AD, CyberArk, OneLogin, and Ping Identity.
Okta is a common business suite with a directory for managing users and permissions. We can use it to log in to AWS by leveraging **Applications**, which are used to sign in to services from your Okta account.
#### Setup Okta
1. Under the Admin Panel go to **Applications**
2. Click **Browse App Catalog**
3. Search for `AWS IAM Identity Center` and click **Add Integration**
4. Keep the default settings of **App Label** ("AWS IAM Identity Center") and **Application Visibility**
5. Go to **Sign On** and copy the information from the SAML Metadata section; this will be used in AWS SSO.
6. Then go to Provisioning and click **Configure API Integration**
#### Setup AWS SSO
1. Sign into AWS SSO under your management account (`core-root`)
2. Go to the AWS IAM Identity Center (successor to AWS Single Sign-On) application
3. Enable IAM Identity Center
4. On the left panel click **Settings**
5. Under Identity Source click edit and add an **External identity provider**
6. Copy the information from Okta into the fields
7. The Okta App will need to be updated with the **Service provider metadata**
JumpCloud is a cloud-based directory service that provides secure, frictionless access to AWS resources. It can be used as an identity provider for AWS (Amazon Web Services) through a feature called AWS Single Sign-On (AWS SSO).
Follow the JumpCloud official documentation for setting up JumpCloud with AWS IAM Identity Center:
[Integrate with AWS IAM Identity Center](https://jumpcloud.com/support/integrate-with-aws-iam-identity-center)
:::caution Integrating JumpCloud with AWS IAM Identity Center
The official AWS documentation for setting up JumpCloud with AWS IAM Identity Center is not accurate. Instead, please
refer to the [JumpCloud official documentation](https://jumpcloud.com/support/integrate-with-aws-iam-identity-center)
:::
Microsoft Entra ID (formerly known as Azure Active Directory) can be used as an identity provider for AWS (Amazon Web Services) through a feature called AWS Single Sign-On (AWS SSO).
AWS SSO allows organizations to centralize identity management and provide users with seamless access to AWS resources using their existing Microsoft Entra ID credentials.
#### Setup Microsoft Entra ID
#### Open Microsoft Entra ID Application
Go to [Microsoft Entra's Admin Center](https://entra.microsoft.com/) and search for `Entra ID`
Go to [Microsoft Azure](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview) and search for `Entra ID`
#### Add AWS IAM Identity Center as an **Enterprise Application**
Click `Add` then `Enterprise Application`
Then select `AWS IAM Identity Center (successor to AWS Single Sign-On)`
Click `Create`, default options are fine
#### Download the Microsoft Entra ID Metadata XML File
On the left panel click `Single sign-on`, then download the XML SAML metadata file by pressing the button on step 5 `Set up AWS IAM Identity Center (successor to AWS Single Sign-On)`
#### Download the AWS IAM Identity Center Metadata XML File
Navigate to the AWS IAM Identity Center (successor to AWS Single Sign-On) application in AWS IAM Identity Center.
Go to Setup/Change Identity Source, click `External identity provider` from the available identity sources. Click `Next`.
Download the XML SAML Metadata file by clicking `Download metadata file`.
#### Upload the Microsoft Entra ID Metadata XML File to AWS
Upload the XML SAML metadata file from Microsoft Entra ID to AWS IAM Identity Center.
#### **SAVE**
Click `Save` to save the changes.
#### Automatic Provisioning
#### Generate SCIM URL and Secret
In AWS IAM Identity Center, you can set up automatic provisioning by generating a URL and secret.
#### Navigate to the new App
Go to your newly created Single Sign-On App in Microsoft Entra ID. On the left panel, go to `Provisioning`.
#### Set the mode to **Automatic** and paste the values from AWS into the **Admin Credentials** section
#### Verify Connection
Click `Test Connection` to verify the connection.
For non-explicitly supported Identity Providers, such as GSuite, set up the app integration with a custom external
identity provider. The steps may be different for each IdP, but the goal is ultimately the same.
:::tip aws-ssosync
GSuite does not automatically sync _both_ Users and Groups with AWS Identity Center without additional configuration! If using
GSuite as an IdP, consider deploying the [ssosync](https://github.com/awslabs/ssosync) tool.
Please see our [aws-ssosync component](/components/library/aws/aws-ssosync/) for details!
:::
1. Open the Identity account in the AWS Console.
1. On the Dashboard page of the IAM Identity Center console, select **Choose your identity source**.
1. In the Settings, choose the **Identity source** tab, select the **Actions** dropdown in the top right, and then select **Change identity source**.
1. By default, IAM Identity Center uses its own directory as the IdP. To use another IdP, you have to switch to an external identity provider. Select **External identity provider** from the available identity sources.
1. Configure the custom SAML application with the **Service provider metadata** generated from your IdP. Follow the next steps from your IdP, and then complete this AWS configuration afterwards.
1. Open your chosen IdP.
1. Create a new SSO application.
1. Download the new app's IdP metadata and use this to complete step 5 above.
1. Fill in the **Service provider details** using the data from IAM Identity Center, and then choose **Continue**. The mapping for the data is as follows:
```
For ACS URL, enter the IAM Identity Center Assertion Consumer Service (ACS) URL.
For Entity ID, enter the IAM Identity Center issuer URL.
Leave the Start URL field empty.
For Name ID format, select EMAIL.
```
If required for the IdP, enable the application for all users
Finally, define specific Groups to match the given Group names by the `aws-sso` component (`stacks/catalog/aws-sso.yaml`). In the default catalog, we define four Groups: `DevOps`, `Developers`, `BillingAdmin`, and `Everyone`
If set up properly, Users and Groups added to your IdP will automatically populate and update in AWS.
Additional IdP-specific setup references can be found here:
- [How to use Google Workspace as an external identity provider for AWS IAM Identity Center](https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/)
## Required Groups
Before deploying the `aws-sso` component, you must create the following groups in your Identity Provider. These names are **case-sensitive** and must match exactly:
1. **Managers** — Full access to all accounts
1. **DevOps** — Full access to most accounts (except root)
1. **Developers** — Limited access for development work
1. **BillingAdmin** — Access to billing and cost management
Once these groups are provisioned and synced to AWS Identity Center, you can deploy the `aws-sso` component.
## Deploy Permission Sets
The `aws-sso` component deploys all Permission Sets and assigns them to the appropriate groups. This includes:
1. **`TerraformApplyAccess`** — Write access for Terraform operations
1. **`TerraformPlanAccess`** — Read-only access for Terraform plan operations
1. **`TerraformStateAccess`** — Access to Terraform state only
1. **`AdministratorAccess`** — Full AWS administrator access
1. **`PowerUserAccess`** — Power user access without IAM management
1. **`ReadOnlyAccess`** — Read-only access to AWS resources
1. **`RootAccess`** — Organizational root access for privileged operations (see [Centralized Root Access](/layers/identity/centralized-root-access/))
The `aws-sso` component is responsible for deploying all Permission Sets and mapping them to the appropriate groups per account. This mapping is configured in `stacks/catalog/aws-sso.yaml`, where you define which groups receive which Permission Sets for each account type. This is how you assign permissions to users — by adding them to the appropriate IdP group, they automatically receive the corresponding Permission Sets when they access AWS.
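To illustrate, a group-to-Permission-Set mapping in `stacks/catalog/aws-sso.yaml` might look like the following sketch. Treat this as an assumption for illustration only: the account names, group names, and exact variable schema are placeholders, so verify them against your vendored `aws-sso` component before use.

```yaml
# stacks/catalog/aws-sso.yaml (illustrative sketch; verify the schema
# against your vendored aws-sso component before applying)
components:
  terraform:
    aws-sso:
      vars:
        # Map IdP groups to Permission Sets per account (names are placeholders)
        account_assignments:
          plat-dev:
            groups:
              DevOps:
                permission_sets: ["TerraformApplyAccess", "AdministratorAccess"]
              Developers:
                permission_sets: ["TerraformApplyAccess", "ReadOnlyAccess"]
          plat-prod:
            groups:
              DevOps:
                permission_sets: ["TerraformApplyAccess"]
              Developers:
                permission_sets: ["TerraformPlanAccess", "ReadOnlyAccess"]
```

Because users inherit access through group membership, granting someone production apply access is simply a matter of adding them to the right IdP group.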
Deploy the component using the identity workflow:
:::caution Groups Must Exist First
The `aws-sso` component will fail if the required groups (`Managers`, `DevOps`, `Developers`, `BillingAdmin`) do not exist in AWS Identity Center. Ensure your IdP is configured and groups are synced before deploying.
:::
## Next Steps
Now that Identity Center and Permission Sets are provisioned, configure centralized root access management. This allows secure, auditable root operations on member accounts without maintaining root credentials.
Centralize Root Access
---
## Centralized Root Access
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
Centralized root access management allows you to securely perform privileged root actions on member accounts without maintaining root credentials. This eliminates the need to manage root passwords or MFA devices for each AWS account.
## Overview
AWS Organizations now supports centralized root access, which enables the management account (or a delegated administrator) to assume root on member accounts for specific privileged tasks. This is configured through the `RootAccess` permission set in IAM Identity Center.
With centralized root access:
1. **No root credentials needed** — Member accounts have no root passwords, access keys, or MFA devices
1. **Task-scoped access** — Root access is limited to specific AWS-managed task policies
1. **Audit trail** — All root operations are logged through CloudTrail
1. **Centralized control** — Only users with the `RootAccess` permission set can assume root
## Prerequisites
Before using centralized root access, enable the feature in your AWS Organization:
1. **Sign in** to the AWS Management Console as the root account
1. **Navigate** to IAM → Root access management
1. **Enable** both "Root credentials management" and "Privileged root actions"
1. **Deploy** the `aws-sso` component which configures the `RootAccess` permission set
:::tip
These prerequisites are typically completed during the [account setup](/layers/accounts/deploy-accounts/) or cold start process.
:::
## Available Task Policies
When assuming root, you must specify a task policy that limits what actions can be performed:
| Task Policy | Description |
|-------------|-------------|
| `IAMAuditRootUserCredentials` | Audit root user credentials across member accounts |
| `IAMCreateRootUserPassword` | Create a root user password (for recovery) |
| `IAMDeleteRootUserCredentials` | Delete root passwords, access keys, MFA devices |
| `S3UnlockBucketPolicy` | Unlock S3 bucket policies that deny all access |
| `SQSUnlockQueuePolicy` | Unlock SQS queue policies that deny all access |
## Using Centralized Root Access
Atmos supports the `aws/assume-root` identity kind, which chains from an SSO permission set to assume root in target accounts with a specific task policy.
### Configuration
Define assume-root identities in your profile's `atmos.yaml`:
```yaml
# profiles/managers/atmos.yaml
auth:
identities:
# Base identity with RootAccess permission set
organizational-root-access:
kind: aws/permission-set
via:
provider: sso
principal:
name: RootAccess
account:
name: core-root
# Chain to assume root for auditing credentials
plat-dev/audit-root:
kind: aws/assume-root
via:
identity: organizational-root-access
principal:
target_principal: "123456789012" # plat-dev account ID
task_policy_arn: arn:aws:iam::aws:policy/root-task/IAMAuditRootUserCredentials
# Chain to assume root for deleting credentials
plat-dev/delete-root-credentials:
kind: aws/assume-root
via:
identity: organizational-root-access
principal:
target_principal: "123456789012" # plat-dev account ID
task_policy_arn: arn:aws:iam::aws:policy/root-task/IAMDeleteRootUserCredentials
```
### Assume Root on a Member Account
1. **Authenticate** directly with the assume-root identity for the specific task:
```bash
atmos auth login --identity plat-dev/audit-root
```
1. **Run commands** as root on the member account:
```bash
atmos auth exec --identity plat-dev/audit-root -- aws iam list-mfa-devices
```
1. **Or start an interactive shell** with root credentials:
```bash
atmos auth shell --identity plat-dev/audit-root
aws sts get-caller-identity
```
### Example: Delete Root Credentials
To delete root credentials from a member account:
```bash
# Authenticate and run commands as root with the delete credentials task policy
atmos auth exec --identity plat-dev/delete-root-credentials -- \
aws iam delete-access-key --user-name root --access-key-id
# Or start a shell for multiple operations
atmos auth shell --identity plat-dev/delete-root-credentials
# Delete root MFA device
aws iam deactivate-mfa-device --user-name root --serial-number
aws iam delete-virtual-mfa-device --serial-number
```
## Security Considerations
1. **Restrict access** — Only the `managers` profile has the `RootAccess` permission set
1. **Task-scoped** — Each assume-root session is limited to a specific task policy
1. **Short-lived** — Root sessions have a maximum duration of 15 minutes
1. **Audited** — All `sts:AssumeRoot` calls are logged in CloudTrail
1. **No standing access** — Credentials are generated on-demand, not stored
## Additional Information
For more details, see the [AWS Centralized Root Access Documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-enable-root-access.html).
## Next Steps
Now that we have Permission Sets deployed for human access, we need to configure IAM roles for machine users.
Deploy IAM Roles
---
## Deploy IAM Roles
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Note from '@site/src/components/Note';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
Deploy IAM roles for GitHub Actions and machine users using the `iam-role` component. These roles are assumed via OIDC for CI/CD automation.
The legacy `aws-teams` and `aws-team-roles` components are deprecated. This page documents the new approach using Permission Sets for human users and IAM roles for machine users.
For the legacy approach, see [Access Control Evolution](/layers/identity/tutorials/access-control-evolution/).
## Overview
The identity layer provides two authentication paths:
1. **Human Users** — Use AWS IAM Identity Center Permission Sets (TerraformApplyAccess, TerraformPlanAccess) via SSO
1. **Machine Users** — Use IAM roles assumed via OIDC (GitHub Actions, CI/CD pipelines)
Human users authenticate through Identity Center and never need IAM roles. The `iam-role` component is specifically for GitHub Actions and other machine users that authenticate via OIDC.
## Deploy IAM Roles for GitHub Actions
### Vendor Identity Components
Pull the identity components into your local repository:
### Deploy GitHub OIDC Provider
Deploy the GitHub OIDC provider in all accounts:
This creates the OIDC identity provider in each account, allowing GitHub Actions to assume IAM roles.
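For context, a role assumable from GitHub Actions carries a trust policy along these lines. The account ID and the `repo:acme/infrastructure` subject are placeholders; the component generates the real policy for you.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111111111111:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:acme/infrastructure:*"
        }
      }
    }
  ]
}
```

The `sub` condition scopes the role to workflows from a specific repository, so tokens minted for other repositories cannot assume it.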
### Deploy IAM Roles
The reference architecture includes pre-configured `iam-role/terraform` and `iam-role/planner` components. Deploy them across all accounts:
This deploys:
1. **`iam-role/terraform`** — Role for GitHub Actions apply operations
1. **`iam-role/planner`** — Role for GitHub Actions plan operations
1. **Trust policies** — Allow assumption via GitHub OIDC
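Once deployed, a GitHub Actions workflow can assume one of these roles via OIDC. The sketch below uses the official `aws-actions/configure-aws-credentials` action; the role ARN, region, and stack name are placeholder assumptions.

```yaml
# .github/workflows/terraform-plan.yml (illustrative; role ARN is a placeholder)
name: terraform-plan
on: pull_request

permissions:
  id-token: write   # required to request the OIDC token
  contents: read

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          # Placeholder: use the actual ARN output by iam-role/planner
          role-to-assume: arn:aws:iam::111111111111:role/gha-planner
          aws-region: us-east-1
      - run: atmos terraform plan vpc -s plat-ue1-dev
```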
## Human User Access
Human users do **not** use IAM roles. Instead, they authenticate via AWS IAM Identity Center:
1. **Permission Sets** — Define access levels (TerraformApplyAccess, TerraformPlanAccess, ReadOnlyAccess)
1. **SSO Groups** — Map IdP groups to Permission Sets
1. **Atmos Auth** — CLI tool that authenticates via Identity Center SSO
See [Configure Atmos Auth](/layers/identity/atmos-auth/) for human user setup.
## Next Steps
With IAM roles deployed for machine users and Permission Sets available for human users, configure Atmos Auth profiles to map users to identities.
Configure Atmos Auth
---
## Decide on AWS CLI Login
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import Note from "@site/src/components/Note";
import TaskList from "@site/src/components/TaskList";
Decide on a CLI tool that enables AWS login and credentials via SAML IDP for
CLI and web console access.
## Problem
Users need some way to log in to AWS when using the CLI or applying Terraform changes. We have AWS Identity Center or AWS SAML set up for an AWS organization, but we need a way to log in to AWS locally.
There are a number of tools that can help with this, but we need to decide on one.
### Option 1: Use the AWS CLI
First of all, we could use the AWS CLI itself. This is the most basic way to log in to AWS, but it requires many manual steps and is not very user-friendly.
### Option 2: Use Leapp
Alternatively, we could use Leapp by Noovolari. This tool allows you to log in to AWS using SAML and then automatically generates temporary credentials for you to use in the CLI. Once set up, Leapp makes it very easy to log in to AWS and use the CLI, assume roles across your accounts with Role Chaining pre-configured, and even launch directly into the AWS web console.
> [!IMPORTANT]
> Leapp has been a popular choice for this use case, but with Noovolari announcing the shutdown of their paid service, this could raise concerns about the long-term viability of the project. While the [Leapp](https://github.com/Noovolari/leapp) project will continue to be supported, the discontinuation of the paid option might make it less appealing to future users.
Leapp requires several manual steps during the initial setup, which has been a pain point for some users. See [How to Login to AWS (with Leapp)](/layers/identity/how-to-log-into-aws/) for more on the required setup and usage.
Leapp requires setup steps outside of our Geodesic containers, which makes it less convenient for users who primarily work in the shell and increases the likelihood of configuration errors.
### Option 3: Use `aws-sso-cli` (AWS SSO Only)
The most recent option we've come across is [aws-sso-cli](https://github.com/synfinatic/aws-sso-cli). This is a CLI tool that allows you to log in to AWS via AWS IAM Identity Center (AWS SSO) and then automatically generates temporary credentials for you to use in the CLI. It is similar to Leapp, and is also open source and free to use. It also has a number of features that make it easier to use, such as the ability to log in to multiple AWS accounts and roles at the same time.
One potential benefit of `aws-sso-cli` is that it is a CLI tool, which means it could likely be integrated into our Geodesic containers. This would make it easier for users to login to AWS and use the CLI, and would reduce the risk of user configuration errors.
However, `aws-sso-cli` is designed specifically for AWS SSO, which means it may not be suitable for users who are using AWS SAML.
### Option 4: Use `saml2aws` (AWS SAML Only)
Another option is to use `saml2aws`, which is a CLI tool that allows you to log in to AWS using SAML. It is similar to Leapp and `aws-sso-cli`, but is specifically designed for AWS SAML. This means it may not be suitable for users who are using AWS SSO.
Most IdPs supported by `saml2aws`, with the exception of Okta, depend on screen scraping for SAML logins, which is far from ideal. This approach can lead to issues, especially with services like GSuite that use bot protection, which occasionally disrupts users attempting to log in. Additionally, SAML providers differ in how they handle login processes and multi-factor authentication (MFA), meaning you may need to make specific adjustments to ensure smooth integration with your identity provider.
If your organization uses Okta, then `saml2aws` is a good option.
### Option 5: Use a browser plugin
Another option is to use a browser plugin, such as [aws-extend-switch-roles](https://github.com/tilfinltd/aws-extend-switch-roles), that allows you to log in to AWS using SAML. This is a simple and user-friendly way to log in to AWS, but it requires you to use a browser and is not suitable for users who are working in the CLI.
### Option 6: Use a custom solution
Finally we could build our own custom solution for logging into AWS. This would give us complete control over the process, but would require a lot of development effort and ongoing maintenance.
## Recommendation
Cloud Posse continues to recommend Leapp for now, but we are evaluating alternatives.
---
## Decide on Identity Provider (IdP) Integration Method
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Ensure your organization can efficiently and securely manage access to AWS
resources. By choosing the appropriate IdP integration method, either AWS
Identity Center (SSO) or AWS SAML, you can align your authentication processes
with their operational structures, avoiding potential overlaps and
inefficiencies.
## Problem
After [deciding on which IdP](/layers/identity/design-decisions/decide-on-idp) will be used, companies need to decide on how they will authenticate with AWS using that IdP. Organizations require efficient and streamlined methods to authenticate and manage access to their AWS resources. Without a centralized or user-friendly system, managing access across multiple AWS organizations or accounts becomes cumbersome and prone to errors. Multiple SSO authentication options exist within AWS. Choosing between AWS Identity Center (SSO) and SAML requires organizations to determine which method aligns best with their operational structure and goals. This choice can be daunting without clear guidance.
Each authentication method comes with its own set of advantages and limitations. AWS SAML offers centralized access
across multiple organizations but may be overkill for entities with a single AWS Organization. On the other hand, AWS
Identity Center provides a user-friendly interface for single organizations but may not be as efficient for those
managing multiple AWS accounts or organizations. Implementing both methods simultaneously can lead to potential
overlaps, confusion, and inefficiencies unless managed correctly.
Organizations need clarity on which method to adopt and an understanding of the trade-offs involved to ensure efficient
and secure access to AWS resources. The best option depends on your unique organizational structures and user
preferences.
## Solution
Cloud Posse supports both AWS SAML and AWS Identity Center (AWS SSO) for authenticating with AWS. Choose one or both
options.
- **[AWS SAML 2.0 based federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html)** is
provisioned in the centralized identity account and then permits roles to assume access roles in other accounts across
the organization. It works well with multiple IdPs, enabling roles to be programmatically associated with specific
providers. Beyond the obvious benefit of a single authentication page for users, the AWS SAML approach enables
granular control over all the mechanics, giving administrators ultimate
control over how it works. Internally, Cloud Posse uses AWS SAML to authenticate with all customers to access many AWS
Organizations easily.
- **[AWS Identity Center (AWS SSO)](https://aws.amazon.com/iam/identity-center/)**, alternatively, is deployed for a
single AWS Organization. With AWS Identity Center, we have a single access page for all accounts in the Organization and
can connect directly to a given account. **AWS Identity Center is the recommended choice for customers**, given that
most customers manage a single AWS Organization, and the single login page is the most user-friendly option. It's also
ideal for business users, requiring no additional software configuration like Leapp to access resources through the AWS
web console. However, it is limited to a single IdP, so companies that depend on multiple IdP's should consider other
options.
Both options can be deployed simultaneously; you can choose to deploy either or both.
:::info Cloud Posse Access
Since AWS Identity Center does not support multiple identity providers, we always deploy AWS SAML for Cloud Posse's
access for the duration of our engagement. We do that to control access in our own team and to make it easier for the
customer to cut off all of Cloud Posse's access at any time. For more on offboarding, see
[Offboarding Cloud Posse](/jumpstart/tutorials/offboarding-cloudposse)
:::
## Consequences
### AWS Identity Center (AWS SSO)
To connect your chosen IdP to AWS SSO, we will need to configure your provider and create a metadata file. Please
follow the relevant linked guide and complete the steps for your Identity Provider. All steps in AWS will be handled by
Cloud Posse.
Please also provision a single test user in your IdP for Cloud Posse to use for testing and add those user credentials
to 1Password.
- [AWS Identity Center (SSO) ClickOps](/layers/identity/aws-sso/)
### AWS SAML
If deploying AWS SAML as an alternative to AWS SSO, we will need a separate configuration and metadata file. Again,
please refer to the relevant linked guide.
- [GSuite](https://aws.amazon.com/blogs/desktop-and-application-streaming/setting-up-g-suite-saml-2-0-federation-with-amazon-appstream-2-0/):
Follow Steps 1 through 7. This document refers to Appstream, but the process will be the same for AWS.
- [Office 365](/layers/identity/tutorials/how-to-setup-saml-login-to-aws-from-office-365)
- [JumpCloud](https://support.jumpcloud.com/support/s/article/getting-started-applications-saml-sso2)
- [Okta](https://help.okta.com/en-us/Content/Topics/DeploymentGuides/AWS/aws-configure-identity-provider.htm)
---
## Decide on Identity Provider (IdP)
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import Note from "@site/src/components/Note";
import TaskList from "@site/src/components/TaskList";
Simplify AWS authentication by leveraging existing email providers or Identity
Providers (IdPs), ensuring streamlined access management and ease of use for
your team.
## Problem
Users need a way to authenticate to AWS.
## Solution
Verified working IdPs:
- GSuite (Google Workspaces)
- Office 365 (Microsoft Entra ID)
- Okta
- JumpCloud
- Auth0
Cloud Posse recommends using your existing email provider (e.g. Google, Microsoft, etc) as the IdP, unless you
already have a specialized one, such as Okta, Auth0, or JumpCloud.
## Consequences
Follow the steps below to integrate your IdP of choice with AWS.
Cloud Posse requires this information for your team to sign in to the new AWS
Accounts.
- [ ] Please create a temporary User in your IdP for the Cloud Posse Team.
The Cloud Posse Team will use this account to verify access to several
resources. For example `cloudposse@acme.com`.
---
## Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions of the Identity Layer. These decisions relate
to how you will manage identity and access management (IAM) in your AWS
accounts together with your Identity Provider (IdP).
---
## How to Log into AWS
import Intro from '@site/src/components/Intro';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
Locally authenticating with Atmos Auth and AWS Identity Center (AWS SSO).
Leapp is no longer used for authentication. If you see references to Leapp in older documentation, they are deprecated. We now use **Atmos Auth** for AWS authentication.
## Requirements
Atmos Auth is built into the Atmos CLI, so no additional tools are required beyond what's already in the project.
## Setting Your Profile
Before authenticating, you need to set your profile to match your team role. Profiles are located in the `profiles/` directory:
1. **`developers`** — For developers team members
1. **`devops`** — For DevOps team members
1. **`managers`** — For managers team members
### Setting the Profile
Set the `ATMOS_PROFILE` environment variable to your team's profile:
```bash
export ATMOS_PROFILE=developers # or devops, managers
```
To make this persistent, add the export command to your shell configuration file:
1. **zsh** — Add to `~/.zshrc`
1. **bash** — Add to `~/.bashrc` or `~/.bash_profile`
1. **fish** — Add to `~/.config/fish/config.fish` (use `set -gx ATMOS_PROFILE developers`)
After adding to your shell config, reload it:
```bash
source ~/.zshrc # or source ~/.bashrc
```
If you only run the export command without adding it to your config file, it will only apply to your current session.
The profile determines which permissions and identities are available to you. Identities follow the format `<account>/<identity>` and include:
1. **Terraform identities** — `plat-dev/terraform`, `core-identity/terraform` - Used automatically by Atmos for Terraform operations
1. **Permission set identities** — `plat-dev/ReadOnlyAccess`, `plat-prod/AdministratorAccess` - Used for AWS CLI and console access
## Authentication
Authentication with Atmos Auth is simple and streamlined. The authentication configuration is already set up in `atmos.yaml` and your selected profile.
### Quick Login
To authenticate with AWS, run:
```bash
atmos auth login --provider sso
```
This will:
1. **Use your configured profile** — Determine the appropriate identity
1. **Open your browser** — Navigate to the AWS SSO login page
1. **Authenticate with your IdP** — Sign in with your organization credentials
1. **Store credentials securely** — Save to your system keychain
1. **Set up AWS credentials** — Configure access to the infrastructure
No need to specify an identity: your profile handles that automatically!
### Check Authentication Status
To verify you're authenticated and see your current session details:
```bash
atmos auth whoami
```
This will show you:
1. **Identity** — Which identity you're using
1. **Account and role** — The AWS account and role
1. **Expiration** — Credential expiration time
## Daily Workflow
Your typical workflow with Atmos is simple and straightforward.
### Using Atmos CLI (Recommended)
With Atmos Auth, you can run Atmos commands directly on your local machine. Once you've authenticated with `atmos auth login` and selected an identity, Atmos will automatically use your credentials and select the appropriate identity for each stack.
**Run Terraform commands:**
```bash
# Plan a terraform component
atmos terraform plan vpc -s plat-ue1-dev
# Apply a terraform component
atmos terraform apply vpc -s plat-ue1-dev
```
**Launch AWS Console in browser:**
```bash
atmos auth console --identity plat-dev/ReadOnlyAccess
```
**Run a specific AWS CLI command:**
```bash
# Execute a single AWS CLI command with read-only access
atmos auth exec --identity plat-dev/ReadOnlyAccess -- aws sts get-caller-identity
# List S3 buckets in production with admin access
atmos auth exec --identity plat-prod/AdministratorAccess -- aws s3 ls
```
**Start an interactive shell with a specific identity:**
```bash
# Open a shell session for running multiple commands
atmos auth shell --identity core-security/PowerUserAccess
# Exit the shell with Ctrl+D or type 'exit'
```
**List all available identities:**
```bash
atmos auth list
```
:::tip
Atmos will automatically select the correct identity for the stack you're working with. You can override this with the `--identity` flag if needed.
:::
### Using Geodesic (Optional)
If you prefer a containerized development environment with all tools pre-configured, you can use Geodesic:
**First time setup:**
```bash
make all
```
**Subsequent launches:**
```bash
make run
```
**When to use Geodesic:**
1. **Containerized environment** — You prefer a containerized development environment
1. **Pre-configured tools** — You want all tools pre-configured without managing versions locally
1. **Persistent shell** — You're working on multiple components and want a persistent shell session
1. **Multiple projects** — You're working across multiple projects with different tool requirements
1. **Kubernetes access** — You need to set up Kubernetes access with the `set-cluster` script
**When to use direct Atmos:**
1. **Quick commands** — You're running quick one-off commands
1. **Local integration** — You want to integrate Atmos into your local scripts or workflows
1. **Local machine** — You prefer working directly on your local machine
Both approaches work seamlessly with Atmos Auth!
## Additional Information
For more details about Atmos Auth, including advanced features like identity chaining, multiple identities, and troubleshooting, see the [Atmos Auth documentation](https://atmos.tools/cli/commands/auth/usage/#overview).
---
## Identity and Authentication
import ReactPlayer from "react-player";
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import SecondaryCTA from '@site/src/components/SecondaryCTA';
Cloud Posse's identity architecture provides fine-grained access control for AWS organizations using **AWS IAM Identity Center** (formerly AWS SSO) with **Permission Sets** for human users and **IAM roles** for machine users (CI/CD).
1. **Centralized Identity** — Users managed via your IdP (Okta, Google Workspace, Azure AD) and synced to IAM Identity Center
1. **Permission Sets** — Define what users can do in each account, assigned to SSO Groups
1. **Atmos Profiles** — Role-based access patterns for different user types (devops, developers, CI/CD)
1. **Atmos Auth** — Seamless CLI authentication with automatic credential refresh
1. **Static Configuration** — Account mappings as static YAML, no dynamic lookups or circular dependencies
## Our Requirements
Let’s start by identifying the minimum requirements for an identity and authentication system.
### Accessing AWS as a human or as a machine user.
First, we need to implement a system that is easy for both humans and machines to access securely, following the principle of least privilege.
### Centralized management of user permissions
Plus, all users and permissions must be centrally managed and integrated into an identity platform. We don’t want to copy and paste permissions across accounts or rely on any manual processes.
### Tight control over user groups
Next, we need fine-grained access control for user groups, so we can assign users to one or more groups depending on what they need access to. It needs to be easy to understand for both users and administrators.
### Apply Terraform for many accounts across an Organization
With Terraform, we need to manage resources concurrently across multiple accounts. We don’t want to put this burden on the operator to constantly switch roles, so Terraform needs to do this for us.
### Switch roles into other accounts easily both in the UI and locally
Finally, for engineers, we want to quickly jump between accounts, access the AWS web console, or run AWS CLI commands without having to think about how to do it every time.
## Problem We've Encountered
Now you may be asking some questions. There are plenty of existing solutions out there for authentication and identity. How did Cloud Posse arrive at their solution? What’s wrong with the alternatives?
### AWS Control Tower lacked APIs
First off, you might notice we don’t use AWS Control Tower. That’s because until recently, Control Tower didn’t have an API available. So we couldn’t programmatically manage it with Terraform. Of course Cloud Posse does everything with infrastructure as code, so that was a hard stop for us.
We’re planning to add support for Control Tower now that it’s available, once it matures.
### AWS IAM Identity Center is only for humans
The ideal way to access AWS for humans is with Identity Center, formerly called “AWS SSO”, and this is included in our reference architecture. But that doesn’t solve how we provide access to machines, for example when integrating with GitHub Actions or Spacelift.
With Identity Center, a user assumes a single role for one account. That’s not going to work for us with Terraform if we’re trying to apply Terraform concurrently across accounts. For example, transit gateway architecture requires provisioning resources and connecting them across accounts. Identity Center is also limited because it only works with a single IdP. For larger enterprises, multiple IdPs might be used.
In addition, for the duration of an engagement, Cloud Posse configures our own IdP to access your infrastructure so that it’s easy for you to revoke our access when we’re done.
### AWS IAM Roles with SAML is cumbersome
We needed to find a solution for machine access and to apply Terraform across accounts. To do that, we can use IAM roles with a SAML provider and assume them in Terraform or third party integrations such as GitHub Actions.
The challenge then becomes making it easy for users. AWS SAML provides low level controls and is a little more cumbersome to use, especially when you compare how easy it is for users to use the IAM Identity Center.
How can we have a consistent solution that works for both?
Ultimately, AWS does not provide a single solution that meets all our requirements. We need to combine the best of both worlds. Identity Center is great for human access, but it doesn’t work well for machines. On the other hand, AWS SAML is great for machines but is cumbersome for users to navigate the AWS web console without a third party tool.
## Our Solution
### Integrated with Single Sign On
```mermaid
flowchart LR
user["User"] --> idp["Identity Provider"]
idp --> identity_center["AWS IAM\nIdentity Center"]
identity_center --> permission_set["Permission Set"]
permission_set --> account["AWS Account"]
style user fill:#9b59b6,color:#fff
style idp fill:#3578e5,color:#fff
style identity_center fill:#e67e22,color:#fff
style permission_set fill:#28a745,color:#fff
style account fill:#2c3e50,color:#fff
```
We use IAM Identity Center to manage users and groups, connected to your Identity Provider (Okta, Google Workspace, Azure AD, etc.).
Users sign into Identity Center to access any account they're authorized for. Administrators manage access through **Permission Sets** that define what actions users can perform. All access control is defined in Terraform, providing a complete audit trail and infrastructure-as-code management.
### Permission Set Based Access
```mermaid
flowchart LR
subgraph sso_groups["SSO Groups"]
devops["DevOps"]
developers["Developers"]
end
subgraph prod_account["Prod Account"]
prod_apply["TerraformApplyAccess"]
prod_plan["TerraformPlanAccess"]
end
subgraph dev_account["Dev Account"]
dev_apply["TerraformApplyAccess"]
dev_plan["TerraformPlanAccess"]
end
devops --> prod_apply
devops --> dev_apply
developers --> prod_plan
developers --> dev_apply
style prod_apply fill:#28a745,color:#fff
style dev_apply fill:#28a745,color:#fff
style prod_plan fill:#3578e5,color:#fff
style dev_plan fill:#3578e5,color:#fff
```
Access is managed through **Permission Sets** assigned to SSO Groups:
| Permission Set | Purpose |
|----------------|---------|
| `TerraformPlanAccess` | Read-only access for running `terraform plan` |
| `TerraformApplyAccess` | Full access for running `terraform apply` |
| `TerraformStateAccess` | Read access to Terraform state for Atmos functions |
Users are assigned to SSO Groups in your Identity Provider, and those groups are mapped to Permission Sets. This provides a clean separation between identity management (in your IdP) and access control (in AWS).
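In stack configuration, the group-to-permission-set mappings shown in the diagram above can be sketched roughly like this (illustrative only; the exact variable shape depends on your `aws-sso` component version, and the group and account names are placeholders):

```yaml
# stacks/catalog/aws-sso.yaml (illustrative sketch)
components:
  terraform:
    aws-sso:
      vars:
        account_assignments:
          plat-dev:
            groups:
              DevOps:
                permission_sets:
                  - TerraformApplyAccess
              Developers:
                permission_sets:
                  - TerraformApplyAccess
          plat-prod:
            groups:
              DevOps:
                permission_sets:
                  - TerraformApplyAccess
              Developers:
                permission_sets:
                  - TerraformPlanAccess
```

Because the assignments live in version-controlled YAML, a pull request changing this file is itself the audit trail for an access change.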
### CLI Authentication with Atmos Auth
For command-line access, we use **Atmos Auth** which provides seamless authentication:
```bash
# Login to AWS
atmos auth login
# Use a specific profile for operations
ATMOS_PROFILE=devops atmos terraform plan vpc -s plat-ue1-dev
```
Atmos Auth integrates with IAM Identity Center and handles credential refresh automatically. See [Atmos Auth](/layers/identity/atmos-auth) for setup details.
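For orientation, an Identity Center provider in `atmos.yaml` might look roughly like the following. This is a sketch: aside from the `aws/sso` provider kind, the field names and values here are assumptions — consult the Atmos Auth documentation linked above for the authoritative schema.

```yaml
# atmos.yaml (illustrative sketch; field names are assumptions)
auth:
  providers:
    acme-sso:
      kind: aws/sso                              # Identity Center provider kind
      region: us-east-1                          # region where Identity Center is deployed
      start_url: https://acme.awsapps.com/start  # your AWS access portal URL (placeholder)
```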
## Next Steps
Start by configuring AWS Identity Center with your IdP and deploying Permission Sets for your team.
- Setup Identity Center
- How to Log into AWS
---
## Access Control Evolution
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
Cloud Posse's approach to AWS access control has evolved significantly over time. This document traces that evolution from our earliest architecture through to the current recommended approach using Permission Sets and IAM roles.
## Overview
Unsurprisingly, Cloud Posse has evolved its approach to access control over time. This document traces that history as a series of "evolutions." The intent is to help you understand which version of the access control system you are currently using, how it integrates with current and future versions of the Reference Architecture, what features are available but not yet incorporated into your system, and what the benefits of upgrading would be.
This document is not intended as an upgrade guide. Upgrade guides are found in release notes and component documentation. This document is intended to help you understand what you have and what you can have, and to help you decide when and if you want to upgrade.
Understanding the evolution helps you:
- **Identify your current architecture** — Know which version you're using
- **Understand migration paths** — See how to upgrade to the current approach
- **Maintain legacy systems** — Support existing deployments that haven't migrated
### Features are Both Code and Configuration
The evolutions described here are implemented in both code and configuration. Some upgrades require changes to every related component, others to only a few, and sometimes to just one. If the components already support a feature, only configuration changes are needed.
### The Four Evolutions
The access control system progressed through four major evolutions:
1. **IAM Primary and Delegated Roles** — The original architecture with separate role components
1. **AWS Teams and Team Roles** — Hub-and-spoke pattern with centralized identity account
1. **Dynamic Terraform Roles** — Refined team roles with plan-only access and normalization
1. **Permission Sets and IAM Roles** — Current approach using Identity Center directly
## Evolution 1: IAM Primary and Delegated Roles
**Components:** `iam-primary-roles`, `iam-delegated-roles`
The original Cloud Posse architecture predates Atmos and was designed to work with standard Terraform configurations.
### How It Worked
- `iam-primary-roles` deployed to the `identity` account created both the primary roles and templates for delegated roles
- `iam-delegated-roles` deployed to other accounts created roles based on those templates
- Users logged in via SAML to a primary role, then assumed delegated roles in other accounts
### Limitations
- **Confusing configuration** — The dual-purpose nature of `iam-primary-roles` made configuration error-prone
- **Inflexible role management** — Adding new roles to a subset of accounts was difficult
- **No SSO integration** — Predated AWS Identity Center
- **SuperAdmin dependency** — Many operations required the SuperAdmin role because the Terraform state backend role wasn't available until after `iam-delegated-roles` was deployed
## Evolution 2: AWS Teams and Team Roles
**Components:** `aws-teams`, `aws-team-roles`, `aws-saml`, `account-map`
Introduced in [Components v1.27.0](https://github.com/cloudposse/terraform-aws-components/releases/tag/1.27.0), this architecture separated the concepts of "teams" (user groups) from "team roles" (permissions in accounts).
### How It Worked
- `aws-teams` deployed to the `identity` account created team IAM roles (like groups)
- `aws-team-roles` deployed to each account created consistent roles (`admin`, `terraform`, `planner`, etc.)
- The `account-map` component provided dynamic lookups between teams and roles
- Users logged in to a team, then automatically assumed appropriate roles via role chaining
### Architecture
```mermaid
flowchart LR
subgraph IdP["Identity Provider"]
user["User"]
end
subgraph identity["core-identity"]
devops["devops team"]
developers["developers team"]
end
subgraph dev["plat-dev"]
dev_admin["admin role"]
dev_terraform["terraform role"]
end
subgraph prod["plat-prod"]
prod_admin["admin role"]
prod_planner["planner role"]
end
user -->|SAML| devops
user -->|SAML| developers
devops --> dev_admin
devops --> dev_terraform
devops --> prod_admin
developers --> dev_admin
developers --> prod_planner
```
### Benefits Over Evolution 1
- **Clear separation** — Teams are distinct from team roles
- **Centralized management** — All team definitions in one place
- **SSO integration** — Permission Sets could mirror teams
- **Atmos support** — DRY configuration via Atmos stacks
### Limitations
- **Dynamic dependencies** — The `account-map` component created circular dependencies
- **Complex role discovery** — Terraform had to query `account-map` to find the right role
- **Identity account overhead** — Required a dedicated `core-identity` account
## Evolution 3: Dynamic Terraform Roles
**Components:** `aws-teams`, `aws-team-roles`, `account-map`, `tfstate-backend` (updated)
Introduced in [Components v1.227.0](https://github.com/cloudposse/terraform-aws-components/releases/tag/1.227.0), this evolution refined the teams architecture with better Terraform integration.
### Key Improvements
- **Plan-only access** — New `planner` role for `terraform plan` without apply permissions
- **Normalized role names** — Consistent `terraform` role in all accounts (including `root` and `identity`)
- **Backend role in tfstate-backend** — Terraform state access role created earlier, reducing SuperAdmin dependency
- **Individual per-account access** — Users could use their own role permissions instead of team roles
### Features Added
| Feature | Description |
|---------|-------------|
| Plan-only access | `planner` role for drift detection without modification capability |
| Per-account users | SSO users can Terraform directly in accounts they have access to |
| Normalized roles | `terraform` role works consistently in all accounts |
| AWS config generation | Automated generation of AWS CLI and browser plugin configs |
### Limitations
- **Still requires account-map** — Dynamic lookups remained
- **Complex upgrade path** — Migration required careful coordination
- **Circular dependencies** — Components still had implicit dependencies on each other
## Evolution 4: Permission Sets and IAM Roles (Current)
**Components:** `aws-sso`, `iam-role`
The current recommended architecture eliminates the complexity of teams and dynamic lookups in favor of direct Permission Set assignments and static configuration.
### How It Works
- **Human users** authenticate via Identity Center and assume Permission Sets directly
- **Machine users** (CI/CD) authenticate via OIDC and assume IAM roles
- **No account-map** — Account mappings are static YAML in stack configurations
- **No identity account** — Identity managed in `core-root` alongside other core services
### Architecture
```mermaid
flowchart LR
subgraph IdP["Identity Provider"]
user["Human User"]
gh["GitHub Actions"]
end
subgraph sso["Identity Center"]
ps_admin["AdministratorAccess"]
ps_apply["TerraformApplyAccess"]
ps_plan["TerraformPlanAccess"]
end
subgraph iam["IAM Roles"]
role_terraform["iam-role/terraform"]
role_planner["iam-role/planner"]
end
subgraph accounts["AWS Accounts"]
dev["plat-dev"]
prod["plat-prod"]
end
user --> sso
ps_admin --> dev
ps_admin --> prod
ps_apply --> dev
ps_plan --> prod
gh -->|OIDC| role_terraform
gh -->|OIDC| role_planner
role_terraform --> dev
role_planner --> prod
```
### Permission Sets
| Permission Set | Purpose |
|----------------|---------|
| `AdministratorAccess` | Full access for administrative tasks |
| `PowerUserAccess` | Full access except IAM management |
| `ReadOnlyAccess` | View resources without modification |
| `TerraformApplyAccess` | Run `terraform apply` |
| `TerraformPlanAccess` | Run `terraform plan` only |
| `TerraformStateAccess` | Access Terraform state backend |
### Benefits
- **No circular dependencies** — Static configuration eliminates dynamic lookups
- **Simpler architecture** — Fewer components to manage
- **Direct SSO access** — No intermediate team role assumption required
- **Clear separation** — Human users use Permission Sets, machines use IAM roles
- **Atmos Auth integration** — Seamless CLI authentication via `atmos auth`
### Migration
For migration from the teams-based architecture, see [Migrate from Account-Map](/layers/project/tutorials/migrate-from-account-map/).
## Identifying Your Architecture
| If you have... | You're using... |
|----------------|-----------------|
| `iam-primary-roles` and `iam-delegated-roles` | Evolution 1 |
| `aws-teams`, `aws-team-roles`, `account-map` | Evolution 2 or 3 |
| `aws-sso`, `iam-role`, no `account-map` | Evolution 4 (current) |
### Component Forensics
**Evolution 1:** Look for `iam-primary-roles` or `iam-delegated-roles` directories in your components.
**Evolution 2/3:** Check if `account-map/modules/roles-to-principals/variables.tf` exists. If it contains `overridable_team_permission_sets_enabled`, you have at least Evolution 3.
**Evolution 4:** You have `aws-sso` component but no `account-map` component, and your stacks use static `account_map` variables.
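The forensic checks above can be scripted. The following is a hypothetical helper (the directory layout is assumed to follow the standard `components/terraform` convention; adjust the path for your repository):

```bash
#!/bin/sh
# Guess which access-control evolution a repo uses by inspecting
# which component directories exist under components/terraform.
detect_evolution() {
  dir="${1:-components/terraform}"
  if [ -d "$dir/iam-primary-roles" ] || [ -d "$dir/iam-delegated-roles" ]; then
    echo "Evolution 1"
  elif [ -d "$dir/account-map" ]; then
    # Evolution 3 introduced this variable in account-map
    if grep -qs overridable_team_permission_sets_enabled \
        "$dir/account-map/modules/roles-to-principals/variables.tf"; then
      echo "Evolution 3"
    else
      echo "Evolution 2"
    fi
  elif [ -d "$dir/aws-sso" ]; then
    echo "Evolution 4"
  else
    echo "Unknown"
  fi
}

detect_evolution "$@"
```

Run it from the root of your infrastructure repository; it prints the evolution it detects, or `Unknown` if none of the marker components are present.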
## Recommendations
**New deployments** should use Evolution 4 (Permission Sets and IAM Roles).
**Existing deployments** on Evolution 2 or 3 should plan migration to Evolution 4 to benefit from:
- Simplified architecture
- No circular dependencies
- Better Atmos Auth integration
- Reduced maintenance overhead
**Evolution 1 deployments** should upgrade directly to Evolution 4 rather than going through intermediate evolutions.
---
## Using AWS SAML to Access AWS
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
AWS SAML provides federated identity access to AWS, allowing users to authenticate via their Identity Provider and assume IAM roles directly. While the reference architecture defaults to AWS IAM Identity Center with Permission Sets, AWS SAML remains fully supported for organizations that prefer or require direct SAML federation.
## When to Use AWS SAML
AWS SAML is an alternative to AWS IAM Identity Center that provides:
1. **Lower-level control** — Direct SAML federation to IAM roles without the Identity Center abstraction
1. **Multiple concurrent IdPs** — Support for multiple Identity Providers simultaneously
1. **Legacy compatibility** — Works with existing SAML-based authentication workflows
## Requirements
Using AWS SAML instead of IAM Identity Center requires modifications to your Atmos Auth configuration:
1. **Configure profiles for IAM roles** — Map Atmos profiles to IAM roles instead of Permission Sets
1. **Deploy the `aws-saml` component** — Create the SAML Identity Provider in AWS
1. **Configure your IdP** — Set up SAML federation in your Identity Provider
## Setup
### Export IdP Metadata
Export a metadata file from your Identity Provider. The process varies by provider:

**Google Workspace:**
1. Open the [AWS documentation for GSuite](https://aws.amazon.com/blogs/desktop-and-application-streaming/setting-up-g-suite-saml-2-0-federation-with-amazon-appstream-2-0/)
1. Follow Steps 1 through 7 (the process is the same for any AWS service)
1. Download the metadata file

**Okta:**
1. Create an "Amazon Web Services Account Federation" application in Okta
1. Select "SAML 2.0" as the Sign-On Method
1. View and download the identity provider (IdP) metadata file

For details, see the official [Okta documentation](https://help.okta.com/en-us/Content/Topics/DeploymentGuides/AWS/aws-configure-identity-provider.htm).

**JumpCloud:**
Follow the [JumpCloud documentation](https://support.jumpcloud.com/support/s/article/getting-started-applications-saml-sso2) and download the metadata file.

**Microsoft Entra ID:**
The setup for Microsoft Entra ID (formerly Azure AD) has some nuances. See our [Microsoft Entra ID guide](/layers/identity/tutorials/how-to-setup-saml-login-to-aws-from-office-365/) for detailed instructions.
### Import the Metadata File
Place the metadata file in your infrastructure repository:
1. Save the file to `components/terraform/aws-saml/`
1. Update `stacks/catalog/aws-saml.yaml` to reference the filename
1. Commit to version control
For Okta, ensure the `var.saml_providers` map key ends with `-okta`. This suffix triggers creation of a dedicated IAM user for Okta role discovery:
```yaml
saml_providers:
acme-okta: "OktaIDPMetadata-acme.com.xml"
```
### Deploy the SAML Integration
Deploy the `aws-saml` component to your root account:
```bash
atmos terraform apply aws-saml -s core-gbl-root
```
### Complete IdP Setup
Complete the integration in your Identity Provider:
Follow the [official Okta documentation](https://help.okta.com/en-us/content/topics/deploymentguides/aws/aws-configure-aws-app.htm) to complete setup.
**Important notes:**
- The `aws-saml` component creates an IAM User for Okta to discover roles. Access keys are stored in AWS SSM Parameter Store.
- In Okta's "Provisioning" tab, check **"Update User Attributes"** for roles to populate correctly.
### Configure Atmos Auth for SAML
Update your Atmos Auth configuration to use the SAML provider and IAM roles instead of Permission Sets.
First, define the SAML provider in your `atmos.yaml`:
```yaml
# atmos.yaml
auth:
providers:
acme-okta:
kind: aws/saml
region: us-east-1
url: https://acme.okta.com/app/amazon_aws/abc123/sso/saml
idp_arn: arn:aws:iam::123456789012:saml-provider/acme-okta
driver: Okta # Options: Browser, GoogleApps, Okta, ADFS
```
Then, define identities that use the SAML provider. The `aws/saml` provider requires chaining to an `aws/assume-role` identity:
```yaml
# atmos.yaml (continued)
auth:
identities:
plat-dev/terraform:
kind: aws/assume-role
via:
provider: acme-okta # References the SAML provider defined above
principal:
assume_role: arn:aws:iam::111111111111:role/acme-plat-gbl-dev-terraform
session_name: atmos-session
```
See [Atmos Auth Providers](https://atmos.tools/cli/configuration/auth/providers/#aws-saml) and [Atmos Auth Identities](https://atmos.tools/cli/configuration/auth/identities/) for detailed configuration options.
### (Optional) AWS Extend Switch Roles
For easier role-switching in the AWS Console, use the [AWS Extend Switch Roles](https://github.com/tilfinltd/aws-extend-switch-roles) browser extension.
Copy the configuration from `rootfs/etc/aws-config` in your infrastructure repository into the plugin.
## Comparison with IAM Identity Center
| Feature | AWS SAML | IAM Identity Center |
|---------|----------|---------------------|
| Setup complexity | Higher | Lower |
| User experience | Manual role selection | Integrated portal |
| Multiple IdPs | Supported | Single IdP |
| Permission management | IAM roles directly | Permission Sets |
| Atmos Auth support | Yes (aws/saml kind) | Yes (aws/sso kind) |
| Reference architecture default | No | Yes |
---
## How to Setup SAML Login to AWS from Office 365
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
## Problem
Office 365 is a common business suite that uses Azure Active Directory to manage users and permissions. We need to use this directory to log in to AWS.
## Solution
Azure Active Directory has **Enterprise Applications** that are used to sign in to services from your O365 account.
### AWS SAML
1. Under [https://aad.portal.azure.com/#allservices/category/All](https://aad.portal.azure.com/#allservices/category/All) go to **Enterprise Applications**
2. Click **New Application**
3. Choose the right application, for SAML it’s **AWS Single-Account Access**
4. Click **Create**, default options are fine
5. On the Left Panel Click **Single sign-on**
6. Choose **SAML** as the Select a single sign-on method.
7. You may be prompted to change or keep the defaults; if you have many AWS single-account logins, you will need to modify the defaults.
8. Ensure the Identifier (Entity ID) is set to something valid. It should be `https://signin.aws.amazon.com/saml`; optionally add a `#identifier` suffix.
9. Download the XML file using the button in **Step 5, Set up <App Name>**. Send this file to the Cloud Posse team; it will be placed in your `aws-saml` component to add your login.
### Setting up Login Role
The next steps determine which role you sign into from the app. By default, we recommend this be the admin team role, which has administrative access in almost every account (but cannot create roles in `identity`, nor manage organization permissions in `root`). If you want to use a different team role, please ensure you understand the team permissions.
- [aws-team-roles](/components/library/aws/aws-team-roles/)
- [aws-teams](/components/library/aws/aws-teams/)
Under the app's **Single sign-on** configuration (where the last steps left off), **Step 2** has an attribute called Role. This is the role the user attempts to sign in to.
The AWS Docs ([https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml_assertions.html#saml_role-attribute](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml_assertions.html#saml_role-attribute)) require this to be of the format:
```
arn:aws:iam::account-number:role/role-name1,arn:aws:iam::account-number:saml-provider/provider-name
```
This means your **user.assignedroles** in Azure should be `arn:aws:iam::account-number:role/role-name1,arn:aws:iam::account-number:saml-provider/provider-name`,
for example: `arn:aws:iam::00000000000:role/abc-core-gbl-identity-admin,arn:aws:iam::00000000000:saml-provider/abc-core-gbl-identity-provider-azure-ad`. These values are generated by the `aws-saml` component and should be provided by the Cloud Posse team.
:::caution
Changing a user's assigned roles requires an upgraded Azure Active Directory tier (a paid feature).
:::
:::tip
If you are using the most basic plan, you can work around this paywall with a regex transformation.
:::
### Regex Workaround
- In Step **2**, edit the attributes and claims, specifically the **Role** attribute
- Set the Source to be a transformation of any attribute, such as `RegexReplace (user.primaryauthoritativeemail)`
- Set the `Regex pattern` to `(.+)$`
- Set the `Replacement pattern` to your value, `arn:aws:iam::account-number:role/role-name1,arn:aws:iam::account-number:saml-provider/provider-name`
- Test the replacement and ensure you get a value like `arn:aws:iam::account-number:role/role-name1,arn:aws:iam::account-number:saml-provider/provider-name`.
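The net effect of the transformation is that whatever value the source attribute holds is replaced wholesale by the fixed role string, since `(.+)$` matches the entire value. A quick way to sanity-check the pattern locally (the email address and ARNs below are placeholders):

```bash
# `(.+)$` matches the whole input, so the replacement discards it entirely
echo "jane@example.com" \
  | sed -E 's|(.+)$|arn:aws:iam::111111111111:role/example-role,arn:aws:iam::111111111111:saml-provider/example-provider|'
# arn:aws:iam::111111111111:role/example-role,arn:aws:iam::111111111111:saml-provider/example-provider
```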
## Add the App to Specific Users
Then give this app to specific users and let them log in to the aws-team!
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These are some additional tutorials that will help you along with the associated identity layer components.
---
## How to Monitor Everything with Datadog
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Admonition from '@theme/Admonition';
This outlines Cloud Posse's practical approach to monitoring with Datadog, focusing on defining reusable Service Level Indicators (SLIs) and Service Level Objectives (SLOs) for consistent implementation across customer environments. It aims to help businesses streamline monitoring and incident management by aligning technical performance with business goals.
Our goal with this document is to identify the reusable, standard SLI/SLO for our customers that we can readily implement time and time again.
This document goes very deep into the theory and practice of monitoring with Datadog.
If you are looking for a quick start guide, see [our Setup Guide for DataDog](/layers/monitoring/datadog/setup/).
If you are trying to solve a specific problem, check out [our tutorials for DataDog](/layers/monitoring/datadog/tutorials/).
## Problem
A typical business operating in AWS has literally tens of thousands of data points, which leads to analysis paralysis. If we sit down and try to make sense of it all, we realize that knowing what is critical, and when, is non-trivial. Modern advances in monitoring platforms have brought Artificial Intelligence and Anomaly Detection, but those are not foolproof. There is no silver bullet. AI cannot tell us what is mission-critical for _our_ business to succeed, because it only sees part of the story: it has no visibility into our finances or business objectives. Defining Service Level Objectives therefore becomes paramount. While SLOs are mathematically trivial to compute, defining what counts as a service in the context of your organization, and what your objectives are, is still very subjective. There’s an overwhelming amount of theory on what we should do, but not much prescriptive, step-by-step advice on how to actually implement it using IaC, resulting in a fully functioning system.
## Solution
- Define a process for us to help our customers easily define SLOs that are special to their business
- Define a process for us to identify generalized SLOs for every one of our customers that have a very common stack
- Use SLOs to determine the health of a service, where a service's consumers are not always the end-users of the _business_, but might be internal customers
- Use SLOs to reduce the number of alerts and concentrate them specifically on violations of the objectives
## Definitions
Service
Any group of one or more applications (or "components") with a shared purpose for some customer (either internal to some team or other service or external with end-users).
Service Level Indicator (SLI)
A quantitative measurement of a service's performance or reliability (e.g. the amount of time a transaction took). SLIs may or may not be expressed as percentages. In Datadog, the SLI is implemented as a metric, a synthetic, or an aggregation of one or more monitors; there is no native SLI resource in Datadog.
Service Level Objective (SLO)
A target percentage for an SLI over a specific period of time. It's always expressed as a percentage. In Datadog it's a specific resource type defined with a numerator and denominator. A score of 100% is excellent; 0% is dead. Datadog has native support for SLOs as a resource.
Service Level Agreement (SLA)
An explicit (e.g. contractual agreement) or implicit agreement between a client and service provider stipulating the client's reliability expectations and service provider's consequences for not meeting them.
Error Budget
The allowed amount of unreliability derived from an SLO's target percentage (100% - target percentage) that is meant to be invested into product development. There's a corresponding Burn Rate associated with an Error Budget that is equally important to understand because it's the rate of change. Datadog automates burn rate calculations as part of the SLO widget.
MTTF (mean time to failure)
The outage frequency.
MTTR (mean time to restore)
The outage duration, as experienced by users: lasting from the start of a malfunction until normal behavior resumes.
Availability
A percentage defined as (uptime)/(total length of time), using appropriate units (e.g. seconds), or (1 - (MTTR/MTTF)) x 100%.
RED method
(Rate, Errors, and Duration) focuses on monitoring your services, leaving their infrastructure aside and giving you an external view of the services themselves—in other words, from the client's point of view.
USE method
(Utilization, Saturation, and Errors) focuses on the utilization of resources to quickly identify common bottlenecks; however, this method only uses request errors as an external indicator of problems and is thus unable to identify latency-based issues that can affect your systems as well.
Some definitions borrowed from Datadog's [Key Terminology](https://docs.datadoghq.com/monitors/service_level_objectives/#key-terminology).
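As a quick worked example of the availability definition above (the downtime figure here is made up for illustration): 43 minutes of downtime in a 30-day window yields roughly 99.9% availability.

```bash
# availability = (uptime) / (total length of time), in minutes
awk 'BEGIN { total = 30 * 24 * 60; down = 43; printf "%.3f%%\n", (total - down) / total * 100 }'
# 99.900%
```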
## Theory
### Golden Signals (SLIs)
Golden Signals are a form of telemetry that applies to anything with throughput. Anything that’s not a golden signal is considered general telemetry.
Latency
The time it takes to service a request. It’s important to distinguish between the latency of successful requests and the latency of failed requests.
Traffic
A measure of how much demand is being placed on your system, measured in a high-level system-specific metric. For a web service, this measurement is usually HTTP requests per second.
Errors
The rate of requests that fail, either explicitly (e.g., HTTP 500s), implicitly (for example, an HTTP 200 success response, but coupled with the wrong content), or by policy (for example, "If you committed to one-second response times, any request over one second is an error").
Saturation
How "full" your service is. A measure of your system fraction, emphasizing the resources that are most constrained (e.g., in a memory-constrained system, show memory; in an I/O-constrained system, show I/O).
These golden signals are closely related to the [RED metrics](https://www.weave.works/blog/the-red-method-key-metrics-for-microservices-architecture/) for microservices: rate, errors, and duration, and the older [USE method](https://www.brendangregg.com/usemethod.html) focusing on utilization, saturation, and errors. These signals are used to calculate the service level objectives (SLOs).
### Other Signals
Here are other useful metrics that are examples that don’t necessarily fit into the Golden Signals.
#### Pull Requests
Time to Merge
The duration between the creation of a pull request and when it's merged. This metric reflects the efficiency of
the review and merging process.
Lead Time
The total time from when work on a pull request starts until it's merged. It measures the overall speed of the
development process.
Size (LOC)
The number of lines of code (LOC) changed in a pull request. This metric helps gauge the complexity and potential
impact of the changes.
Flow Ratio
The ratio of the total number of pull requests opened to those closed in a day. It indicates whether the team's
workflow is balanced and sustainable.
Discussions & Comments
The number of comments and discussions on a pull request. This metric shows the level of collaboration and code
review quality.
Force Merged by Admins (bypassing approvals)
The number of pull requests merged by admins without the usual approval process. This metric highlights instances
where standard review procedures were skipped.
#### Code Quality
Code Coverage
The percentage of your codebase covered by automated tests. This metric helps ensure that your code is thoroughly
tested.
Static Code Analysis
The number of issues found by static code analysis tools. This metric helps identify potential bugs and security
vulnerabilities.
Defect Escape Rate
The percentage of defects found in production that weren't caught by automated tests. This metric indicates the
effectiveness of your testing strategy.
#### Customer Experience
Page Load Time
The time it takes for a web page to load. This metric is crucial for ensuring a positive user experience.
Browser Interaction Time
The time it takes for a user to interact with a web page. This metric helps gauge the responsiveness of your
application.
Server Error Rate
The percentage of server errors returned to users. This metric indicates the reliability of your backend
infrastructure.
JS Errors
The number of JavaScript errors encountered by users. This metric helps identify issues that impact the user
experience.
### Service-Oriented View
#### What is a Service?
- A service is anything that warrants having an SLO
- A service has a contract with internal parties or external parties to provide a “service” with certain guarantees. Guarantees with a contractual commitment are SLAs.
- We should be able to look at a business service and drill down to the alerts affecting that service
#### Questions to ask:
- What are the services we provide? e.g. API
- Who are the consumers of those services? e.g. internal users or customers
- Who is the support team of those services?
### Error Budget
The Error Budget determines risk tolerance. It is usually a calculation of the remaining time in a given period that the SLO can be violated before the SLA is violated.
When the budget is depleted (nearing 0%), reliability should be prioritized, which often results in:
- freeze feature releases
- prioritize postmortem items
- improve automation over human toil
- improve monitoring and observability
- consult/collaborate with SRE
When an Error Budget is abundant (remaining error budget > 0%), velocity should be prioritized, which results in:
- release new features
- make expected system changes
- try risky (but valuable) experiments
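The budget itself is simple arithmetic. For example, a 99.9% SLO target over a 30-day window leaves an error budget of 43.2 minutes:

```bash
# error budget = (100% - target percentage) * window length, in minutes
awk 'BEGIN { target = 99.9; days = 30; printf "%.1f minutes\n", (1 - target / 100) * days * 24 * 60 }'
# 43.2 minutes
```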
## Our Strategy
1. #### Decide on what qualifies as a “service” for the business
1. #### For every service, decide on how to quantify the 4 [_golden signals_](#golden-signals-slis), **these are your SLI**s
- [Latency](#latency)
- [Traffic](#traffic)
- [Errors](#errors)
- [Saturation](#saturation)
1. #### Determine how to create **SLOs** from these SLIs. This should be a simple math formula computed over time.
1. #### Incidents are created **ONLY** in response to violations of the SLO for some period of time _t._
1. #### There should be a 24 hour, 7 day, 30 day and 1 year SLO (for customers).
1. #### All alerts that relate to some SLO and Incident share the same tag convention.
1. #### Our SLO Dashboards should be by Service, coalescing into Team Dashboards, which further group into Organization Dashboards.
1. #### Organization/Team Dashboards should include a list of services (as we can only use DD’s APM Service for things that execute - such as containers, lambdas, and hosts)
1. #### Each service should have enough monitors to accurately describe when it is operating **correctly** and when it is not, no more. **Ideally** **1-3 SLIs per user journey** [[source](https://www.datadoghq.com/videos/solving-reliability-fears-with-service-level-objectives/#focus-on-one-to-three-slis)].
**As Simple as Possible, No Simpler**

[From Google's SRE Book](https://sre.google/sre-book/monitoring-distributed-systems/):

> To avoid an overly complex, and eventually unmaintainable monitoring system, we propose avoiding unnecessary complexity, such as:
> - Alerts on different latency thresholds, at different percentiles, on all kinds of different metrics
> - Extra code to detect and expose possible causes
> - Associated dashboards for each of these possible causes
>
> The sources of potential complexity are never-ending. Like all software systems, monitoring can become so complex that it’s fragile, complicated to change, and a maintenance burden.
> Therefore, design your monitoring system with an eye toward simplicity. In choosing what to monitor, keep the following guidelines in mind:
> - The rules that catch real incidents most often should be as simple, predictable, and reliable as possible.
> - Data collection, aggregation, and alerting configuration that is rarely exercised (e.g., less than once a quarter for some SRE teams) should be up for removal.
> - Signals that are collected, but not exposed in any prebaked dashboard nor used by any alert, are candidates for removal.
## Theory into Practice
### SLIs
We should create monitors for the [_Golden Signals_](#golden-signals-slis) for a given service. These are the SLIs.
We express SLIs as a percentage out of 100%.
Percentages are easy for the widest audience to understand: 0% is bad, 100% is good.
In Datadog, we can create a monitor for each SLI.
We can also create a Datadog SLO for each SLI, though not every SLI needs to be one.
An SLI may be part of one or more SLOs (e.g. By Monitor Uptime).
[See Datadog Docs for details](https://docs.datadoghq.com/service_management/service_level_objectives/metric/#define-queries).
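As a sketch of the "percentage out of 100" framing, an SLI reduces to good events over total events (the counts here are invented):

```shell
# Hypothetical SLI: fraction of good events, expressed as a percentage.
good_events=99950
total_events=100000
sli=$(awk "BEGIN { printf \"%.2f\", 100 * $good_events / $total_events }")
echo "SLI: ${sli}%"
```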
### SLAs
SLAs should be thought of like SLOs but on an annual basis and tied to a contractual commitment (expectations, impacts, and consequences) with customers. These usually have penalties associated with them.
Additionally, SLAs are typically on a calendar year, and not a rolling window of 365 days.
### SLOs
With well-defined SLOs we can ensure that everyone from the business side to engineering is aligned on what’s important and what the goals are for the business at large.
- There is no one SLO. Each service gets its own SLO.
- Combine significant SLIs for a given capability into a single SLO for that capability
- A dashboard must exist that aggregates all SLOs in one place.
- Technically, SLOs are not restricted to just what’s in Datadog, however, that’s just what we’re able to monitor.
- Not every SLO is associated with an SLA
SLOs will not necessarily be the same per business.
#### Goals
- As few SLOs as possible, but no fewer
- Aggregate as much as possible into an SLO, but for the _right_ audience
#### Examples
| **Business Impact** | **Good examples of SLOs** | **Bad Examples of SLOs** |
|-----------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Our SEO experts require that pages respond within 300ms or we will drop in rankings. The website receives most of its traffic from natural search. | 99.95% of pages respond with a 200 response code in less than 300ms | CPU utilization is less than 95%; SQL Database Read I/O is at 90% utilization |
| As an e-commerce business, users must be able to add products to their cart and checkout or have no revenue. | 99.99% of cart transactions succeed without error. 99.99% of checkout transactions succeed without error. | |
| As an online store, customers are much more likely to purchase products with images than with placeholder images. | 99.99% of images served are actual product images (e.g. 99.99% of images are served within a latency of 100ms; 99.99% of images are served with a status code of 2XX) | |
#### Implementation
[Datadog's SLO Page](https://app.datadoghq.com/slo/manage) and [Terraform resource](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/reference/service_level_objective)
We should have multiple early indicators (aka thresholds) for when the SLO is in jeopardy: e.g. when 10%, 25%, 50%, 75% of the error budget is exhausted.
**Target**: 99.9% Uptime
**Error Budget**: 0.1%
**“...at this rate, you’ll exhaust the error budget (violate the SLO) in X time.”**
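The "at this rate, you'll exhaust the error budget in X time" projection is a simple division. A sketch with assumed numbers:

```shell
# Hypothetical burn-rate projection: at the current error rate, when is the budget gone?
window_hours=$((30 * 24))   # 720h rolling window
budget_hours=$(awk "BEGIN { print $window_hours * (100 - 99.9) / 100 }")   # ~0.72h at 99.9%
burn_per_day=0.12           # budget hours consumed per day at the current error rate (assumed)
days_left=$(awk "BEGIN { printf \"%.1f\", $budget_hours / $burn_per_day }")
echo "At the current burn rate, the budget is exhausted in ${days_left} days"
```

Thresholds at 10%, 25%, 50%, and 75% budget consumption give early warning before this projection reaches zero.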
Every monitor (SLI) that influences a given SLO should share a common tag value (E.g. `slo:myapp`) with the SLO. In other words, the SLO should be tagged `slo:myapp` and the monitors (SLI) corresponding to that (the _golden signals)_ should also then be tagged `slo:myapp`.
:::caution
Datadog SLOs are implemented on a rolling window, while SLAs may be tied to a calendar period (e.g. the calendar year).
:::
```hcl
# Create a new Datadog service level objective
resource "datadog_service_level_objective" "foo" {
  name        = "Example Metric SLO"
  type        = "metric"
  description = "My custom metric SLO"

  query {
    numerator   = "sum:my.custom.count.metric{type:good_events}.as_count()"
    denominator = "sum:my.custom.count.metric{*}.as_count()"
  }

  # 7 day rolling window
  thresholds {
    timeframe       = "7d"
    target          = 99.9
    warning         = 99.99
    target_display  = "99.900"
    warning_display = "99.990"
  }

  # 1 month rolling window
  thresholds {
    timeframe       = "30d"
    target          = 99.9
    warning         = 99.99
    target_display  = "99.900"
    warning_display = "99.990"
  }

  # 1 year rolling window (SLA)
  thresholds {
    timeframe       = "365d"
    target          = 99.9
    warning         = 99.99
    target_display  = "99.900"
    warning_display = "99.990"
  }

  tags = ["foo:bar", "baz"]
}
```
### Dashboards
- Only SLOs are on the SLO dashboard
- A dashboard must exist that aggregates all SLOs in one place, grouped logically by service
- Each widget displays an SLO with the remaining error budget.
## FAQ
- **How to visualize error budgets and burn rate?**
- [Using Datadog's SLO Summary](https://docs.datadoghq.com/dashboards/widgets/slo/#setup)
- [Datadog Error Budget Alerts](https://docs.datadoghq.com/monitors/service_level_objectives/error_budget/#overview)
- [SLO Checklist](https://docs.datadoghq.com/monitors/guide/slo-checklist/)
## References
### Datadog:
- [Track the status of all your SLOs in Datadog](https://www.datadoghq.com/blog/slo-monitoring-tracking/)
- [Best practices for managing your SLOs with Datadog](https://www.datadoghq.com/blog/define-and-manage-slos/)
- [Datadog Picking good SLIs](https://www.datadoghq.com/blog/establishing-service-level-objectives/#picking-good-slis)
- [DataDog Learning Center](https://learn.datadoghq.com/)
- Video: [Error budgets for SLOs](https://www.datadoghq.com/videos/solving-reliability-fears-with-service-level-objectives/#error-budgets-for-slos)
### Google SRE:
- [Monitoring Distributed Systems](https://sre.google/sre-book/monitoring-distributed-systems/)
- [Setting SLOs: a step-by-step guide](https://cloud.google.com/blog/products/management-tools/practical-guide-to-setting-slos)
- [SRE at Google: Our complete list of CRE life lessons](https://cloud.google.com/blog/products/devops-sre/sre-at-google-our-complete-list-of-cre-life-lessons)
### Blogs:
- [SREs: Stop Asking Your Product Managers for SLOs](https://devops.com/sres-stop-asking-your-product-managers-for-slos/?utm_source=pocket_mylist)
- [SRE fundamentals: SLIs, SLAs and SLOs](https://cloudplatform.googleblog.com/2018/07/sre-fundamentals-slis-slas-and-slos.html?utm_source=pocket_mylist)
- [Microservice Observability, Part 1: Disambiguating Observability and Monitoring](https://bravenewgeek.com/microservice-observability-part-1-disambiguating-observability-and-monitoring/?utm_source=pocket_mylist)
- [5 metrics Engineering Managers can extract from Pull Requests](https://sourcelevel.io/blog/5-metrics-engineering-managers-can-extract-from-pull-requests)
- [Metrics to Improve Continuous Integration Performance](https://harness.io/blog/continuous-integration-performance-metrics/)
- [SLIs and Error Budgets: What These Terms Mean and How They Apply to Your Platform Monitoring Strategy](https://tanzu.vmware.com/content/blog/slis-and-error-budgets-what-these-terms-mean-and-how-they-apply-to-your-platform-monitoring-strategy)
- [SLOs and SLIs best practices for systems](https://newrelic.com/blog/best-practices/best-practices-for-setting-slos-and-slis-for-modern-complex-systems)
## Appendix
#### Golden Signals Workbook
| | **Golden Signals** | | | |
|------------------------------------|--------------------------------------------------------------------------|--------------------------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------------------|
| **Services** | **Latency** | **Traffic** | **Errors** | **Saturation** |
| | | | | |
| **Platform Owners** | | | | |
| Kubernetes Cluster: Pod Scheduling | | | | |
| Kubernetes Cluster: Capacity | | Healthy Nodes / Total Nodes online | | |
| Application Deployments | Time to Deploy / Total time of | Number of Successful Deployments / Total number of Deployments | 1 - (Unsuccessful Deployments / Total Deployments) | Total Time to Deploy in Seconds per Duration / Seconds in Duration (e.g. 86400 = 1 day) |
| AWS Spend | | | | |
| | | | | |
| **Security & Compliance** | | | | |
| | Time to Acknowledge; Time to Fix | | False Positives; Number of Security Vulnerabilities | |
| | | | | |
| **Development & PM** | | | | |
| Pull Request Throughput | 1 - (Time to Close or Merge PR / 1 Sprint) | 1 - (Number of Open PRs / Total Number of PRs) | PRs Open with Tests Passing / Total PRs Open | 1 - (PRs Open / Max Number of PRs Acceptable) |
| Sprint Throughput | Total Number of Sprints to Complete Issues / Total Number of Issues | Issues Transitioned to Done / Total Issues in Sprint | Bugs Added to Active Sprint / Total Issues in Sprint | Total Issues Not Completed / Total Issues in Sprint |
| | | | | |
| **Microservice / Web Application** | | | | |
| HTTP Requests | | | | Notes: INFO, DEBUG log level alerts relative to all alerts. |
| Transactions | | | | |
| Synthetic Requests | | | | |
| | | | | |
| **Customer Experience** | | | | |
| | | | | |
| **CI/CD and Release Management** | | | | |
| Lead Time To Deploy | | | | |
| Code Coverage | | | | |
| Test Coverage | | | | |
| GitHub Action Runs | Time to start build | | | |
---
## Setup Datadog
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Admonition from '@theme/Admonition';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
You need to set up monitoring for all of your newly deployed accounts.
Datadog setup can begin once the accounts have been provisioned, though it won't be especially useful until your
platform is in place, usually EKS or ECS.
## Prepare Datadog
You'll need a Datadog account, and you'll need to generate an app key and API key for it and place them in SSM in
your `auto` account. These should be placed under `datadog/default/datadog_app_key` and
`datadog/default/datadog_api_key` respectively.
To generate these keys we recommend using a Datadog Service Account with limited permissions to your Datadog
account. This is useful for security and auditing purposes, and it allows any admin to rotate the keys without
having to go through the account owner.
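For reference, writing the keys into SSM might look like the following. The `--profile` value is a placeholder for however you assume an admin role in the `auto` account, the region should match where the keys are stored, and note that SSM parameter names are conventionally written with a leading slash:

```shell
aws ssm put-parameter \
  --name "/datadog/default/datadog_api_key" \
  --value "<API_KEY>" \
  --type SecureString \
  --region us-east-2 \
  --profile <auto-admin-profile>

aws ssm put-parameter \
  --name "/datadog/default/datadog_app_key" \
  --value "<APP_KEY>" \
  --type SecureString \
  --region us-east-2 \
  --profile <auto-admin-profile>
```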
## Creating a Datadog Service Account
1. Go to your Organizations settings page
2. Click on the `Service Accounts` tab
3. Click on `New Service Account`
4. Give the service account a name and an email address
5. Give the service account the `Datadog Admin Role` (can be refined later)
6. Click `Create Service Account`
7. Click the created service account
8. Under Application Keys, click `New Key`
9. Give the Application Key a name (we recommend something like `terraform`) and click `Create Key`
10. Copy the `Application Key` for later. This is your `datadog_app_key`
11. Under Organization Settings, click `API Keys`
12. Click `New Key`
13. Give the API Key a name (we recommend something like `terraform`)
14. Click `Create Key`
15. Copy the API Key for later. This is your `datadog_api_key`
## Short Version
There are two core components to the Datadog implementation
1. [**datadog-configuration**](https://docs.cloudposse.com/components/library/aws/datadog-credentials)
2. [**datadog-integration**](https://docs.cloudposse.com/components/library/aws/datadog-integration)
Both are deployed to every account except `identity` and `root`. They are deployed to the global stack as they are done
once per account.
Once those are set up, we can begin deploying other components, such as
- [**Monitors**](https://docs.cloudposse.com/components/library/aws/datadog-monitor)
- [**Lambda Log Forwarders**](https://docs.cloudposse.com/components/library/aws/datadog-lambda-forwarder)
- [**Datadog Log Archives**](https://docs.cloudposse.com/components/library/aws/datadog-logs-archive)
We then deploy a setup for monitoring applications based on whether you use EKS or ECS.
For **EKS**
- [**Datadog Agent**](https://docs.cloudposse.com/components/library/aws/eks/datadog-agent)
- [**Datadog Private Locations**](https://docs.cloudposse.com/components/library/aws/datadog-synthetics-private-location)
For **ECS**
- [**ECS-Service**](https://docs.cloudposse.com/components/library/aws/ecs-service) has a
[datadog file](https://github.com/cloudposse/terraform-aws-components/blob/master/modules/ecs-service/datadog-agent.tf)
that manages all of datadog agent configuration for a service (Datadog as a sidecar)
- [**ECS Private Locations**](https://docs.cloudposse.com/components/library/aws/datadog-private-location-ecs)
## Step by Step
You should have a workflow to vendor in your components; otherwise, vendor in each component individually.
## Datadog Configuration
This component handles the creation and duplication of Datadog API and APP keys. This component specifies a source
account (usually `auto`) and a format for copying keys. You specify a source and destination format and a key store.
This allows you to use separate keys for each account, tenant, or anything in between. We recommend either a single set
of keys per Organization or tenant.
This component also handles default configurations such as Datadog URL and provides a default configuration for other
components to utilize via its submodule `datadog_keys`.
Use a configuration similar to the following but check the
[`README.md`](https://docs.cloudposse.com/components/library/aws/datadog-credentials/) for exact input references.
```yaml
components:
  terraform:
    datadog-configuration:
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        name: datadog-configuration
        datadog_secrets_store_type: SSM
        datadog_secrets_source_store_account_stage: auto
        datadog_secrets_source_store_account_region: "us-east-2"
        datadog_site_url: us5.datadoghq.com
```
The most important variables are the key patterns to determine how keys are placed and the Datadog site URL
configuration which should match how you signed up with Datadog.
## Datadog Integration
Vendor in this component with `atmos vendor pull -c datadog-integration`. This component configures the integrations you
have between Datadog and your AWS Accounts. This component is deployed to every account (except `root` and `identity`)
to allow data from everywhere.
This component is used by other components as this component creates the Datadog role for your account.
Deploy this with `atmos terraform deploy datadog-integration -s ${tenant}-gbl-${stage}` for each stack, or
alternatively run:
```shell
atmos terraform deploy datadog-integration -s core-gbl-artifacts
atmos terraform deploy datadog-integration -s core-gbl-audit
atmos terraform deploy datadog-integration -s core-gbl-auto
atmos terraform deploy datadog-integration -s core-gbl-dns
atmos terraform deploy datadog-integration -s core-gbl-network
atmos terraform deploy datadog-integration -s core-gbl-security
atmos terraform deploy datadog-integration -s plat-gbl-sandbox
atmos terraform deploy datadog-integration -s plat-gbl-dev
atmos terraform deploy datadog-integration -s plat-gbl-staging
atmos terraform deploy datadog-integration -s plat-gbl-prod
```
## Datadog Monitors
The `datadog-monitor` component creates monitors for Datadog. It contains a catalog of monitor entries that are deployed
by default to every account this is deployed to. This component is deployed _globally_ as it is only deployed once per
account. By default, we only apply this to `auto` and `plat` accounts. However, it can be added to more accounts as
necessary for monitoring.
Monitors are cataloged through YAML files and perform substitution through Terraform syntax, for example `${stage}`. It
is important to note that this is different from Datadog syntax which is `{{ stage }}`. Anything in Datadog syntax will
be inserted into the monitor as is, whereas Terraform will be substituted. That way we can deploy the same monitors
across accounts and filter by stage or variable known to Terraform.
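For a hypothetical illustration of the two syntaxes side by side (the monitor name, metric, and thresholds below are made up; check the component's catalog for the actual schema):

```yaml
# catalog/monitors/example.yaml (hypothetical)
elevated-error-rate:
  # ${stage} is interpolated by Terraform at deploy time;
  # {{stage.name}} is left as-is for Datadog to render at alert time
  name: "(${stage}) Elevated error rate"
  type: metric alert
  query: "sum(last_5m):sum:my.service.errors{stage:${stage}}.as_count() > 100"
  message: |
    Error rate is elevated in {{stage.name}}
```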
In order to add new monitors, simply add a YAML file to `components/terraform/datadog-monitor/catalog/monitors/`. By
default, the component includes a global collection of monitors:
```bash
components/terraform/datadog-monitor/
├── README.md
├── catalog
│   └── monitors
│       ├── aurora.yaml
│       ├── ec2.yaml
│       ├── efs.yaml
│       ├── elb.yaml
│       ├── host.yaml
│       ├── k8s.yaml
│       ├── lambda-log-forwarder.yaml
│       ├── lambda.yaml
│       ├── rabbitmq.yaml
│       └── rds.yaml
├── component.yaml
├── context.tf
├── main.tf
├── outputs.tf
├── provider-datadog.tf
├── providers.tf
├── variables.tf
└── versions.tf
```
Alternatively, we can add an additional level of nesting to the `datadog-monitor` catalog to categorize monitors by
account. By arranging the catalog as follows, we can distinguish which monitors are deployed to a given stack with
`local_datadog_monitors_config_paths`. This allows us to specify entirely unique monitor paths for each stage.
```bash
components/terraform/datadog-monitor/
├── README.md
├── catalog
│   └── monitors
│       ├── _defaults
│       │   └── example.yaml
│       ├── plat
│       │   ├── dev
│       │   │   └── example.yaml
│       │   ├── staging
│       │   └── prod
└── ...
```
```yaml
# stacks/org/acme/plat/dev/monitoring.yaml
components:
  terraform:
    datadog-monitor:
      vars:
        local_datadog_monitors_config_paths:
          - catalog/monitors/_defaults/*.yaml
          - catalog/monitors/plat/*.yaml
          - catalog/monitors/plat/dev/*.yaml
```
```yaml
# stacks/org/acme/plat/prod/monitoring.yaml
components:
  terraform:
    datadog-monitor:
      vars:
        local_datadog_monitors_config_paths:
          - catalog/monitors/_defaults/*.yaml
          - catalog/monitors/plat/*.yaml
          - catalog/monitors/plat/prod/*.yaml
```
Each monitor is then defined in `components/terraform/datadog-monitor/catalog/monitors/_defaults/`,
categorized by component. It can then be extended into other stages; paths later in the
`local_datadog_monitors_config_paths` list take higher precedence when merging.
Please see [datadog-monitor](https://docs.cloudposse.com/components/library/aws/datadog-monitor/) for more information.
## Lambda Log Forwarders
This component is pretty straightforward to vendor and deploy. The important variables to note are
```yaml
forwarder_rds_enabled: false
forwarder_log_enabled: false
forwarder_vpc_logs_enabled: false
```
as these variables determine which logs are forwarded to Datadog. The main implication of this decision is the cost, as
VPC Flow logs can become incredibly expensive.
## Datadog Logs Archive
This component is also relatively simple to deploy. Vendor in the component and deploy it.
```shell
atmos vendor pull -c datadog-logs-archive
```
Use the configuration in the component README as the stack/catalog entry.
```shell
atmos terraform deploy datadog-logs-archive -s core-gbl-auto
```
## EKS
### Datadog Agent
For EKS deployments we need to deploy the
[Datadog Agent](https://docs.cloudposse.com/components/library/aws/eks/datadog-agent) component. It deploys the Helm
chart for the Datadog Agent, allows the agent to be fully customized, and also provides a format to support
cluster checks, which are a cheaper (though less feature-rich) alternative to synthetic checks.
Vendor in the component and begin deploying. This component is deployed to every region and account where you have an
EKS Cluster.
The component allows customizing the values passed to the Helm chart. This can be useful when passing variables to
support features such as `IMDSv2`:
```yaml
components:
  terraform:
    datadog-agent:
      vars:
        enabled: true
        name: "datadog"
        description: "Datadog Kubernetes Agent"
        kubernetes_namespace: "monitoring"
        create_namespace: true
        repository: "https://helm.datadoghq.com"
        chart: "datadog"
        chart_version: "3.6.7"
        timeout: 1200
        wait: true
        atomic: true
        cleanup_on_fail: true
        cluster_checks_enabled: true
        helm_manifest_experiment_enabled: false
        tags:
          team: sre
          service: datadog-agent
          app: monitoring
        # datadog-agent shouldn't be deployed to the Fargate nodes
        values:
          agents:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: eks.amazonaws.com/compute-type
                          operator: NotIn
                          values:
                            - fargate
          datadog:
            env:
              - name: DD_EC2_PREFER_IMDSV2
                value: "true"
```
This component should be highly customized to meet your needs. Please read through the Datadog Docs to determine the
best configuration for your setup.
#### References
- [Configure the Datadog Agent on Kubernetes](https://docs.datadoghq.com/containers/kubernetes/configuration?tab=helm)
- [Duplicate hosts with Kubernetes on AWS (EC2 or EKS)](https://docs.datadoghq.com/containers/troubleshooting/duplicate_hosts/)
- [Datadog Agent Helm Chart Values Reference](https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml "https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml")
- [Cluster Agent Commands and Options](https://docs.datadoghq.com/containers/cluster_agent/commands/#cluster-agent-options)
- [Kubernetes Trace Collection](https://docs.datadoghq.com/containers/kubernetes/apm/?tab=helm)
### Datadog Private Locations (Optional)
This component is the Datadog Helm chart for deploying synthetic private locations to EKS. This is useful when you want
Datadog Synthetic Checks to be able to check the health of pods inside your cluster, which is private behind a VPC.
This component is straightforward and requires little to no stack customization.
Use the catalog entry included with the
[datadog-synthetics-private-location documentation](https://docs.cloudposse.com/components/library/aws/datadog-synthetics-private-location)
to get started.
## ECS
### ECS-Service
This primary component should be familiar as it deploys your applications. It also has several variables with hooks to
deploy the Datadog Agent as a sidecar container (useful for Fargate). To get started, simply add the following
variables to your ECS service:
```yaml
datadog_agent_sidecar_enabled: true
datadog_log_method_is_firelens: true
datadog_logging_default_tags_enabled: true
# in addition set your service logging method to awsfirelens
containers:
  service:
    log_configuration:
      logDriver: awsfirelens
      options: {}
```
This will add the Datadog Agent sidecar to your service, apply default tags, and set FireLens as the logging method,
which ships logs directly to Datadog.
### `datadog-private-location-ecs`
This component deploys an ECS task that handles private locations for ECS. It is the counterpart to the EKS version.
To get started, simply vendor it in and use the stack catalog entry in the README.
---
## Datadog Log Filtering
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
This document presents an overview of options regarding filtering logs sent to Datadog.
### Datadog's recommended practice
Datadog positions their capability as [Logging Without Limits™](https://www.datadoghq.com/blog/logging-without-limits). They charge only [$0.10 per GB of log data ingested](https://www.datadoghq.com/pricing/?product=log-management#products) and recommend ingesting all logs, so that data is available on demand. Their more significant charge is based on the number of log events retained and the duration of the retention period (approx $1-2.50 per million events, depending on retention period).
**TL;DR** Ingest all logs, and use dynamic processing on the server side to determine which logs to process.
### Preserve All Data, Just in Case
By sending all log data to Datadog, you get two major benefits.
First, you can use their [Live Tail](https://docs.datadoghq.com/logs/explorer/live_tail/) feature to view the current log stream, after processing but before indexing, so that all log data, including data excluded from indexing by filters, is available for real-time troubleshooting.
Second, all logs ingested can be saved, with tags to support later filtering, to your own S3 bucket at no additional charge from Datadog. These logs can later be re-ingested (["rehydrated"](https://docs.datadoghq.com/logs/log_configuration/rehydrating)) for detailed analysis of an event, creating an historical view for on-demand retrospective analysis.
### Save Money with Server-Side Filtering
The primary log data-processing charge from Datadog is for indexing logs. By filtering logs out of indexing, you save on the (substantial) indexing and retention costs, but, as explained above, the logs remain available via your archive storage for later analysis if needed. An additional benefit of excluding logs via server-side filtering is that the filter can be easily modified or temporarily disabled during an incident to quickly provide additional information, and then restored when the incident is resolved.
- For documentation, see [Log Configuration -> Indexes -> Exclusion filters](https://docs.datadoghq.com/logs/log_configuration/indexes/#exclusion-filters)
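Exclusion filters can also be managed in Terraform via the provider's `datadog_logs_index` resource. A sketch, where the index name, query, and sample rate are assumptions for illustration:

```hcl
resource "datadog_logs_index" "main" {
  name = "main"

  # All logs flow into this index...
  filter {
    query = "*"
  }

  # ...but DEBUG-level logs are excluded from indexing. They remain visible
  # in Live Tail and are still archived; disable this filter during an
  # incident to temporarily index everything.
  exclusion_filter {
    name       = "exclude-debug"
    is_enabled = true
    filter {
      query       = "status:debug"
      sample_rate = 1.0
    }
  }
}
```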
#### Infrastructure as Code for Server-side Filtering
Datadog has moderate support for configuring server-side filtering via Terraform. There is unexplored complexity in the fact that the order of pipelines can be significant, but there is limited support for querying or managing the automatically provisioned integration pipelines. This area requires further investigation.
Meanwhile, configuration via the Datadog web UI is going to be the easiest way to begin in any case.
### Source Filtering
If you are sure you want to filter logs out at the source and not send them to Datadog, you can add filters on any application, and on the log forwarder. Filters are limited to regular expression matches, and can either exclude matches or exclude non-matches (include matches).
#### Application Log Filtering
For Kubernetes pods, you can filter out logs [via annotations](https://docs.datadoghq.com/agent/logs/advanced_log_collection?tab=kubernetes).
> To apply a specific configuration to a given container, Autodiscovery identifies containers by name, NOT image. It tries to match `<CONTAINER_IDENTIFIER>` to `.spec.containers[0].name`, not `.spec.containers[0].image`. To configure Autodiscovery to collect container logs on a given `<CONTAINER_IDENTIFIER>` within your pod, add `log_processing_rules` to your pod's annotations:
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: cardpayment
spec:
  selector:
    matchLabels:
      app: cardpayment
  template:
    metadata:
      annotations:
        ad.datadoghq.com/cardpayment.logs: >-
          [{
            "source": "java",
            "service": "cardpayment",
            "log_processing_rules": [{
              "type": "exclude_at_match",
              "name": "exclude_datadoghq_users",
              "pattern": "\\w+@datadoghq.com"
            }]
          }]
      labels:
        app: cardpayment
        name: cardpayment
    spec:
      containers:
        - name: cardpayment
          image: cardpayment:latest
```
#### Filtering Logs via the Datadog Agent
The Datadog Agent can be configured with processing rules as well, via the `DD_LOGS_CONFIG_PROCESSING_RULES` environment variable. Unlike the pod annotation, which only applies to the specified Docker container, rules configured at the Datadog Agent apply to all services for which the agent forwards logs.
Example (untested) entry in `values.yaml` (not stacks):
```yaml
datadog:
  envDict:
    DD_LOGS_CONFIG_PROCESSING_RULES: >-
      [{
        "type": "exclude_at_match",
        "name": "exclude_datadoghq_users",
        "pattern": "\\w+@datadoghq.com"
      }]
```
#### Filtering Logs via the Log Forwarder
In a similar fashion, the Lambda Log Forwarder can be configured to filter out logs via the `EXCLUDE_AT_MATCH` and `INCLUDE_AT_MATCH` environment variables. Unlike the other options which allow you to provide a list of rules, the Lambda only accepts a single regular expression. Also unlike the other options, backslashes do not need to be escaped in the regex string in the environment variable.
Example (untested) in the stack for `datadog-lambda-forwarder`:
```yaml
vars:
  datadog_forwarder_lambda_environment_variables:
    EXCLUDE_AT_MATCH: '\w+@datadoghq.com'
```
### Custom and Pre-defined Log Enhancement via Pipelines
Datadog supports data transformation [pipelines](https://docs.datadoghq.com/logs/log_configuration/pipelines) to transform and filter logs on the server side. They provide [numerous pre-defined pipelines](https://app.datadoghq.com/logs/pipelines/pipeline/library) (_Datadog login required_) and allow you to create your own as well. Your pipelines can extract standard or custom fields and generate custom metrics. Custom metrics generated by logs are unaffected by index filtering. While not required, pipelines can be used to create additional attributes used in deciding whether or not to index a log entry.
### Additional Resources
- [Log Configuration](https://docs.datadoghq.com/logs/log_configuration/) documentation covers pipelines, processors, log parsers, attributes and aliasing, indexing, archiving, and generating custom metrics.
- Datadog offers free online training courses. [Going Deeper with Logs Processing](https://learn.datadoghq.com/courses/going-deeper-with-logs-processing) covers pipelines, processors, log parsers, and standard attributes for log processing.
---
## How to create a Synthetic and SLO
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
After adding a service, you often need synthetic checks to determine its health and track KPIs. We may also have a business requirement for a specific uptime or KPI value. To meet these needs we create synthetics and SLOs.
## Solution
:::tip
[datadog-synthetics](/components/library/aws/datadog-synthetics/) and datadog-slo components make it easy to deploy new checks via YAML.
:::
We can use the [datadog-synthetics](/components/library/aws/datadog-synthetics/) and datadog-slo components to deploy a synthetic and an SLO (or many) for your new service. To create a new synthetic or SLO, determine which environments it should be deployed to and add the YAML definition for the test. Refer to the component documentation on how to set up and write the YAML configuration.
### Synthetics
Datadog has two types of Synthetic checks: `API` and `browser`. Decide which type you need to test your application using this documentation: [https://docs.datadoghq.com/synthetics/](https://docs.datadoghq.com/synthetics/).
### SLOs
SLOs should be “monitors” of business-impacting KPIs or events. In Datadog they are defined through metrics or a collection of monitors.
If you wish to build an SLO from your Synthetic checks, Datadog exposes a metric, `synthetics.test_runs{*}.as_count()`, which you can use to track the success rate of synthetic checks.
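For example, a metric-based SLO over that metric might be sketched in Terraform like this. The `status:success` tag value and the thresholds are assumptions; verify the tags actually emitted on `synthetics.test_runs` in your account:

```hcl
resource "datadog_service_level_objective" "synthetics_success" {
  name        = "Synthetic check success rate"
  type        = "metric"
  description = "Fraction of synthetic test runs that succeed"

  query {
    # The tag value below is an assumption; confirm it against your metric's tags
    numerator   = "sum:synthetics.test_runs{status:success}.as_count()"
    denominator = "sum:synthetics.test_runs{*}.as_count()"
  }

  thresholds {
    timeframe = "30d"
    target    = 99.9
  }

  tags = ["slo:synthetics"]
}
```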
## References
Datadog SLO Monitor Documentation: [https://docs.datadoghq.com/monitors/service_level_objectives/monitor/#overview](https://docs.datadoghq.com/monitors/service_level_objectives/monitor/#overview)
Datadog SLO Metric Documentation: [https://docs.datadoghq.com/monitors/service_level_objectives/metric/#overview](https://docs.datadoghq.com/monitors/service_level_objectives/metric/#overview)
Datadog Synthetic Documentation: [https://docs.datadoghq.com/synthetics/](https://docs.datadoghq.com/synthetics/)
---
## How to Monitor a new Service
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
When onboarding a new service, we need some monitors to ensure it's in a healthy state.
## Solution
:::tip
**TL;DR** Use Cluster Checks or Datadog Synthetics
:::
Depending on how the service is exposed we can use cluster checks or synthetics.
If a simple check will suffice, we recommend using Cluster Checks:
- [How to Setup Datadog Cluster Checks and Network Monitors for External URLs of Applications](/layers/monitoring/datadog/tutorials/how-to-setup-datadog-cluster-checks-and-network-monitors-for-ext)
- [Datadog Cluster Checks](/components/library/aws/eks/datadog-agent/#adding-cluster-checks)
However, if multiple steps (such as login) are required, we recommend Datadog Synthetics:
- [How to create a Synthetic and SLO](/layers/monitoring/datadog/tutorials/how-to-create-a-synthetic-and-slo)
These checks are in addition to the default Kubernetes checks we have that monitor for crashing pods, `ImagePullBackOff`, and other generic Kubernetes issues.
## Next Steps
After setting up health check monitors we should decide if we need APM metrics for this service.
[https://docs.datadoghq.com/tracing/#send-traces-to-datadog](https://docs.datadoghq.com/tracing/#send-traces-to-datadog)
---
## How to Pass Tags Along to Datadog
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
Tags are a great way to describe who owns which services and who should respond to incidents related to those services. In order to act on this information, we need to ensure it's passed along to Datadog. Depending on whether something runs on Kubernetes or on infrastructure like RDS or an ALB, the way we tag (or label) things will differ.
## Solution
:::tip
**TL;DR**
**AWS**
The Datadog integration for AWS allows us to specify tags to apply to the integration. Since this is per account, we can ensure tags are applied to everything imported from that integration.
**Kubernetes**
The Datadog agent has configuration that allows us to map labels or annotations to specific Datadog tags. Those labels can even be set on the namespace so that they apply to all services within it.
:::
There are two main ways that events get generated:
1. AWS
2. Kubernetes Clusters
### AWS
With several different AWS accounts, we want to make sure that every resource monitored through Datadog has the right tags per account, such as `stage`, `environment`, and possibly `tenant`.
The default tags fetched are documented [here](https://docs.datadoghq.com/integrations/amazon_web_services/?tab=roledelegation#tags). Our [datadog-integration](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/datadog-integration) component also provides a `host_tags` variable, which allows us to specify additional tags per account to help ensure all tags are assigned, such as `tenant`, `stage`, and `environment`.
E.g.
```yaml
components:
  terraform:
    datadog-integration:
      [...]
      vars:
        host_tags:
          - "stage:dev"
          - "tenant:platform"
```
### Kubernetes
With Kubernetes, we need to make sure that the right tags are fetched from either the namespace the service was deployed to or the labels attached to individual pods, since we may want one app monitored by team-a and another app in the same cluster monitored by team-b.
By default, the Datadog agent does not map Kubernetes labels and annotations to Datadog tags. We recommend settling on default labels that you can add to your apps and map to Datadog tags. This allows either namespace-bound tags or app-specific tags to be set. These [values](https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L147-L167) in the Datadog Helm chart can be set to create a mapping to your Datadog account. See Datadog's documentation for further details: [https://docs.datadoghq.com/agent/kubernetes/tag/?tab=containerizedagent](https://docs.datadoghq.com/agent/kubernetes/tag/?tab=containerizedagent).
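For example, the Helm values below sketch how pod labels and annotations could be mapped to Datadog tags. The `team` label, the `example.com/owner` annotation, and the tag names are assumptions for illustration; the mapping keys come from the Datadog Helm chart.

```yaml
datadog:
  # Map pod labels to Datadog tags (label name -> tag name)
  podLabelsAsTags:
    app: kube_app
    team: team # e.g. yields team:team-a or team:team-b
  # Map pod annotations to Datadog tags
  podAnnotationsAsTags:
    example.com/owner: owner # hypothetical annotation key
```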
---
## How to Provision and Tune Datadog Monitors by Stage
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
Datadog is a powerful platform with many ways of being configured. No one size fits all. Some companies choose to run with multiple datadog organizations, while others choose to consolidate in one account. Some companies want to adjust the thresholds at a service level, while others at the stage level. Multiple configurations exist and choosing the right one depends on what you want to accomplish.
## Solution
:::tip
Provision monitors by layer and vary the configurations by stage.
:::
Monitoring happens at every layer of your infrastructure. We should strive to push as many monitors as possible to the lower levels so that the benefits are realized by all higher layers. The manner of monitoring each layer varies. For example, the _Infrastructure_ layer is best monitored using the [datadog-integration](/components/library/aws/datadog-integration/) provisioned per AWS account. While the _Application_ layer may be better suited by provisioning monitors defined within the application repo itself or using custom resources to manage the monitors via Kubernetes.
### Application Monitors
Application monitors should be provisioned to monitor anything not caught by the underlying layers (e.g. application-specific behavior).
See [How to Use Multiple Infrastructure Repositories with Spacelift?](/resources/deprecated/spacelift/tutorials/how-to-use-multiple-infrastructure-repositories-with-spacelift) for one approach to manage monitors using terraform.
See [https://github.com/FairwindsOps/astro](https://github.com/FairwindsOps/astro) by Fairwinds for a Kubernetes approach using an Operator and Custom Resources. Note that the Cloud Posse YAML format for monitors provisioned with Terraform was directly inspired by `astro` and shares almost the same schema.
[https://github.com/FairwindsOps/astro/blob/master/conf-example.yml](https://github.com/FairwindsOps/astro/blob/master/conf-example.yml)
### Platform Monitors by Stage
In general, we should strive for delivering platform-level monitors that apply to all services operating on the platform, rather than one-off monitors for individual applications. Of course, there will always be exceptions - just use this as a guideline.
Here’s an example of setting up alerts for a production tier and a non-production tier.
```yaml
components:
  terraform:
    datadog-monitor-nonprod:
      component: datadog-monitor # Use the shared base component for `datadog-monitor`
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        secrets_store_type: SSM
        datadog_api_secret_key: datadog/dev/datadog_api_key
        datadog_app_secret_key: datadog/dev/datadog_app_key
        datadog_monitors_config_paths:
          - catalog/monitors/nonprod/*.yaml # Specify the path to the configs relative to the base component.
        datadog_synthetics_config_paths: []
    datadog-monitor-prod:
      component: datadog-monitor
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        secrets_store_type: SSM
        datadog_api_secret_key: datadog/prod/datadog_api_key
        datadog_app_secret_key: datadog/prod/datadog_app_key
        datadog_monitors_config_paths:
          - catalog/monitors/prod/*.yaml
        datadog_synthetics_config_paths: []
```
---
## How to Setup Datadog Cluster Checks and Network Monitors for External URLs of Applications
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
## Problem
We often want a lightweight way to ensure our endpoints remain healthy. In Kubernetes, this requires that a load balancer be set up with the right annotations and certs to allow traffic to reach your application. This creates dependencies on cert-manager and other platform tools. The health check of a Kubernetes app always uses the local IP, which doesn't really test your networking. We need a way to test that your apps are still ready to receive requests.
## Solution
Use **Cluster Network Checks** to test your endpoints. These test external URLs, which helps ensure endpoints are healthy.
Cluster checks are configured on the Datadog agent. By specifying agent configuration, you can set these checks to run once per cluster instead of once per node agent. These checks test the validity of externally accessible URLs hosted in Kubernetes.
To get started follow this guide: [https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/?tab=helm](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/?tab=helm)
For Helm, that means ensuring the following values are set:
```yaml
datadog:
  clusterChecks:
    enabled: true
# (...)
clusterAgent:
  enabled: true
```
### External URLs
We then need to set particular Helm values for each installation of the agent in each cluster. The external URL checks must be written into the agent configuration and cannot be dynamically loaded via annotations.
The Cloud Posse Datadog agent component supports Cluster Checks.
Upgrade to the latest version and add your network checks as YAML. This follows the same configuration pattern as monitors: checks are deep-merged and templated, so they can be configured per environment.
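As a sketch, a cluster check for an external URL can be declared in the cluster agent's configuration via Helm values like these; the check name, URL, and tags are placeholders. Marking `cluster_check: true` is what tells the cluster agent to dispatch the check once per cluster.

```yaml
clusterAgent:
  confd:
    http_check.yaml: |-
      cluster_check: true
      init_config:
      instances:
        - name: app-dev # placeholder name
          url: "https://app.example.com/health" # placeholder URL
          tags:
            - "stage:dev"
```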
## Datadog Monitors
After your cluster checks are set up, we need to create monitors for them.
:::info
The HTTP check verifies successful HTTP requests against a URL.
The SSL check verifies the certificate of your URL.
:::
```yaml
https-checks:
  name: "(Network Check) ${stage} - HTTPS Check"
  type: service check
  query: |
    "http.can_connect".over("stage:${stage}").by("instance").last(2).count_by_status()
  message: |
    HTTPS Check failed on {{instance.name}}
    in Stage: {{stage.name}}
  escalation_message: ""
  tags:
    managed-by: Terraform
  notify_no_data: false
  notify_audit: false
  require_full_window: true
  enable_logs_sample: false
  force_delete: true
  include_tags: true
  locked: false
  renotify_interval: 0
  timeout_h: 0
  evaluation_delay: 0
  new_host_delay: 0
  new_group_delay: 0
  no_data_timeframe: 2
  threshold_windows: {}
  thresholds:
    critical: 1
    warning: 1
    ok: 1
```
---
## How to Sign Up for Datadog?
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
You’ve been asked to sign up for DataDog, but you're not quite sure which plans and features you need.
## Solution
Review the [https://www.datadoghq.com/pricing/](https://www.datadoghq.com/pricing/)
:::tip
We recommend the Enterprise Plan if you will need Custom Roles, otherwise the Pro Plan should suffice.
:::
After signing up, if you require child organizations, make sure to request it be enabled on your account by your Datadog Account Manager.
[https://docs.datadoghq.com/account_management/multi_organization/](https://docs.datadoghq.com/account_management/multi_organization/) (see [Decide on How to Restrict Access to Metrics and Logs in Datadog](/layers/monitoring/design-decisions/decide-on-how-to-restrict-access-to-metrics-and-logs-in-datadog))
### Recommended Features
- [https://www.datadoghq.com/blog/private-synthetic-monitoring/](https://www.datadoghq.com/blog/private-synthetic-monitoring/)
- [https://docs.datadoghq.com/logs/](https://docs.datadoghq.com/logs/)
- [https://docs.datadoghq.com/monitors/service_level_objectives/monitor/](https://docs.datadoghq.com/monitors/service_level_objectives/monitor/)
- [https://docs.datadoghq.com/tracing/](https://docs.datadoghq.com/tracing/)
- [https://docs.datadoghq.com/synthetics/](https://docs.datadoghq.com/synthetics/)
- [https://docs.datadoghq.com/real_user_monitoring/](https://docs.datadoghq.com/real_user_monitoring/)
- [https://docs.datadoghq.com/database_monitoring/](https://docs.datadoghq.com/database_monitoring/)
### Pricing Gotchas
- Commit to a certain number of hosts per month for cheaper pricing
- If using spot instances, you may be charged for multiple hosts. For instance, you may use a spot instance for an EKS node, the node is replaced with another spot instance, and Datadog now charges for 2 hosts instead of 1 host for that month.
### Enterprise Plan
Many features are restricted to Enterprise Plans.
:::caution
Custom Roles are an enterprise only feature. [https://docs.datadoghq.com/account_management/rbac/?tab=datadogapplication#custom-roles](https://docs.datadoghq.com/account_management/rbac/?tab=datadogapplication#custom-roles)
:::
- Custom Roles
- Watchdog: Automated insights
- Correlations
- Anomaly Detection
- Forecast Monitoring
- Live Processes
- Advanced Administrative Tools
### Pro Plan
The Pro Plan is the minimum acceptable plan; it adds Alerts, Container Monitoring, SAML, and Custom Metrics.
- Unlimited Alerts
- Unlimited Container Monitoring
  - 10 per host included
  - 20 per host included. Customizable.
- Custom Metrics
- Single Sign-On with SAML
- Outlier Detection
## Related Design Decisions
- [Decide on External Monitoring Solution](/layers/monitoring/design-decisions/decide-on-external-monitoring-solution)
- [Decide on Log Retention and Durability Architecture](/layers/security-and-compliance/design-decisions/decide-on-log-retention-and-durability-architecture)
- [Decide on How to Restrict Access to Metrics and Logs in Datadog](/layers/monitoring/design-decisions/decide-on-how-to-restrict-access-to-metrics-and-logs-in-datadog)
---
## How to use Datadog Metrics for Horizontal Pod Autoscaling (HPA)
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
The native Kubernetes Horizontal Pod Autoscaler has only the ability to scale pods horizontally based on primitive metrics such as CPU and Memory. Your application is more complex and needs to scale based on other dimensions of data, such as queue depth. You have the data, but it’s in Datadog and you need some way for Kubernetes to scale based on complex insights.
## Solution
:::tip
**TL;DR**: Ensure the Datadog Cluster Agent Metrics Server is enabled, enable Kubernetes Integrations Autodiscovery as needed, then create `HorizontalPodAutoscaler` manifests referencing `DatadogMetric` objects or the Datadog query directly.
:::
The Datadog Cluster Agent has a metrics server feature which, as of Kubernetes v1.10, can be used for Horizontal Pod Autoscaling.
This means that metrics automatically collected by Datadog Cluster Agent can be leveraged in HorizontalPodAutoscaler k8s objects.
Take for example the following HorizontalPodAutoscaler manifest, which leverages the `nginx.net.request_per_s` metric automatically collected by Datadog Cluster Agent:
```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginxext
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
    - type: External
      external:
        metricName: nginx.net.request_per_s
        metricSelector:
          matchLabels:
            kube_container_name: nginx
        targetAverageValue: 9
```
This leverages the already-existing metrics collection plane for out-of-the-box Kubernetes metrics (see: [https://docs.datadoghq.com/agent/kubernetes/data_collected/](https://docs.datadoghq.com/agent/kubernetes/data_collected/) for the list of native Kubernetes metrics automatically collected by the Datadog Cluster Agent)
This also opens up the possibility for Horizontal Pod Autoscaling capabilities for homegrown application Deployments, where metrics exported via Prometheus / OpenMetrics will be collected by the Datadog Cluster Agent and leveraged in HorizontalPodAutoscaler definitions.
Lastly, Datadog metrics integrations exist for applications such as Redis, Nginx, which are auto-discoverable via annotations (see: [https://docs.datadoghq.com/getting_started/integrations/](https://docs.datadoghq.com/getting_started/integrations/) )
1. Use `eks-datadog` component (already created by Cloud Posse) and override [https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L551](https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L551) to enable the metrics server ([https://github.com/DataDog/helm-charts/blob/dae884481c5b3c9b67fc8dbd69c944bf3ec955e9/charts/datadog/values.yaml#L1318](https://github.com/DataDog/helm-charts/blob/dae884481c5b3c9b67fc8dbd69c944bf3ec955e9/charts/datadog/values.yaml#L1318) must be set to true as well, which is already the case by default). Also, ensure Prometheus scraping is enabled ([https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L425](https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L425) )
2. For home-grown applications, ensure Prometheus metrics are being exported by the application via an exporter within the Kubernetes Pod.
3. For home grown applications, ensure the appropriate Datadog Prometheus / OpenMetrics collection configuration is present within the Pod annotations in the Deployment manifest ([https://docs.datadoghq.com/agent/kubernetes/prometheus/#simple-metric-collection](https://docs.datadoghq.com/agent/kubernetes/prometheus/#simple-metric-collection) )
4. For off-the-shelf applications, configure the additional values required to insert the Datadog annotations into the Kubernetes manifests templated by the public Helm Chart — for example adding the Redis integration auto-discovery annotations into the Pod manifest ([https://github.com/bitnami/charts/blob/ba9f72954d8f21ff38018ef250477d159378e8f7/bitnami/redis/values.yaml#L275](https://github.com/bitnami/charts/blob/ba9f72954d8f21ff38018ef250477d159378e8f7/bitnami/redis/values.yaml#L275) )
5. Optional but recommended: create `DatadogMetric` objects with the queries you will use to base HPA on.
6. Create appropriate `HorizontalPodAutoscaler` manifests referencing the `DatadogMetric` objects within the same namespace as the HPA (or alternatively the query directly, if not using `DatadogMetric` objects).
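Steps 5 and 6 can be sketched as follows. The namespace, object names, and query are placeholders; the `datadogmetric@<namespace>:<name>` reference format comes from the Datadog Cluster Agent documentation.

```yaml
# A DatadogMetric object holding the query the HPA will scale on
apiVersion: datadoghq.com/v1alpha1
kind: DatadogMetric
metadata:
  name: nginx-requests # placeholder
  namespace: web # placeholder
spec:
  query: avg:nginx.net.request_per_s{kube_container_name:nginx}.rollup(30)
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginxext
  namespace: web
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
    - type: External
      external:
        # Reference the DatadogMetric in the same namespace as the HPA
        metricName: datadogmetric@web:nginx-requests
        targetAverageValue: 9
```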
### References
- [https://www.datadoghq.com/blog/autoscale-kubernetes-datadog/](https://www.datadoghq.com/blog/autoscale-kubernetes-datadog/)
- [https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=kubernetes#pagetitle](https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=kubernetes#pagetitle)
- [https://github.com/DataDog/helm-charts/tree/main/charts/datadog](https://github.com/DataDog/helm-charts/tree/main/charts/datadog)
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These are some additional tutorials that will help you along with managing Datadog.
---
## Decide on Datadog Account Strategy
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Context and Problem Statement
Datadog supports multiple organizations much like AWS (note Datadog calls them “organizations” instead of “accounts”). Using organizations is a way to tightly restrict access to monitoring data but introduces limitations.
Managed Service Providers typically use this feature with customers who should not have access to each other's data. For example, if you have a multi-tenant use case where you need to provide Datadog access to your customers, then this may be the best way to go. In this model, users can be added to the parent organization and/or multiple child organizations and switch between them from the [user account settings menu](https://docs.datadoghq.com/account_management/#managing-your-organizations). The parent organization can view the usage (not the underlying metrics collected) of individual child organizations, allowing them to track trends in usage.
Account settings, such as allow-listed IP addresses, are not inherited by child organizations from their parent organization.
The Multi-organization Account feature is not enabled by default. Contact [Datadog support](https://docs.datadoghq.com/help/) to have it enabled.
[https://docs.datadoghq.com/account_management/multi_organization/](https://docs.datadoghq.com/account_management/multi_organization/)
## Considered Options
### Option 1: Use Single Datadog Organization
:::tip
This is our recommended approach as it’s the easiest to implement, supports tracing across all your accounts and ensures you don’t need to switch organizations to view dashboards.
:::
Cloud Posse will need access to your current production Datadog organization if we go this route.
Also, see: [Decide on How to Restrict Access to Metrics and Logs in Datadog](/layers/monitoring/design-decisions/decide-on-how-to-restrict-access-to-metrics-and-logs-in-datadog)
### Option 2: Use Multiple Datadog Child Organizations
:::danger
We do not recommend this approach because you cannot do cross-account tracing. Datadog alert email notifications do not include the account information which is problematic when using multiple accounts.
:::
:::caution
Child organizations are an optional feature and have to be requested from Datadog support.
:::
[Datadog supports organizations](https://docs.datadoghq.com/account_management/multi_organization/) the way AWS supports organizations of member accounts.
When created, each child organization has default API and app keys.
The original organization can remain AS IS, untouched. For example, if you want to create a partition between your current environments and the new ones we’re provisioning, this would be the way to go.
There is no way to aggregate metrics across organizational boundaries.
#### Child org per AWS account
Each AWS account gets its own Datadog child organization.
e.g.
```
acme-plat-gbl-dev
acme-plat-gbl-prod
```
#### Groups of child orgs (i.e. prod and non-prod)
Singleton and dev/sandbox/staging AWS accounts can be placed under a `non-prod` child org.
Prod would be placed under a `prod` child org.
e.g.
```
acme-plat-prod
acme-plat-nonprod
```
Or perhaps a different grouping?
## Other Considerations
### Key storage
#### Option 1: Shared in Automation Account (Recommended)
In this model, we store the keys in a shared account (e.g. `automation`) using SSM parameters.
For a single organization, the SSM parameters might look like this:
```
/datadog/DD_API_KEY
/datadog/DD_APP_KEY
```
Or, with multiple child organizations, each child org would be differentiated by the SSM parameter path.
```
/datadog//datadog_api_key
/datadog//datadog_app_key
```
#### Option 2: Shared in Each Respective Child Organization
In each AWS account, respective child org creds can be stored like this:
```
/datadog/datadog_api_key
/datadog/datadog_app_key
```
The reason to copy the keys into each AWS account from the shared account is the limited cross-account access to SSM. Various services will only allow access to SSM in the same account as the service, e.g. ECS task definitions.
### Authentication
#### Option 1: Use SAML
This is the recommended long-term solution.
#### Option 2: Invite users
This is the fastest way to get up and running. SAML can always be added later if it’s needed.
## References
- [Decide on How to Restrict Access to Metrics and Logs in Datadog](/layers/monitoring/design-decisions/decide-on-how-to-restrict-access-to-metrics-and-logs-in-datadog)
- [https://docs.datadoghq.com/account_management/multi_organization/](https://docs.datadoghq.com/account_management/multi_organization/)
---
## Decide on Datadog Log Forwarding Requirements
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Context and Problem Statement
Datadog supports log ingestion, but [it can be costly.](https://www.datadoghq.com/pricing/?product=log-management#log-management) Some companies prefer to use in-place tooling like Splunk or Sumologic instead.
## Considered Options
### Option 1 (Recommended) - Use Datadog
:::tip
Our Recommendation is to use Option 1 because you get a single pane of glass view into all operations
:::
### Option 2 - Other
#### Pros
- Tightly integrated with your existing systems
- Possibly lower cost to operate than Datadog
#### Cons
- We cannot assist with the implementation aside from forwarding logs using something like `fluentd` or `fluentbit`
---
## Decide on Datadog Private Locations
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
Datadog Private Locations allow monitoring applications that aren't accessible from the open internet. The feature must be enabled in your Datadog account.
To enable private locations, we need to:
- Enable the feature in each datadog account.
- Deploy the Private Location Docker Image.
We can deploy the private location container to EKS. This leads to another decision: Do we deploy it once with the capability to ping the rest of the clusters' internal addresses, or do we deploy it to every cluster?
## Cost
According to [https://www.datadoghq.com/pricing/?product=synthetic-monitoring#synthetic-monitoring-is-there-any-extra-charge-for-using-private-locations](https://www.datadoghq.com/pricing/?product=synthetic-monitoring#synthetic-monitoring-is-there-any-extra-charge-for-using-private-locations), registering a new Private Location incurs no additional cost; the regular costs for synthetics still apply.
> [**Is there any extra charge for using private locations?**](https://www.datadoghq.com/pricing/?product=synthetic-monitoring#synthetic-monitoring-is-there-any-extra-charge-for-using-private-locations)
> No. There are no additional costs to set up a private location. All test runs to a private location are billed just as they are to a managed location.
## Solution
### Option 1: Enable Private Locations, and deploy to every cluster (Recommended)
:::tip
Our Recommendation is to use Option 1 because it enables private location features and provides a consistent way to scale.
:::
:heavy_plus_sign: Private Location Monitoring
:heavy_plus_sign: One helm chart per cluster via a component installation
:heavy_minus_sign: Must run an additional container per cluster
### Option 2: Don’t Use Private Locations
:heavy_plus_sign: Ever So Slightly Cheaper (We don’t run the container)
:heavy_minus_sign: Monitoring Only Publicly accessible services
## References
- [https://www.datadoghq.com/pricing/?product=synthetic-monitoring#synthetic-monitoring-is-there-any-extra-charge-for-using-private-locations](https://www.datadoghq.com/pricing/?product=synthetic-monitoring#synthetic-monitoring-is-there-any-extra-charge-for-using-private-locations)
- [https://docs.datadoghq.com/synthetics/private_locations/?tab=helmchart#overview](https://docs.datadoghq.com/synthetics/private_locations/?tab=helmchart#overview)
---
## Decide on External Monitoring Solution
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
We’ll want some sort of external monitoring solution, ideally provided by an external third-party SaaS.
## Solution
Use a third-party monitoring solution, ideally one that supports more advanced synthetics as well as behind-the-firewall monitoring checks.
:::tip
We recommend you stick with the synthetic monitoring provided by Datadog which supports both public and private endpoints via the private locations.
:::
| **Service** | **Pricing Page** |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Pingdom** | [https://www.pingdom.com/pricing/](https://www.pingdom.com/pricing/) |
| **UptimeRobot** | [https://uptimerobot.com/pricing/](https://uptimerobot.com/pricing/) |
| **NewRelic Synthetic Monitors** | [https://docs.newrelic.com/docs/synthetics/synthetic-monitoring/using-monitors/intro-synthetic-monitoring/#types-of-synthetic-monitors](https://docs.newrelic.com/docs/synthetics/synthetic-monitoring/using-monitors/intro-synthetic-monitoring/#types-of-synthetic-monitors) |
| **StatusCake** | [https://www.statuscake.com/pricing/](https://www.statuscake.com/pricing/) |
| **Datadog Synthetic Monitoring** | [https://www.datadoghq.com/product/synthetic-monitoring/](https://www.datadoghq.com/product/synthetic-monitoring/) [https://docs.datadoghq.com/getting_started/synthetics/private_location/](https://docs.datadoghq.com/getting_started/synthetics/private_location/) (We have full support for this in our [https://github.com/cloudposse/terraform-datadog-platform](https://github.com/cloudposse/terraform-datadog-platform) module) |
:::caution
Datadog Synthetics can get very pricey
:::
---
## Decide on How to Restrict Access to Metrics and Logs in Datadog
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
Restricting access to metrics and logs is a concern for organizations subject to compliance benchmarks. There are a few ways this can be done, with various tradeoffs.
## Solution
### Option 1: RBAC
With RBAC, Roles can be used to categorize users and define what account permissions those users have (read, modify) on resources. Any user who is associated with one or more roles receives all permissions granted by their associated roles. The more roles a user is associated with, the more access they have within a Datadog account.
[https://docs.datadoghq.com/account_management/rbac/permissions/](https://docs.datadoghq.com/account_management/rbac/permissions/)
#### Built-in Roles (Recommended)
By default, Datadog offers three roles:
- Datadog Admin
- Datadog Standard
- Datadog Read-Only
#### Custom Roles
You can create [custom roles](https://docs.datadoghq.com/account_management/rbac/?tab=datadogapplication#custom-roles) to define a better mapping between your users and their permissions.
:::note
If you use a SAML identity provider, you can integrate it with Datadog for authentication, and you can map identity attributes to Datadog default and custom roles. For more information, see [Single Sign On With SAML](https://docs.datadoghq.com/account_management/saml/).
:::
:::caution
Creating and modifying custom roles is an **opt-in** Enterprise feature. Contact Datadog support to get it enabled for your account.
:::
[https://docs.datadoghq.com/account_management/rbac/?tab=datadogapplication](https://docs.datadoghq.com/account_management/rbac/?tab=datadogapplication)
### Option 2: Datadog Child Organizations
:::danger
We do not recommend this approach because you cannot do cross-account tracing. Datadog alert email notifications do not include the account information which is problematic when using multiple accounts.
:::
See [Decide on Datadog Account Strategy](/layers/monitoring/design-decisions/decide-on-datadog-account-strategy)
## References
- [https://docs.datadoghq.com/account_management/rbac/permissions/](https://docs.datadoghq.com/account_management/rbac/permissions/)
---
## Decide on whether to use Datadog roles
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
Roles can be used to restrict access to dashboards and monitors.
## Solution
### Option 1: Default Roles (Recommended)
:::tip
Stick with the Default roles unless you need the granularity
:::
Datadog ships with the standard roles of Admin, Standard, and Read-Only.
### Option 2: Custom Roles
Custom roles give you the ability to define a more granular access model. To enable custom roles, Datadog support must be contacted.
## References
- https://docs.datadoghq.com/account_management/rbac/
---
## Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions for how you'll gather telemetry and logs for
your applications.
---
## Monitoring FAQ
### How do I add a new monitor?
The easiest way to get started with an IaC monitor is to create it by hand in Datadog! While this may seem counterintuitive, seeing live graphs of your data and being able to view available metrics makes it much easier to figure out what is really important to look at.
Once you have a monitor configured in Datadog, you can export it to JSON, convert it to YAML, and then add it to your catalog of monitors.
Don't forget to replace hardcoded variables with Terraform variables or Datadog variables, as both interpolations will work.
### What is Datadog Interpolation vs Terraform Interpolation?
When looking at a Datadog monitor, anything with `${foo}` is Terraform interpolation; this will be substituted before being sent to Datadog.
`{{bar}}` is Datadog interpolation. This will be used by Datadog when **the event comes in**. This is useful for things like tags where you want to tag the event with the name of the cluster, but you don't know the name of the cluster when you create the monitor.
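To illustrate, here is a hedged sketch of a monitor catalog entry mixing both kinds of interpolation (the monitor name, query, and variable names are hypothetical, not taken from our catalog):

```yaml
# ${...} is rendered by Terraform before the monitor is sent to Datadog;
# {{...}} is passed through and evaluated by Datadog when an event fires.
elb-healthy-hosts:
  name: "(${stage}) ELB healthy host count is low"
  type: metric alert
  query: "avg(last_5m):avg:aws.elb.healthy_host_count{stage:${stage}} by {loadbalancer} < ${elb_healthy_host_threshold}"
  message: |
    {{loadbalancer.name}} in ${stage} has fewer than ${elb_healthy_host_threshold} healthy hosts.
```

Here `${stage}` and `${elb_healthy_host_threshold}` are resolved once at apply time, while `{{loadbalancer.name}}` is resolved per alert event.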
### I'm not receiving the metrics I need, what do I do?
First off, we need to figure out where the metrics **should** be coming from.
If you are trying to receive metrics or logs from an EKS deployment or service, check if the [`datadog-agent`](/components/library/aws/eks/datadog-agent/) is deployed and its logs look healthy.
If you are trying to receive metrics or logs from an ECS service, check that the Datadog agent sidecar container is deployed to your [`ecs-service`](/components/library/aws/ecs-service/).
If you are trying to receive metrics from an AWS Service, first check that the [Datadog AWS Integration](https://app.datadoghq.com/integrations/amazon-web-services) is deployed via [`datadog-integration`](/components/library/aws/datadog-integration/) and that the tile is working. Then check that the Datadog AWS Integration is enabled for the service you are trying to monitor. You can often find the metric under the integration tile's **Metrics** tab. If the integration is enabled and working and you are still not receiving metrics, check that the integration role has the right permissions.
If you are trying to receive logs from an AWS Service, check that the [`datadog-lambda-forwarder`](/components/library/aws/datadog-lambda-forwarder/) is deployed and working.
---
## How to Setup Amazon Managed Grafana
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
The Amazon Managed Grafana is a fully managed service for Grafana used to query, visualize, and alert on your metrics,
logs, and traces. Grafana provides a centralized dashboard where we can add many data sources.
## Deployment
### Collecting Metrics
Once Prometheus is fully functional on its own, we add the HTTP endpoint for Prometheus as a data source for Amazon
Managed Grafana, where we can centralize, visualize, query, and alert on those metrics. The Prometheus workspace is
fully managed by AWS and therefore is not deployed to an EKS cluster.
Deploy the Amazon Managed Prometheus workspace with the `managed-prometheus/workspace` component to each platform
account and/or any account where you'd like to collect metrics. Define a stack catalog as follows:
```yaml
components:
  terraform:
    prometheus:
      metadata:
        component: managed-prometheus/workspace
      vars:
        enabled: true
        name: prometheus
        # Create cross-account role for core-auto to access AMP
        grafana_account_name: core-auto
```
Then import this stack catalog file anywhere you want to deploy Prometheus. For example, all platform accounts. Then
deploy the workspace into each stack:
```console
atmos terraform apply prometheus -s plat-use2-sandbox
atmos terraform apply prometheus -s plat-use2-dev
atmos terraform apply prometheus -s plat-use2-staging
atmos terraform apply prometheus -s plat-use2-prod
```
Once you have the workspace provisioned, add a collector. There are a number of collectors that can be set up with
Prometheus, but we primarily use the Amazon managed collector for EKS, commonly referred to as a "scraper". The scraper
is deployed alongside an EKS cluster and is granted permission to read metrics for that EKS cluster. The scraper then
forwards those metrics to Amazon Managed Prometheus.
Deploy the managed collector with the `eks/prometheus-scraper` component to any account with Prometheus where you'd like
to collect metrics from EKS. Define a stack catalog as follows:
```yaml
components:
  terraform:
    eks/prometheus-scraper:
      vars:
        enabled: true
        name: prometheus-scraper
        prometheus_component_name: prometheus
```
Then import this stack catalog file anywhere you want to deploy the scraper. For example, all platform accounts. Then
deploy the scraper into each stack:
```console
atmos terraform apply eks/prometheus-scraper -s plat-use2-sandbox
atmos terraform apply eks/prometheus-scraper -s plat-use2-dev
atmos terraform apply eks/prometheus-scraper -s plat-use2-staging
atmos terraform apply eks/prometheus-scraper -s plat-use2-prod
```
Finally, after the scraper is deployed, we have to finish the Cluster Role Binding configuration with the EKS cluster's
auth map. Note the `scraper_role_arn` and `clusterrole_username` outputs from the `eks/prometheus-scraper` component and
set them to `rolearn` and `username` respectively with the `map_additional_iam_roles` input for `eks/cluster`.
```yaml
components:
  terraform:
    eks/cluster:
      vars:
        map_additional_iam_roles:
          # this role is used to grant the Prometheus scraper access to this cluster. See eks/prometheus-scraper
          - rolearn: "arn:aws:iam::111111111111:role/AWSServiceRoleForAmazonPrometheusScraper_111111111111111"
            username: "acme-plat-ue2-sandbox-prometheus-scraper"
            groups: []
```
Then reapply each given cluster component.
### Scraping Logs
Logs are collected with Grafana Loki and Promtail.
Grafana Loki is a set of resources that can be combined into a fully featured logging stack. Unlike other logging
systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on a
filesystem.
Promtail, on the other hand, is an agent that ships the contents of local logs to Loki. Promtail scrapes logs from an
EKS cluster, and can also be configured to receive logs on its own via an API server.
Both Loki and Promtail are deployed to EKS via Helm charts. Deploy these with the `eks/loki` and `eks/promtail`
components respectively.
First deploy `eks/loki`. Add the `eks/loki` component and stack catalog as such:
::::tip Internal ALBs
We recommend using an internal ALB for logging services. You must connect to the private network to access the Loki
endpoint.
::::
```yaml
components:
  terraform:
    eks/loki:
      vars:
        enabled: true
        name: loki
        alb_controller_ingress_group_component_name: eks/alb-controller-ingress-group/internal
```
Then deploy the `eks/promtail` component with an example stack catalog as follows:
```yaml
components:
  terraform:
    eks/promtail:
      vars:
        enabled: true
        name: promtail
```
Import both into any account where you have an EKS cluster, and deploy them in order. For example, in `plat-use2-dev`:
```console
atmos terraform apply eks/loki -s plat-use2-dev
atmos terraform apply eks/promtail -s plat-use2-dev
```
### Amazon Managed Grafana Workspace
Now that we have metrics and logs collected in each platform account, we want to create a central "hub" for accessing
that data. That hub is Grafana.
The primary component of Amazon Managed Grafana is the workspace. The Amazon Managed Grafana workspace is the logically
isolated Grafana server, where we can create Grafana dashboards and visualizations to analyze your metrics, logs, and
traces without having to build, package, or deploy any hardware to run your Grafana servers.
Deploy the centralized Amazon Managed Grafana workspace to `core-auto` with the `managed-grafana/workspace` component.
For example:
```yaml
components:
  terraform:
    grafana:
      metadata:
        component: managed-grafana/workspace
      vars:
        enabled: true
        name: grafana
        private_network_access_enabled: true
        sso_role_associations:
          - role: "ADMIN"
            group_ids:
              - "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        # This grafana workspace will be allowed to assume the cross
        # account access role from these prometheus components.
        # Add all plat accounts after deploying prometheus in those accounts
        prometheus_source_accounts:
          - component: prometheus
            tenant: plat
            stage: sandbox
          - component: prometheus
            tenant: plat
            stage: dev
```
Import this component in `core-use2-auto` (your primary region), and then deploy this component with the following:
```bash
atmos terraform apply grafana -s core-use2-auto
```
### Managing Terraform
We have fully Terraformed this Grafana-based monitoring system using the
[Grafana Terraform Provider](https://registry.terraform.io/providers/grafana/grafana/latest). We deploy an API Key after
creating the workspace and then use that API key to create all necessary Grafana sub components, including all data
sources and dashboards.
Create that API key with the `managed-grafana/api-key` component.
```yaml
components:
  terraform:
    grafana/api-key:
      metadata:
        component: managed-grafana/api-key
      vars:
        enabled: true
        grafana_component_name: grafana
```
Then deploy it in the same account as the Grafana workspace.
```console
atmos terraform apply grafana/api-key -s core-use2-auto
```
::::info API Key Rotation
By default, this Grafana API key will expire after 30 days (max). The component is configured to automatically suggest
replacing the API key after that expiration date, but Terraform will need to be reapplied to refresh the key.
::::
Now other Grafana sub components will be able to pull that API key from AWS SSM and use it to access the Grafana
workspace.
### Adding Data Sources
In order to visualize and query metrics and logs, we need to add each as a data source for the centralized Amazon
Managed Grafana workspace. We have created a data source component for each type.
Use the `managed-grafana/data-source/managed-prometheus` component to add the Managed Prometheus workspace as a data
source for Grafana. Add the following stack catalog:
```yaml
components:
  terraform:
    grafana/datasource/defaults:
      metadata:
        component: managed-grafana/data-source/managed-prometheus
        type: abstract
      vars:
        enabled: true
        grafana_component_name: grafana
        grafana_api_key_component_name: grafana/api-key
        prometheus_component_name: prometheus
    grafana/datasource/plat-sandbox-prometheus:
      metadata:
        component: managed-grafana/data-source/managed-prometheus
        inherits:
          - grafana/datasource/defaults
      vars:
        name: plat-sandbox-prometheus
        prometheus_tenant_name: plat
        prometheus_stage_name: sandbox
    grafana/datasource/plat-dev-prometheus:
      metadata:
        component: managed-grafana/data-source/managed-prometheus
        inherits:
          - grafana/datasource/defaults
      vars:
        name: plat-dev-prometheus
        prometheus_tenant_name: plat
        prometheus_stage_name: dev
    # Plus all other Prometheus deployments ...
```
Then deploy the components into the same stack as Grafana. For example `core-use2-auto`:
```console
atmos terraform apply grafana/datasource/plat-sandbox-prometheus -s core-use2-auto
atmos terraform apply grafana/datasource/plat-dev-prometheus -s core-use2-auto
atmos terraform apply grafana/datasource/plat-staging-prometheus -s core-use2-auto
atmos terraform apply grafana/datasource/plat-prod-prometheus -s core-use2-auto
```
Use the `managed-grafana/data-source/loki` component to add Grafana Loki as a data source for Grafana. Add the
following to the same stack catalog you used for the Prometheus data sources:
```yaml
components:
  terraform:
    # ...
    # These use the same default data source component defined for the prometheus
    # data source components, since the inputs and structure are mostly the same
    grafana/datasource/plat-sandbox-loki:
      metadata:
        component: managed-grafana/data-source/loki
        inherits:
          - grafana/datasource/defaults
      vars:
        name: plat-sandbox-loki
        loki_tenant_name: plat
        loki_stage_name: sandbox
    grafana/datasource/plat-dev-loki:
      metadata:
        component: managed-grafana/data-source/loki
        inherits:
          - grafana/datasource/defaults
      vars:
        name: plat-dev-loki
        loki_tenant_name: plat
        loki_stage_name: dev
    # Plus all other Loki deployments ...
```
Then deploy the components into the same stack as Grafana. For example `core-use2-auto`:
```console
atmos terraform apply grafana/datasource/plat-sandbox-loki -s core-use2-auto
atmos terraform apply grafana/datasource/plat-dev-loki -s core-use2-auto
atmos terraform apply grafana/datasource/plat-staging-loki -s core-use2-auto
atmos terraform apply grafana/datasource/plat-prod-loki -s core-use2-auto
```
### Creating Dashboards
We fully support Terraformed Grafana dashboards with the `managed-grafana/dashboard` component. Search the
[Grafana Dashboard Library](https://grafana.com/grafana/dashboards/) to find the dashboards that best suit your
requirements. Once you've found a dashboard, copy its JSON download URL: right-click "Download JSON"
and select "Copy Link Address". This is the dashboard URL we need.
Now create a catalog entry. For example, see the stack catalog below where we create a dashboard _for each_ of our data
sources defined earlier.
When you import a dashboard in the Grafana UI, you can specify the dashboard inputs after importing. For these
components, we instead specify the inputs that we want to replace before creating the dashboard. We do that with
`var.config_input`. This map variable will take a specific string as the map key and replace all occurrences of that
string with the given value. However, to know what that input value is, you will need to open the dashboard JSON and find
any value in `${ }` format; although these can usually be logically determined by the type of the data source prefixed
with `DS_`. For example a Prometheus data source would likely be `${DS_PROMETHEUS}` and a Loki data source would likely
be `${DS_LOKI}`. Be sure to include `${ }` in the map key; we want to replace it entirely in the rendered JSON.
```yaml
components:
  terraform:
    grafana/dashboard/defaults:
      metadata:
        component: managed-grafana/dashboard
        type: abstract
      vars:
        enabled: true
        grafana_component_name: grafana
        grafana_api_key_component_name: grafana/api-key
    grafana/dashboard/plat-sandbox-prometheus:
      metadata:
        component: managed-grafana/dashboard
        inherits:
          - grafana/dashboard/defaults
      vars:
        dashboard_name: acme-plat-ue2-sandbox-prometheus
        dashboard_url: "https://grafana.com/api/dashboards/315/revisions/3/download"
        config_input:
          "${DS_PROMETHEUS}": "acme-plat-ue2-sandbox-prometheus"
    grafana/dashboard/plat-sandbox-loki:
      metadata:
        component: managed-grafana/dashboard
        inherits:
          - grafana/dashboard/defaults
      vars:
        dashboard_name: acme-plat-ue2-sandbox-loki
        dashboard_url: "https://grafana.com/api/dashboards/13639/revisions/2/download"
        config_input:
          "${DS_LOKI}": "acme-plat-ue2-sandbox-loki"
    grafana/dashboard/plat-dev-prometheus:
      metadata:
        component: managed-grafana/dashboard
        inherits:
          - grafana/dashboard/defaults
      vars:
        dashboard_name: acme-plat-ue2-dev-prometheus
        dashboard_url: "https://grafana.com/api/dashboards/315/revisions/3/download"
        config_input:
          "${DS_PROMETHEUS}": "acme-plat-ue2-dev-prometheus"
    grafana/dashboard/plat-dev-loki:
      metadata:
        component: managed-grafana/dashboard
        inherits:
          - grafana/dashboard/defaults
      vars:
        dashboard_name: acme-plat-ue2-dev-loki
        dashboard_url: "https://grafana.com/api/dashboards/13639/revisions/2/download"
        config_input:
          "${DS_LOKI}": "acme-plat-ue2-dev-loki"
    # Plus all other data sources in staging, prod, etc ...
```
Now import this stack file into the same stack as Grafana, for example `core-use2-auto`, and deploy those components:
```console
atmos terraform apply grafana/dashboard/plat-sandbox-prometheus -s core-use2-auto
atmos terraform apply grafana/dashboard/plat-sandbox-loki -s core-use2-auto
atmos terraform apply grafana/dashboard/plat-dev-prometheus -s core-use2-auto
atmos terraform apply grafana/dashboard/plat-dev-loki -s core-use2-auto
```
And that's it! Validate the setup in Grafana. Open the Grafana workspace, select the menu in the top left, and click
"Dashboards". Choose any of your newly deployed dashboards.
## References
- [AWS Documentation on Managed Collectors for EKS](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-collector-how-to.html)
- [AWS Documentation on Connecting Grafana Data sources via a private network (VPC)](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-configure-vpc.html)
- [AWS FAQ on using VPC with Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-configure-vpc-faq.html)
- [Grafana Terraform Provider](https://registry.terraform.io/providers/grafana/grafana/latest)
- [Grafana Loki Setup Docs](https://grafana.com/docs/loki/latest/setup/install/)
- [Grafana Dashboard Library](https://grafana.com/grafana/dashboards/)
---
## Setup AWS Managed Grafana
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import Admonition from '@theme/Admonition'
import Note from '@site/src/components/Note'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
Grafana is built around the Amazon managed services for Grafana and Prometheus.
At this time, we have implemented only the EKS integrations with AWS Managed Grafana. We are open to adding ECS support. These instructions include the steps for integrating EKS with AWS Managed Grafana.
## Vendor Components
Vendor all required components
## Deploy Grafana and the Prometheus Scraper
Please see [How to Setup Grafana](/layers/monitoring/grafana/) for in depth documentation.
However, you can find the summary of steps as Atmos Workflows here. You can choose to run these workflows
one-by-one, or run them all together with the following:
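The workflow definitions themselves live in your `stacks/workflows` directory. As a rough, hypothetical sketch of what such a workflow file might contain (the workflow name, file path, and stack names below are assumptions, not the actual file):

```yaml
# stacks/workflows/grafana.yaml (hypothetical path and step list)
workflows:
  deploy/grafana:
    description: Deploy Prometheus, the EKS scraper, Loki, Promtail, and Grafana
    steps:
      - command: terraform deploy prometheus -s plat-use1-dev
      - command: terraform deploy eks/prometheus-scraper -s plat-use1-dev
      - command: terraform deploy eks/loki -s plat-use1-dev
      - command: terraform deploy eks/promtail -s plat-use1-dev
      - command: terraform deploy grafana -s core-use1-auto
```

With a file like that in place, `atmos workflow deploy/grafana -f grafana` would run every step in order.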
## Grant the Prometheus Scraper Access to EKS
After deploying the `eks/prometheus-scraper` component, you will need to reapply the `eks/cluster` component with an
update to `var.map_additional_iam_roles`.
```console
atmos terraform output eks/prometheus-scraper -s plat-use1-dev
atmos terraform output eks/prometheus-scraper -s plat-use1-staging
atmos terraform output eks/prometheus-scraper -s plat-use1-prod
```
Note the `scraper_role_arn` and `clusterrole_username` outputs and set them to `rolearn` and `username` respectively
with the `map_additional_iam_roles` input for `eks/cluster`.
```yaml
# stacks/orgs/acme/plat/STAGE/us-east-1/eks.yaml
components:
  terraform:
    eks/cluster:
      vars:
        map_additional_iam_roles:
          # this role is used to grant the Prometheus scraper access to this cluster. See eks/prometheus-scraper
          - rolearn: "arn:aws:iam::111111111111:role/AWSServiceRoleForAmazonPrometheusScraper_111111111111111"
            username: "acme-plat-ue2-sandbox-prometheus-scraper"
            groups: []
```
## Reapply EKS Cluster
Then reapply `eks/cluster`:
```console
atmos terraform apply eks/cluster -s plat-use1-dev
atmos terraform apply eks/cluster -s plat-use1-staging
atmos terraform apply eks/cluster -s plat-use1-prod
```
## Accessing Grafana
We would prefer to have a custom URL for the provisioned Grafana workspace, but at the moment it's not supported
natively and implementation would be non-trivial. We will continue to monitor that issue and consider alternatives, such
as using Cloudfront.
[Issue #6: Support for Custom Domains](https://github.com/aws/amazon-managed-grafana-roadmap/issues/6)
You can access Grafana from the Grafana workspace endpoint that is output by the `grafana` component:
```console
atmos terraform output grafana -s core-use1-auto
```
Or you can open your AWS Single-Sign-On page, navigate to the "Applications" tab, and then select "Amazon Grafana".
[https://d-1111aa1a11.awsapps.com/start/](https://d-1111aa1a11.awsapps.com/start/)
---
## Implement Telemetry
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ReactPlayer from 'react-player';
import CategoryList from '@site/src/components/CategoryList';
Monitoring is a key component of any production system. It is important to have visibility into the health of your system and to be able to react to issues before they become problems.
## The Problem
Monitoring is a difficult problem to solve. There are many different tools and services that can be used to monitor your system. It is important to have a consistent approach to monitoring that can be applied across all of your systems.
There is often a tradeoff between the cost of monitoring and the value it provides. It is important to have a monitoring solution that is cost effective and provides value to your organization. Another problem is when monitoring is configured incorrectly and causes more problems than it solves, usually seen through ignored alerts or no alerts at all.
## Our Solution
We have developed a set of Terraform modules that can be used to deploy a monitoring solution for your system. These modules are designed to be used with Datadog. Datadog is a monitoring service that provides a wide range of features and integrations with other services.
We have broken down the monitoring solution into several components to make it easier to deploy and manage.
### Implementation
#### Foundation
- [`datadog-configuration`](/components/library/aws/datadog-credentials/): This is a **utility** component. It expects Datadog API and app keys to be stored in SSM or ASM, and it copies those keys into the SSM/ASM of each account the component is deployed to. This is useful for several reasons:
1. Keys can be easily rotated from one place
2. Keys can be set for a group and then copied to all accounts in that group, meaning you could have one pair of API and app keys for production accounts and another pair for non-production accounts.
This component is **required** for all other components to work, as it also stores information about your Datadog account that other components will use, such as your Datadog site URL, and it provides an easy interface for other components to configure the Datadog provider.
- [`datadog-integration`](/components/library/aws/datadog-integration/): This is the core component binding Datadog to AWS. It is deployed to every account and sets up all the Datadog integration tiles with AWS. This is what provides the majority of your AWS metrics to Datadog!
- [`datadog-lambda-forwarder`](/components/library/aws/datadog-lambda-forwarder/): This component is an AWS Lambda function that ships logs from AWS to Datadog. Details of it can be found [here](https://docs.datadoghq.com/logs/guide/forwarder/?tab=terraform)
- [`datadog-monitor`](/components/library/aws/datadog-monitor/): This component deploys monitors via YAML configuration. When you [vendor](https://atmos.tools/cli/commands/vendor/usage/#docusaurus_skipToContent_fallback) in this component, you will find [our catalog of pre-built monitors](https://github.com/cloudposse/terraform-datadog-platform/tree/main/catalog/monitors). We deploy this component to every account. Our monitors use Terraform interpolation for their thresholds, which allows you to use the same monitors with different thresholds per stage through familiar Atmos inheritance.
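To sketch how that per-stage inheritance might look (the parameter name and threshold values below are hypothetical; check the vendored monitor catalog for the real variable names):

```yaml
# stacks/catalog/datadog-monitor.yaml — defaults shared by every stage (hypothetical values)
components:
  terraform:
    datadog-monitor:
      vars:
        enabled: true
        datadog_monitors_config_parameters:
          elb_5xx_threshold: 50
---
# prod stack — import the catalog, then override only what differs
components:
  terraform:
    datadog-monitor:
      vars:
        datadog_monitors_config_parameters:
          elb_5xx_threshold: 10
```

Every stage deploys the same monitor definitions; only the interpolated parameters change.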
#### EKS
- [`datadog-agent`](/components/library/aws/eks/datadog-agent/): This component deploys the Datadog agent on EKS, along with the [Datadog Cluster Agent](https://docs.datadoghq.com/agent/cluster_agent/). The agent is a DaemonSet that runs on every node in your cluster (with the exception of Fargate (serverless) nodes). This component handles sending Kubernetes metrics, logs, and events to Datadog. It can also deploy [Datadog Cluster Checks](https://docs.datadoghq.com/containers/cluster_agent/clusterchecks/), which are a way to run checks on your cluster from within the cluster itself; this is often a cheaper way than [Synthetic Monitoring](https://docs.datadoghq.com/synthetics/) to monitor services in your cluster.
- [`datadog-private-location-eks`](/components/library/aws/datadog-synthetics-private-location/): This component deploys a private location for [Synthetic Monitoring](https://docs.datadoghq.com/synthetics/) to your EKS cluster. This allows synthetic checks to run even inside a private cluster.
#### ECS
- [`ecs-service`](/components/library/aws/ecs-service/): This component contains variables that enable Datadog integration with ECS. For more information on how to deploy a service to ecs, see the [ecs-service](/components/library/aws/ecs-service/) component, specifically the [`datadog_agent_sidecar_enabled`](/components/library/aws/ecs-service/#input_datadog_agent_sidecar_enabled) variable.
- [`datadog-private-location-ecs`](/components/library/aws/datadog-private-location-ecs/): This component deploys a private location for [Synthetic Monitoring](https://docs.datadoghq.com/synthetics/) to your ECS cluster. This allows synthetic checks to run against your ECS cluster.
#### Additional
- [`datadog-logs-archive`](/components/library/aws/datadog-logs-archive/): This component creates a single [log archive](https://docs.datadoghq.com/logs/log_configuration/archives/?tab=awss3) pipeline for each AWS account. Using this component you can set up [multiple logs archive rules](https://docs.datadoghq.com/logs/log_configuration/archives/?tab=awss3#multiple-archives).
- [`datadog-synthetics`](/components/library/aws/datadog-synthetics/): This component deploys Datadog synthetic checks, which are external health checks for your services, similar to [Pingdom](https://www.pingdom.com/).
## References
---
## Accessing the Network
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
Lastly, configure the VPN. The VPN will be provisioned in the `network` account and will leverage Transit Gateway to
connect various VPCs to the VPN client. VPN deployment consists of three parts: authentication, component deployment,
and client setup.
## Set up authentication
First, set up authentication.
- We recommend [using AWS IAM Identity Center to authenticate users](https://aws.amazon.com/blogs/security/authenticate-aws-client-vpn-users-with-aws-single-sign-on/).
- Follow only the first section included in the linked AWS blog, _Create and configure the Client VPN SAML applications in AWS IAM Identity Center_, through downloading the _AWS IAM Identity Center SAML metadata_.
- Save that file under the `ec2-client-vpn` component (`components/terraform/ec2-client-vpn`) as "aws-sso-saml-app.xml". This should match the given document name for `saml_metadata_document` in the `ec2-client-vpn` stack catalog (`stacks/catalog/ec2-client-vpn.yaml`)
## Deploy the VPN
Next, deploy the `ec2-client-vpn` component. This is done by running the following:
Depending on the given network configuration, you may run out of available Client VPN routes.
That error will look something like this:
```console
╷
│ Error: error creating EC2 Client VPN Route (cvpn-endpoint-0b7487fc0043a3df0,subnet-0b88f999578fd2340,10.101.96.0/19): ClientVpnRouteLimitExceeded: Limit exceeded
│ status code: 400, request id: 779f977b-2b31-490a-a4b1-2c8cb1da068d
│
│ with module.ec2_client_vpn.aws_ec2_client_vpn_route.default[40],
│ on .terraform/modules/ec2_client_vpn/main.tf line 245, in resource "aws_ec2_client_vpn_route" "default":
│ 245: resource "aws_ec2_client_vpn_route" "default" {
│
```
If this happens, you'll need to [increase the number of routes](https://console.aws.amazon.com/servicequotas/home/services/ec2/quotas/L-401D78F7) allowed for the Client VPN endpoint. That said, you should already have a quota increase request ready for this in
`stacks/orgs/acme/core/network/global-region/baseline.yaml`.
You can apply that quota using `atmos terraform apply account-quotas -s core-gbl-network`.
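The shape of that quota request in `baseline.yaml` is roughly as follows. This is a hedged sketch: verify the field names against the `account-quotas` component's documented inputs, and look up the current quota code for "Routes per Client VPN endpoint" in the Service Quotas console.

```yaml
# stacks/orgs/acme/core/network/global-region/baseline.yaml (illustrative excerpt)
components:
  terraform:
    account-quotas:
      vars:
        quotas:
          client-vpn-routes:
            service_code: ec2
            quota_name: "Routes per Client VPN endpoint"
            value: 20   # hypothetical target; size this to your route count
```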
## Download & Install VPN Client
- Finally, set up the AWS VPN Client to access the VPN.
- [Download the AWS VPN Client](https://aws.amazon.com/vpn/client-vpn-download/), or install it by running `brew install aws-client-vpn` in a regular terminal. Follow the [AWS Documentation](https://docs.aws.amazon.com/vpn/latest/clientvpn-user/connect-aws-client-vpn-connect.html) to complete the VPN setup.
## Configure VPN Client
The Atmos Workflow `deploy/vpn` creates a local VPN configuration file, `acme-core.ovpn`, in the aws-config directory of `rootfs/` (`rootfs/etc/aws-config/acme-core.ovpn`).
If it doesn't exist, create this file using the `client_configuration` output of the `ec2-client-vpn` component, and commit it to the repo under `rootfs/etc/aws-config/acme-core.ovpn` for future reference.
```shell
atmos terraform output ec2-client-vpn -s core-use1-network
```
## Connect to VPN
Once you configure the AWS VPN Client, set the file as the config and connect. From there you should be able to access resources on any subnet in the VPCs you've provisioned.
### Optional: Bastion hosts
If you'd like to set up bastion hosts, you can do so by running the following. This would let you further evaluate the VPN.
By default, we deploy the bastion to all accounts connected to Transit Gateway.
---
## Establish Connectivity with Transit Gateway
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
AWS Transit Gateway (TGW) provides a central hub for connecting VPCs across multiple AWS accounts. This guide explains the Transit Gateway components, their architecture, and how to deploy them to establish network connectivity.
## Components Overview
The Transit Gateway solution consists of several components that work together:
- **`tgw/hub`**: Creates the Transit Gateway in the network account (`core-network`). This is the central routing hub that all other VPCs connect to.
- **`tgw/attachment`**: Creates and manages Transit Gateway VPC attachments in connected accounts. Each account with a VPC needs an attachment to connect to the Transit Gateway.
- **`tgw/routes`**: Manages Transit Gateway route tables in the network account. Controls how traffic flows between attachments.
- **`vpc-routes`**: Configures VPC route tables in connected accounts to route traffic through the Transit Gateway. In stacks, this is typically configured as `vpc/routes/private`.
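A hedged sketch of how these components might appear in stacks (a minimal outline; variable names beyond `enabled` are omitted because each component's real inputs should be taken from its documentation):

```yaml
# core-network stack — the hub
components:
  terraform:
    tgw/hub:
      vars:
        enabled: true
---
# a connected account's stack, e.g. plat-dev — the spokes
components:
  terraform:
    tgw/attachment:
      vars:
        enabled: true
    vpc/routes/private:
      metadata:
        component: vpc-routes   # the generic component behind this stack name
      vars:
        enabled: true
```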
## Architecture
The Transit Gateway components work together to create a hub-and-spoke network topology:
1. The Transit Gateway is created in the `core-network` account (`tgw/hub`)
1. VPCs in other accounts attach to the Transit Gateway (`tgw/attachment`)
1. Transit Gateway route tables control routing between attachments (`tgw/routes`)
1. VPC route tables in connected accounts direct traffic through the Transit Gateway (`vpc/routes/private`)
```mermaid
graph TD
subgraph core-network
TGW[Transit Gateway]
TGW_RT[TGW Route Tables]
VPC_NET[Network VPC]
end
subgraph core-auto
VPC_AUTO[Auto VPC]
ATT_AUTO[TGW Attachment]
end
subgraph plat-dev
VPC_DEV[Dev VPC]
ATT_DEV[TGW Attachment]
end
subgraph plat-staging
VPC_STG[Staging VPC]
ATT_STG[TGW Attachment]
end
subgraph plat-prod
VPC_PROD[Prod VPC]
ATT_PROD[TGW Attachment]
end
VPC_NET <--> TGW
ATT_AUTO <--> TGW
ATT_DEV <--> TGW
ATT_STG <--> TGW
ATT_PROD <--> TGW
TGW <--> TGW_RT
VPC_AUTO <--> ATT_AUTO
VPC_DEV <--> ATT_DEV
VPC_STG <--> ATT_STG
VPC_PROD <--> ATT_PROD
```
### Connected Accounts
In the reference architecture, the following accounts connect to the Transit Gateway:
1. **`core-network`** — The hub account where the Transit Gateway is deployed
1. **`core-auto`** — Automation account for self-hosted GitHub runners
1. **`plat-dev`** — Development environment
1. **`plat-staging`** — Staging environment
1. **`plat-prod`** — Production environment
1. **`plat-sandbox`** — Optional sandbox environment
## Deployment
Deploy the Transit Gateway infrastructure using the network workflow:
This workflow deploys the components in the correct order:
1. Creates the Transit Gateway hub in `core-network`
1. Creates VPC attachments in each connected account
1. Configures Transit Gateway route tables
1. Updates VPC route tables in connected accounts
## References
1. [AWS Transit Gateway Documentation](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html)
1. [Transit Gateway Peering](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-peering.html)
1. [tgw/hub Component](/components/library/aws/tgw/hub/)
1. [tgw/attachment Component](/components/library/aws/tgw/attachment/)
---
## Deploying the Network
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
The first step in deploying the network is to deploy the VPCs in each region. This will create the necessary foundation for the platform to run and includes the VPC, subnets, route tables, security groups, and VPC endpoints.
:::tip
Up to this point, we've used the `SuperAdmin` user for administrative access. With the Identity layer now deployed, switch to using your designated AWS Team credentials for local access and deployments. Using roles rather than users provides better security through temporary credentials and easier access management. Unless otherwise requested, assume all future deployments use your AWS Team.
Please see [How to Log into AWS](/layers/identity/how-to-log-into-aws/)
:::
## Vendor the Networking components
First, vendor the networking components by running the following:
## Deploy all VPCs
Deploy all the VPCs in every configured region by running the following command:
## Decommission the default VPCs
Once all VPCs are deployed, decommission the default VPC in each region by running the following command from within the Geodesic shell and while connected to your `core-identity` AWS profile:
```bash
wipe-default-vpcs
```
---
## Decide on AWS Account VPC Subnet CIDR Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
We need to devise a subnet allocation scheme that tolerates multiple accounts operating in multiple regions, without
conflicting with any other ranges that may need to be peered in the future.
## General Considerations
- Having unique, non-overlapping VPC CIDRs makes connecting clusters to each other much easier
- Each VPC must be subdivided into several non-overlapping subnet ranges to provide public and private address spaces
across multiple availability zones
- **ALBs need a minimum of 2 subnets allocated**
### EKS Considerations
- Using Amazon’s CNI, each Kubernetes pod gets its own IP in the subnet, and additional IPs are reserved so they are
immediately available for new pods when they are launched
- You will need a lot more IPs than you anticipate due to performance optimizations in how CNIs are managed by EKS
[https://betterprogramming.pub/amazon-eks-is-eating-my-ips-e18ea057e045](https://betterprogramming.pub/amazon-eks-is-eating-my-ips-e18ea057e045)
[https://medium.com/codex/kubernetes-cluster-running-out-of-ip-addresses-on-aws-eks-c7b8e5dd8606](https://medium.com/codex/kubernetes-cluster-running-out-of-ip-addresses-on-aws-eks-c7b8e5dd8606)
- AWS supports the `eksctl` tool (we do not). Their default recommendation is:
> The default VPC CIDR used by `eksctl` is `192.168.0.0/16`. It is divided into 8 (`/19`) subnets (3 private, 3 public
> & 2 reserved).
- EKS clusters limit the number of pods based on the number of
[ENIs available per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI).
- [Kubernetes has limits](https://kubernetes.io/docs/setup/best-practices/cluster-large/), but those are pretty high.
**The reality is most clusters operate at a much smaller scale.** At v1.18, Kubernetes supports clusters with up to
5000 nodes. More specifically, we support configurations that meet _all_ of the following criteria:
- No more than 5000 nodes
- No more than 150000 total pods
- No more than 300000 total containers
- No more than 100 pods per node
- EKS [will use either](https://github.com/aws/containers-roadmap/issues/216#issue-423314258) 10.100.0.0/16 or
172.20.0.0/16 for cluster Services, so avoiding those ranges will avoid some problems with inter-cluster routing
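A quick way to guard against this last point is to check candidate VPC CIDRs against the default EKS Service ranges programmatically. The sketch below uses Python's standard `ipaddress` module; the candidate CIDRs are hypothetical examples, not prescribed values.

```python
import ipaddress

# The two ranges EKS may use for cluster Services by default
EKS_SERVICE_RANGES = [
    ipaddress.ip_network("10.100.0.0/16"),
    ipaddress.ip_network("172.20.0.0/16"),
]

def conflicts_with_eks_services(cidr: str) -> bool:
    """Return True if the given VPC CIDR overlaps a default EKS Service range."""
    vpc = ipaddress.ip_network(cidr)
    return any(vpc.overlaps(r) for r in EKS_SERVICE_RANGES)

print(conflicts_with_eks_services("10.101.0.0/16"))   # False — safe to use
print(conflicts_with_eks_services("10.100.64.0/18"))  # True — inside 10.100.0.0/16
```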
:::caution Use CIDR ranges smaller than a `/19` at your own risk.
Cloud Posse does not take responsibility for any EKS cluster issues related to underprovisioning CIDR ranges.
:::
## Our standard recommendation
- Each account gets its own `/16` (65,534 usable IPs) (or a `/15` = 2 x `/16` for more than 4 total regions),
  consecutively numbered, starting with `10.101.0.0`
- Each region in an account gets 1 x `/18` (16,382 usable IPs), usually allocated as 1 or 2 countries/legislative areas
per account, each with 2 regions for DR/failover
- Each region allocates 6 x `/21` (2,046 usable IPs) subnets (3 AZ \* (public + private)) for EKS.
- Any additional “single purpose” subnets in a region should be `/24` (254 usable IPs)
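The recommendation above can be sketched with Python's standard `ipaddress` module. The account supernet and region assignment below are hypothetical examples following the `10.101.0.0` starting point.

```python
import ipaddress

# First account's supernet per the recommendation
account = ipaddress.ip_network("10.101.0.0/16")

# Each region gets a /18 out of the account's /16 (at most four regions per /16)
regions = list(account.subnets(new_prefix=18))
region_one = regions[0]  # hypothetically assigned to the primary region

# Within the region, carve 6 x /21 for EKS (3 AZs x (public + private))
eks_subnets = list(region_one.subnets(new_prefix=21))[:6]

print(region_one)        # 10.101.0.0/18
for subnet in eks_subnets:
    print(subnet)        # 10.101.0.0/21 through 10.101.40.0/21
```

The remaining two `/21` ranges in each region's `/18` stay free for additional "single purpose" `/24` subnets.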
Further reading:
- [https://aws.amazon.com/blogs/containers/eks-vpc-routable-ip-address-conservation/](https://aws.amazon.com/blogs/containers/eks-vpc-routable-ip-address-conservation/)
- [https://medium.com/@jeremy.i.cowan/custom-networking-with-the-aws-vpc-cni-plug-in-c6eebb105220](https://medium.com/@jeremy.i.cowan/custom-networking-with-the-aws-vpc-cni-plug-in-c6eebb105220)
- [https://tidalmigrations.com/subnet-builder/](https://tidalmigrations.com/subnet-builder/)
### CIDR Subnet Table
| **Subnet Mask** | **CIDR Prefix** | **Total IP Addresses** | **Usable IP Addresses** | **Number of /24 networks** |
| --------------- | --------------- | ---------------------- | ----------------------- | -------------------------- |
| 255.255.255.255 | /32 | 1 | 1 | 1/256th |
| 255.255.255.254 | /31 | 2 | 2\* | 1/128th |
| 255.255.255.252 | /30 | 4 | 2 | 1/64th |
| 255.255.255.248 | /29 | 8 | 6 | 1/32nd |
| 255.255.255.240 | /28 | 16 | 14 | 1/16th |
| 255.255.255.224 | /27 | 32 | 30 | 1/8th |
| 255.255.255.192 | /26 | 64 | 62 | 1/4th |
| 255.255.255.128 | /25 | 128 | 126 | 1 half |
| 255.255.255.0 | /24 | 256 | 254 | 1 |
| 255.255.254.0 | /23 | 512 | 510 | 2 |
| 255.255.252.0 | /22 | 1,024 | 1,022 | 4 |
| 255.255.248.0 | /21 | 2,048 | 2,046 | 8 |
| 255.255.240.0 | /20 | 4,096 | 4,094 | 16 |
| 255.255.224.0 | /19 | 8,192 | 8,190 | 32 |
| 255.255.192.0 | /18 | 16,384 | 16,382 | 64 |
| 255.255.128.0 | /17 | 32,768 | 32,766 | 128 |
| 255.255.0.0 | /16 | 65,536 | 65,534 | 256 |
| 255.254.0.0 | /15 | 131,072 | 131,070 | 512 |
| 255.252.0.0 | /14 | 262,144 | 262,142 | 1024 |
| 255.248.0.0 | /13 | 524,288 | 524,286 | 2048 |
| 255.240.0.0 | /12 | 1,048,576 | 1,048,574 | 4096 |
| 255.224.0.0 | /11 | 2,097,152 | 2,097,150 | 8192 |
| 255.192.0.0 | /10 | 4,194,304 | 4,194,302 | 16,384 |
| 255.128.0.0 | /9 | 8,388,608 | 8,388,606 | 32,768 |
| 255.0.0.0 | /8 | 16,777,216 | 16,777,214 | 65,536 |
| 254.0.0.0 | /7 | 33,554,432 | 33,554,430 | 131,072 |
| 252.0.0.0 | /6 | 67,108,864 | 67,108,862 | 262,144 |
| 248.0.0.0 | /5 | 134,217,728 | 134,217,726 | 1,048,576 |
| 240.0.0.0 | /4 | 268,435,456 | 268,435,454 | 2,097,152 |
| 224.0.0.0 | /3 | 536,870,912 | 536,870,910 | 4,194,304 |
| 192.0.0.0 | /2 | 1,073,741,824 | 1,073,741,822 | 8,388,608 |
| 128.0.0.0 | /1 | 2,147,483,648 | 2,147,483,646 | 16,777,216 |
| 0.0.0.0 | /0 | 4,294,967,296 | 4,294,967,294 | 33,554,432 |
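The usable-IP column in the table above can be spot-checked with Python's `ipaddress` module: usable addresses are the total minus the network and broadcast addresses.

```python
import ipaddress

# Verify a few rows of the CIDR subnet table
for prefix, expected_usable in [(21, 2046), (19, 8190), (16, 65534)]:
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    usable = net.num_addresses - 2  # subtract network and broadcast addresses
    assert usable == expected_usable
    print(f"/{prefix}: {net.num_addresses} total, {usable} usable")
```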
---
## Decide on CIDR Allocations
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Please also read the [design decision](/layers/network/design-decisions/decide-on-aws-account-vpc-subnet-cidr-strategy)
for more information.
---
## Decide on Client VPN Options
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
You need to remotely access resources that reside in a private VPC. Different teams or individuals need access to
different resources.
## Solution
Use AWS Client VPN for remote user access.
## Considered Options
Each option below can be integrated with AWS SSO.
### Option 1: Deploy 1 Client VPN in the Network Account
:::tip Our Recommendation is to use Option 1 for customers who do not need fine-grained network access controls. Anyone
on the VPN should have access to all network services via the Transit Gateway.
:::
Ideal for companies where one team will require access to all accounts and there are no plans to introduce access for
other teams.
#### Pros
- Anyone on the VPN has access to all network services via the Transit Gateway
- Least expensive to operate
- No need to switch networks once connected to VPN
#### Cons
- Total access to every account
### Option 2: Deploy Multiple Client VPNs Depending on Network Segments in the Network Account
Ideal for companies where certain teams require segmented access to multiple accounts. We define these accounts as a
segment.
#### Pros
- More access control options
#### Cons
- More expensive to operate
- Deciding on how to segment the network can be a complex decision
- Requires switching VPNs when accessing another account. This is more disruptive to developer workflows
### Option 3: Deploy Client VPN(s) Directly in the Accounts Needed
This is a requirement when you know you need very granular access controls that restrict access to certain accounts.
#### Pros
- Highest level of access control to each account.
#### Cons
- Most expensive to operate and grows as more accounts are added
- Requires switching VPNs when accessing another account. This is the most disruptive path to developer workflows.
---
## Decide on DNS Registrar
import Intro from "@site/src/components/Intro";
When setting up DNS for the reference architecture, you need to decide where to register your domains. This is separate from where DNS is hosted—Route 53 will host your zones, but the domain registration (the purchase and ownership) can be done through various registrars.
We recommend registering [dedicated vended domains per stage](/layers/network/design-decisions/decide-on-vanity-branded-domain), such as `acme-prod.com`, `acme-staging.com`, and `acme-dev.com`. This is practical guidance for companies with a simple dev/staging/prod SDLC. If you have a large number of stages or accounts, adapt accordingly.
## Option 1: Use AWS Route 53 Registrar (Recommended)
Register domains directly through AWS Route 53 in the `dns` account. This is our recommended approach for most organizations.
- **Fully AWS-native** — Domain registration and DNS hosting in one place
- **Consolidated billing** — Domain costs appear on your AWS bill (requires a credit card on the account)
- **No lock-in** — Standard domain transfers available if you change your mind later
- **No downsides** — There are no real trade-offs compared to third-party registrars
The `dns` account (see [Decide on AWS Account Flavors and Organizational Units](/layers/accounts/design-decisions/decide-on-aws-account-flavors-and-organizational-units)) acts as your centralized registrar. This keeps domain ownership consolidated and simplifies management.
### Domain Registration vs DNS Delegation
Domain registration (ownership) is different from DNS (nameserver delegation). Organizations often maintain domain portfolios for multiple purposes: marketing domains, SEO, trademark defense, and branded properties. These portfolios can get large.
Once you own a domain, the NS records can be delegated to any account:
- **[Vanity domains](/layers/network/design-decisions/decide-on-vanity-branded-domain)** — We recommend delegating each stage's TLD (e.g., `acme-prod.com`, `acme-staging.com`) to the corresponding account. This lets each environment manage its own TLD, avoiding cross-account complexity.
- **[Service discovery domains](/layers/network/design-decisions/decide-on-service-discovery-domain)** — There's typically one domain (e.g., `acme.net`) with zones delegated to each member account (e.g., `prod.acme.net`, `staging.acme.net`). This avoids ambiguity of ownership while enabling account-level DNS management.
Centralizing registration in the `dns` account while delegating NS records provides clear ownership without operational bottlenecks. It also creates clear IAM boundaries—you can grant domain management permissions (e.g., to legal, procurement) in the `dns` account while day-to-day DNS operations happen in separate accounts.
## Option 2: Use an Existing Registrar
If you already have a domain portfolio managed elsewhere, you may prefer to keep registration consolidated with your existing registrar.
Common registrars include:
- **GoDaddy** — Popular general-purpose registrar
- **Squarespace Domains** — Formerly Google Domains
- **Cloudflare** — Registrar with integrated CDN and security features
- **MarkMonitor** — Enterprise-grade registrar for large domain portfolios
This approach makes sense when you have many existing domains and want to avoid managing registrations in multiple places.
## Considerations
### Cloudflare Limitation
:::caution
Cloudflare (non-Enterprise plans) cannot delegate top-level NS records. The apex domain NS always remains on Cloudflare.
:::
This is problematic if you want a fully AWS-native DNS architecture. Your top-level domain would be managed out-of-band, separately from the rest of your infrastructure. For many organizations this is acceptable, but if you want complete control of NS delegation, use Route 53 or another registrar that supports apex NS delegation.
### Enterprise Registrars (MarkMonitor)
Enterprise-scale organizations often use MarkMonitor or similar services to manage large domain portfolios. If your organization uses one of these services:
- Consult your legal department on domain ownership consolidation
- Consider IP and trademark defense implications
- Follow your organization's existing domain governance policies
### Terraform Support
The AWS provider includes an `aws_route53domains_registered_domain` resource, but it has significant limitations:
- **Cannot register new domains** — Only manages existing registrations
- **Cannot import domains** — No `terraform import` support
- **Limited operations** — Can update nameservers and contact info, but not create or delete registrations
Domain registration is intentionally ClickOps. The reference architecture does not manage domain registration via Terraform—you must register domains manually through the AWS Console or your chosen registrar.
### Route 53 Registrar Prerequisites
If you choose AWS Route 53 as your registrar:
- **The `dns` account must be provisioned first** — You cannot register domains until this account exists
- **A credit card is required** — Route 53 domain registration requires a credit card added to the account, separate from your regular AWS billing arrangement (e.g., invoicing, consolidated billing)
### Legal Considerations
We recommend checking with your legal department on where to consolidate domain ownership. Domain registration has implications for:
- IP and trademark defense
- Corporate governance requirements
- Regulatory compliance
## Related
- [Decide on Vanity (Branded) Domains](/layers/network/design-decisions/decide-on-vanity-branded-domain)
- [Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain)
- [Decide on AWS Account Flavors and Organizational Units](/layers/accounts/design-decisions/decide-on-aws-account-flavors-and-organizational-units)
- [AWS Route 53 Domain Registration](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar.html)
---
## Decide on Hostname Scheme for Service Discovery
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
### Context and Problem Statement
We need a consistent way of naming resources. Please also see the related design decision concerning DR implications.
This is not an easily reversible decision once a convention is in use across services.
### Considered Options
1. Multi-cloud? e.g. AWS, GCP, Azure (we recommend baking the cloud into the service discovery domain. See
[Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain))
2. Multi-region? [Decide on Primary AWS Region](/layers/network/design-decisions/decide-on-primary-aws-region)
3. Pet or Cattle? → Blue/green or multi-generational
4. Short or Long region name? see
[Decide on Regional Naming Scheme](/layers/project/design-decisions/decide-on-regional-naming-scheme)
5. Does it extend all the way down to the VPC? (we do not recommend this due to excessive subnet allocations and
complications around network routing)
6. Too many DNS zone delegations add latency to DNS lookups due to having to jump between nameservers
We typically use the following convention with tenants:
- `$service.$region.$account.$tenant.$tld`
- e.g. `eks.us-east-1.prod.platform.ourcompany.com` where `platform` is the tenant
Or without tenants:
- `$service.$region.$account.$tld`
- e.g. `eks.us-east-1.prod.ourcompany.com` without a tenant
The question now is what to do for the `$service` name. Using `eks` is visually appealing, but it treats the cluster
like a named pet. If in the future we want to support multiple generations of clusters, we should account for this in
whatever convention we choose.
We may want to consider the following convention:
- `$service-$color.$region.$account.$tld` with a `CNAME` of `$service.$region.$account.$tld` pointing to the live
cluster
- e.g. `eks-blue.us-east-1.prod.ourcompany.com`
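The convention above can be expressed as a small helper function. This is a minimal sketch; the service, region, account, and TLD values are hypothetical examples.

```python
def discovery_hostname(service: str, region: str, account: str, tld: str,
                       color: str = "") -> str:
    """Compose $service[-$color].$region.$account.$tld per the convention above."""
    name = f"{service}-{color}" if color else service
    return f"{name}.{region}.{account}.{tld}"

# Stable CNAME target, and the concrete blue-generation cluster behind it:
print(discovery_hostname("eks", "us-east-1", "prod", "ourcompany.com"))
# eks.us-east-1.prod.ourcompany.com
print(discovery_hostname("eks", "us-east-1", "prod", "ourcompany.com", color="blue"))
# eks-blue.us-east-1.prod.ourcompany.com
```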
## Related
- [Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain)
---
## Decide on How to Support TLS
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
Describe why we are making this decision or what problem we are solving.
## Considered Options
### Option 1 (Recommended)
:::tip Our Recommendation is to use Option 1 because....
:::
#### Pros
-
#### Cons
-
### Option 2
#### Pros
-
#### Cons
-
### Option 3
#### Pros
-
#### Cons
-
## References
- Links to any research, ADRs or related Jiras
---
## Decide on IPv4 and IPv6 support
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
Describe why we are making this decision or what problem we are solving.
## Considered Options
### Option 1 (Recommended)
:::tip Our Recommendation is to use Option 1 because....
:::
#### Pros
-
#### Cons
-
### Option 2
#### Pros
-
#### Cons
-
### Option 3
#### Pros
-
#### Cons
-
## References
- Links to any research, ADRs or related Jiras
---
## Decide on Opting Into Non-default Regions
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
If a Region is disabled by default, you must enable it before you can create and manage resources. This is a
prerequisite to deploying anything in the region.
The following Regions are disabled by default:
- Africa (Cape Town)
- Asia Pacific (Hong Kong)
- Asia Pacific (Jakarta)
- Europe (Milan)
- Middle East (Bahrain)
When you enable a Region, AWS performs actions to prepare your account in that Region, such as distributing your IAM
resources to the Region. This process takes a few minutes for most accounts, but this can take several hours. You cannot
use the Region until this process is complete.
Source:
[https://docs.aws.amazon.com/general/latest/gr/rande-manage.html](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html)
## Procedure for enabling a region
If you need to enable a region, it must be done as a manual step, and it is convenient to do at the same time you set
up MFA for the root user of the account. At the same time, you also need to edit the STS Global endpoint settings to
generate credentials valid in all regions instead of just the default regions. When you enable a region in the AWS
console, you are prompted to do this, so just follow the prompt.
## Related
- [Decide on Primary AWS Region](/layers/network/design-decisions/decide-on-primary-aws-region)
---
## Decide on Organization Supernet CIDR Ranges
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
- We need to record all existing and provisioned CIDR ranges as a system of record, along with any additional context
  as necessary (e.g. what the CIDRs are used for).
- We need to decide on the all-encompassing CIDR for this organization for contiguous networks. It's not a requirement,
  but a strong recommendation.
- All VPC subnets should be carved out of this supernet.
[Decide on AWS Account VPC Subnet CIDR Strategy](/layers/network/design-decisions/decide-on-aws-account-vpc-subnet-cidr-strategy)
## Solution
- Document the CIDR ranges provisioned for all the accounts in ADR so we know what is in use today
- Add any other known CIDR ranges (e.g. from other accounts not under this AWS organization)
- Take into account any multi-cloud, multi-region strategies.
- [https://tidalmigrations.com/subnet-builder/](https://tidalmigrations.com/subnet-builder/)
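Once the CIDR inventory is documented, it can be validated for overlaps before allocating new ranges. The sketch below uses Python's standard `ipaddress` module; the account names and CIDRs are hypothetical examples.

```python
import ipaddress
from itertools import combinations

# Hypothetical system-of-record inventory, including known ranges
# from outside this AWS organization
allocations = {
    "plat-prod": "10.101.0.0/16",
    "plat-staging": "10.102.0.0/16",
    "legacy-datacenter": "10.0.0.0/16",
}

def find_overlaps(cidrs: dict) -> list:
    """Return all pairs of names whose CIDR ranges overlap."""
    nets = {name: ipaddress.ip_network(c) for name, c in cidrs.items()}
    return [(a, b) for (a, na), (b, nb) in combinations(nets.items(), 2)
            if na.overlaps(nb)]

print(find_overlaps(allocations))  # [] — no conflicts in this inventory
```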
## Pro Tip
Use the [https://tidalmigrations.com/subnet-builder/](https://tidalmigrations.com/subnet-builder/) with an additional
overlay from CleanshotX.
---
## Decide on Primary AWS Region
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import Steps from "@site/src/components/Steps";
While the company might operate in multiple regions, one region should be selected as the primary region. There are
certain resources that will not be geographically distributed and these should be provisioned in this default region.
When starting from scratch with a new AWS account, it's a good time to revisit decisions that might have been made
decades ago. There are many new AWS regions that might be better suited for the business.
## Considerations
### Customer Proximity
One good option is picking a default region that is closest to where the majority of end-users reside.
### Business Headquarters
One good option is picking a default region that is closest to where the majority of business operations take place.
This is especially true if most of the services in the default region will be consumed by the business itself.
### Stability
When operating on AWS, selecting a region other than `us-east-1` is advisable, as it is (or used to be) the default
region for most AWS users. It has historically had the most service interruptions, presumably because it is one of the
most heavily used regions and operates at a scale much larger than other AWS regions. We therefore advise using
`us-east-2` over `us-east-1`; the latency between these regions is minimal.
### High Availability / Availability Zones
Not all AWS regions support the same number of availability zones. A minimum of 3 AZs is recommended when operating
Kubernetes to avoid "split-brain" problems. Most AWS regions now have at least 3 AZs, but there are exceptions:
- `us-west-1` (US West, N. California) — newer accounts only have access to 2 AZs
- Some opt-in regions may have fewer AZs
See the [AWS Regions documentation](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html)
for the current AZ count per region.
### Service Availability
Not all regions offer the full suite of AWS services or receive new services at the same rate as others. Some regions
receive platform infrastructure updates slower than others. AWS also offers
[Local Zones](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/#AWS_Local_Zones) (e.g.
`us-west-2-lax-1a`) which operate a subset of AWS services.
See [AWS Regional Services List](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/) for
a complete breakdown of service availability by region.
Several services used in the reference architecture are only available in a subset of AWS regions:
1. **[AWS App Runner](https://aws.amazon.com/apprunner/)** is only available in these regions:
`us-east-1`, `us-east-2`, `us-west-2`,
`eu-central-1`, `eu-west-1`, `eu-west-2`, `eu-west-3`,
`ap-south-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`.
We use App Runner for [RunsOn](/layers/github-actions/runs-on/), our recommended solution for self-hosted GitHub runners.
1. **[Amazon Managed Grafana](https://aws.amazon.com/grafana/)** is only available in these regions:
`us-east-1`, `us-east-2`, `us-west-2`,
`eu-central-1`, `eu-west-1`, `eu-west-2`,
`ap-northeast-1`, `ap-northeast-2`, `ap-southeast-1`, `ap-southeast-2`.
We use Managed Grafana for centralized monitoring dashboards in the
[Grafana monitoring stack](/layers/monitoring/).
#### Deploying in Unsupported Regions
If your primary region doesn't support one of these services, you can still use that region by deploying the service
in a supported region and connecting it back. Depending on the service, this may require connecting the alternate region
via [Transit Gateway](/components/library/aws/tgw/hub/) with a cross-region peering connection, deploying cross-region
IAM roles, or a combination of both. These workarounds add complexity and cost (e.g. Transit Gateway cross-region
data transfer adds approximately **$80/month**).
### Cost
Not all regions cost the same to operate.
### Instance Types
Not all instance types are available in all regions.
### Latency
Latency between v1 infrastructure and v2 infrastructure could be a factor. See
[cloudping.co/grid](https://www.cloudping.co/grid) for more information.
## Recommendation
Taking all of the above into consideration, we recommend choosing a primary region that supports the services you need,
has at least 3 availability zones, and is not `us-east-1` (due to its history of service interruptions). The regions
that support both App Runner and Managed Grafana while meeting these criteria are:
- `us-east-2` (US East, Ohio)
- `us-west-2` (US West, Oregon)
- `eu-central-1` (Europe, Frankfurt)
- `eu-west-1` (Europe, Ireland)
- `eu-west-2` (Europe, London)
- `ap-southeast-1` (Asia Pacific, Singapore)
- `ap-southeast-2` (Asia Pacific, Sydney)
- `ap-northeast-1` (Asia Pacific, Tokyo)
For US-based organizations, `us-east-2` and `us-west-2` are both solid choices. They avoid the stability concerns of
`us-east-1`, offer low latency to other US regions, and support the full reference architecture without workarounds.
## References
- [https://www.geekwire.com/2017/analysis-rethinking-cloud-architecture-outage-amazon-web-services/](https://www.geekwire.com/2017/analysis-rethinking-cloud-architecture-outage-amazon-web-services/)
- [https://www.concurrencylabs.com/blog/choose-your-aws-region-wisely/](https://www.concurrencylabs.com/blog/choose-your-aws-region-wisely/)
---
## Decide on Service Discovery Domain
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import ReactPlayer from "react-player";
It's important to distinguish between branded/vanity domains (e.g. `cloudposse.com`, `slack.cloudposse.com`) used by customers and your infrastructure service discovery domains (e.g. `cloudposse.net`) used by services or internal consumers. For example, a product might have dozens of branded domains for SEO and marketing purposes, but you'll only have one infrastructure powering it. The service discovery domain is only for internal consumption. We get to define the conventions for this, not marketing. 😉 The service discovery domain will always be hosted on Route53, while the vanity domain can be hosted anywhere.
The "service discovery domain" will be further subdivided by delegating a dedicated zone to each AWS account. For
example, we don’t share DNS zones between production and staging. Therefore each account has its own service discovery domain (E.g. `prod.example.net`). See [Decide on Hostname Scheme for Service Discovery](/layers/network/design-decisions/decide-on-hostname-scheme-for-service-discovery) for more context.
This is a non-reversible decision, so we recommend taking the time to discuss with the team what they like the best.
## Considerations
### Length of Domain
:::tip
Our recommendation is to keep it short and simple.
:::
The length of the domain doesn’t technically matter, but your engineers will be typing this out all the time.
### Buy New or Reuse
:::tip We usually recommend registering a net-new domain (e.g. on route53) rather than repurposing an existing one.
Domains are too inexpensive these days to worry about the cost.
:::
The "service discovery domain" does not need to be associated with the company’s brand identity and can be something
completely separate from the company itself.
If you prefer to repurpose an existing one, then we recommend a TLD which has no existing resource records.
:::caution We do not recommend using the service discovery domain for AWS account addresses due to the cold start
problem. You cannot provision the accounts without the email & domain, and you cannot provision the email & domain in
the new accounts since they do not yet exist.
:::
### Registrar
:::tip We recommend using the Route53 Registrar from the `dns` account
:::
When registering a new domain, we have the option of using Route53’s built-in registrar or using your existing
registrar. Many enterprise-scale organizations use MarkMonitor to manage their domain portfolio. Our convention is to
use the `dns` account (see
[REFARCH-55 - Decide on AWS Account Flavors and Organizational Units](https://docs.cloudposse.com/layers/accounts/design-decisions/decide-on-aws-account-flavors-and-organizational-units))
as the registrar. Note, the AWS Route53 Registrar cannot be automated with terraform and ClickOps is still required for
domain registration.
[https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar.html](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar.html)
We recommend checking with your legal department on where they want to consolidate domain ownership. It has larger
ramifications as to IP/trademark defense.
### Choose Top-level Domain Wisely
The `.com`, `.net`, or `.org` domains are what we typically recommend due to the maturity of these TLDs.
:::caution Newer, trendier TLDs like `.io`, `.ninja`, `.sh`, etc. have unproven long-term viability. The `.io` domain,
trendy amongst startups, is actually the country-code TLD for the British Indian Ocean Territory and has been the subject of much scrutiny.
[https://thehackerblog.com/the-io-error-taking-control-of-all-io-domains-with-a-targeted-registration/](https://thehackerblog.com/the-io-error-taking-control-of-all-io-domains-with-a-targeted-registration/)
[https://fortune.com/2020/08/31/crypto-fraud-io-domain-chagos-islands-uk-colonialism-cryptocurrency/](https://fortune.com/2020/08/31/crypto-fraud-io-domain-chagos-islands-uk-colonialism-cryptocurrency/)
[https://www.spamhaus.org/statistics/tlds/](https://www.spamhaus.org/statistics/tlds/)
:::
TLDs operated by Google (`.dev`, `.app`, et al.) have mandatory HSTS (TLS) enabled in Chrome and other browsers that
adopt [https://hstspreload.org/](https://hstspreload.org/). This means that you cannot access `http://` URLs by
default, which is a security best practice, but nonetheless inconsistent with other TLDs.
[https://security.googleblog.com/2017/09/broadening-hsts-to-secure-more-of-web.html](https://security.googleblog.com/2017/09/broadening-hsts-to-secure-more-of-web.html)
### Multiple AWS Organizations
For customers using the “Model Organization” pattern (see
[Decide on AWS Organization Strategy](/layers/accounts/design-decisions/decide-on-aws-organization-strategy)) we
recommend one TLD Service Discovery domain per AWS Organization. Organizations are a top-level construct for isolation,
so we believe that extends all the way down to the Service Discovery domain.
### Multi-Cloud / On-prem
If your organization plans to operate in multiple public clouds or on-prem, we recommend adopting a convention where
each cloud gets its own service discovery domain, rather than sharing the domain across all clouds (e.g. by delegating
zones). The primary reason is to reduce the number of zones delegated, but also to decouple cloud dependencies. See the
related design decision on
[Decide on Hostname Scheme for Service Discovery](/layers/network/design-decisions/decide-on-hostname-scheme-for-service-discovery)
to understand our zone delegation strategy.
For example, suppose you had to support AWS, GCP, Azure, and on-prem; the convention could be:
- `example-aws.net`
- `example-gcp.net`
- `example-azure.net`
- `example-onprem.net`
### Internal/Public Route53 Zones
:::tip We recommend using public DNS zones for service discovery
:::
We generally prescribe using public DNS zones rather than internal zones. Security is all about Defense in Depth (DiD);
given the layers already provided by VPCs, private subnets, Security Groups, firewalls, Network ACLs, and Shield, the
added layer of obscurity from private zones has fewer benefits than detractions. The benefits of keeping the zones
public are easier interoperability between networks that do not share a common DNS server, and the ability to expose
services as necessary using the service discovery domain to third-party services (e.g. partners, vendors, integrations
like Snowflake or Fivetran, etc.).
See also our related ADR
[Proposed: Use Private and Public Hosted Zones](/resources/adrs/proposed/proposed-use-private-and-public-hosted-zones) for
additional context.
### Dedicated TLD per Organization, Delegated DNS Zones per AWS Account
Delegate one zone, named after the account, to each AWS account. For example `prod.example.net`, `staging.example.net`,
and `corp.example.net`.
:::tip We recommend delegating one zone per AWS account
:::
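As a sketch, delegating `prod.example.net` to a production account could look like the following stack configuration for the `dns-delegated` component. The domain, stack layout, and variable names here are illustrative and should be verified against the component's documented inputs.

```yaml
# Hypothetical stack configuration in the production platform account;
# verify variable names against the dns-delegated component's inputs.
components:
  terraform:
    dns-delegated:
      vars:
        zone_config:
          - subdomain: prod
            zone_name: example.net
```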
### Dedicated TLD per AWS Account
Delegate one dedicated top-level domain to each account (or some subset of accounts). For example, `example.qa` for
staging and `example.com` for prod. The benefit of this approach is that we truly share nothing between accounts. The
downside is coming up with a scalable DNS naming convention, which leads to a hybrid between DNS zone delegation and
multiple TLDs. We think this is overkill and instead recommend a dedicated TLD per AWS Organization, coupled with zones
delegated by account, per
[Decide on Hostname Scheme for Service Discovery](/layers/network/design-decisions/decide-on-hostname-scheme-for-service-discovery).
## Related
- [Decide on DNS Registrar](/layers/network/design-decisions/decide-on-dns-registrar)
- [Decide on Hostname Scheme for Service Discovery](/layers/network/design-decisions/decide-on-hostname-scheme-for-service-discovery)
- [Decide on Vanity (Branded) Domain](/layers/network/design-decisions/decide-on-vanity-branded-domain)
- [https://youtu.be/ao-2mfA5OTE](https://youtu.be/ao-2mfA5OTE)
---
## Decide on Transit Gateway Requirements
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
The [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/) connects VPCs located in any account or organization
(and on-premises networks) through a centrally managed network hub. This simplifies the work of connecting networks and
puts an end to complex VPC peering connections. Think of it like a cloud router, where each new connection is only made
once.
As you expand globally, inter-region peering connects transit gateways across regions, helping you establish a global
network. All data is automatically encrypted and never travels over the public internet.
With this in mind, the transit gateway needs to be configured to support the specific requirements of your
organization. For example:
- In which accounts will certain services live (e.g. Automation/Spacelift Runners, custom apps, etc)?
- Where will the VPN solution be deployed, if there is one?
- In which accounts will EKS clusters be deployed?
- Do certain stages need to communicate with one another (e.g. staging → prod and prod → staging)?
## Considered Options
### Option 1 (Recommended)
:::tip Cloud Posse recommends Option 1 because it enables the use of automation to perform changes and supports any
other business-driven requirements
:::
- Connect all accounts with the `auto` account to enable automation (Spacelift, GitHub Action Runners, etc)
- Connect all accounts with the `network` account to use it as the entry-point for VPN connections
- Any other requirements that are business driven (e.g. dev → staging, staging → prod, dev → dev, etc)
#### Consequences
- Can use automation (Spacelift) to handle changes to infrastructure
- Can create private EKS clusters, accessible via automation and human (via VPN)
## References
- [https://aws.amazon.com/transit-gateway/](https://aws.amazon.com/transit-gateway/)
---
## Decide on Vanity (Branded) Domains
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
import ReactPlayer from "react-player";
## Problem
We need a domain that represents your branded domain in the live environment. This will be our synthetic production
domain since we do not want to interfere with any current production environments. Also, we do not want to use any
domains currently in use for any other purpose.
:::caution **IMPORTANT**
This is not the same as
[Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain).
:::
- These domains are for public-facing endpoints
- **Prevent devs from using logic jumps based on the domain instead of using feature flags**
- **Maintain symmetry with production** (e.g. if we have `acme.com` in production redirect to `www.acme.com`, we should
be able to test/validate identical behavior with a top-level domain in staging and dev)
- `CNAME` **is not possible with a zone-apex, only** `A` **records**. (If we used `staging.acme.com`, we would
technically want to use `www.staging.acme.com` for zone-apex parity and that just gets too long)
- **Cookie domain scope should only be for staging or production but not both.** Separate domains prevent this from
happening. For example, if you set `Domain=acme.com`, cookies are available on subdomains like `staging.acme.com` and
`www.acme.com`. Therefore, we want to avoid this possibility. We have seen this affect the ability to properly QA
features.
- **CORS headers should prevent cross-origin requests between staging and production.** We want to prevent wildcards
  from permitting cross-stage requests, which could lead to staging hammering production (and vice versa)
```
Access-Control-Allow-Origin: https://*.acme.com
```
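Rather than relying on a wildcard, one way to keep stages isolated is to maintain an explicit allow-list per stage and echo back only matching origins. This is a minimal sketch, not part of the reference architecture; the domain names and function are hypothetical.

```python
from typing import Optional

# Illustrative only: per-stage CORS allow-lists so staging and production
# never accept each other's origins. Domains are hypothetical.
ALLOWED_ORIGINS = {
    "prod": {"https://acme.com", "https://www.acme.com"},
    "staging": {"https://acme-staging.com", "https://www.acme-staging.com"},
}

def cors_allow_origin(stage: str, request_origin: str) -> Optional[str]:
    """Echo the origin back only when it is allow-listed for this stage."""
    if request_origin in ALLOWED_ORIGINS.get(stage, set()):
        return request_origin
    return None  # caller omits the Access-Control-Allow-Origin header
```

Because the allow-lists never overlap, a staging origin can never obtain an `Access-Control-Allow-Origin` grant from production, and vice versa.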
## Considerations
_Use one domain for each stage (prod, staging, dev, sandbox, etc)_
- This top-level domain will be delegated to each account as necessary. Typically including `prod`, `staging`, `dev`,
and optionally `sandbox`.
- Our standard recommendation is to acquire a new domain with a regular TLD.
- One good convention is to use your namespace suffixed with `-prod` or `-staging`. e.g. `$namespace-$stage.com` would
become `cpco-prod.com`, `cpco-staging.com`, `cpco-dev.com`
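The `$namespace-$stage.com` convention above is mechanical enough to generate. A trivial sketch, using the hypothetical `cpco` namespace from the example:

```python
# Illustrative only: generating per-stage vanity domains from the
# $namespace-$stage.com convention described above.
namespace = "cpco"  # hypothetical namespace
stages = ["prod", "staging", "dev", "sandbox"]
vanity_domains = [f"{namespace}-{stage}.com" for stage in stages]
```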
:::info
Remember, these are for synthetic testing of branded domain functionality, and not the _actual_ domains your customers
will be using.
:::
## FAQ
### What are examples of vanity domains?
Think of vanity domains as all of your publicly branded properties. E.g. `apple.com` and `www.apple.com` and
`store.apple.com`.
### Why do we differentiate between vanity domains and service discovery domains?
It’s not uncommon that vanity domains are controlled by a different entity in the organization and may not even be
controlled using terraform or other IaC. Of course, we prefer to manage them with terraform using our
[dns-primary](/components/library/aws/dns-primary/) component, it’s not a _technical_ requirement.
### What’s the difference between vanity domains and service discovery domains?
Vanity domains are typically fronted by a CDN and then upstream to some load balancer (e.g. ALB). The load balancer on
the other hand will typically have a service discovery domain associated with it (e.g. `lb.uw2.prod.acme.org`). The
service discovery domain is a domain whose conventions we (operations teams) control (e.g. totally logical and
hierarchical with multiple zone delegations), while the vanity domains are governed by a different set of stakeholders
such as marketing, sales, legal, and SEO. You might have hundreds or thousands of vanity domains pointed to a single
service discovery domain.
### Why don’t we just use `staging.acme.com` and `dev.acme.com` as our vanity domains?
We want symmetry, and this is not symmetrical to what you currently use in production. E.g. your production traffic
doesn't go to `prod.acme.com`; it goes to `acme.com` or `www.acme.com`. For the same reason, we want to have something
that is symmetrical to `acme.com` for dev and staging purposes. Another example: a cookie set on `.acme.com` will work
for both `staging.acme.com` and `www.acme.com`, and that's a bad thing from a testing perspective.
## Related
- [Decide on DNS Registrar](/layers/network/design-decisions/decide-on-dns-registrar)
- [Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain)
- [https://youtu.be/ao-2mfA5OTE](https://youtu.be/ao-2mfA5OTE)
---
## Decide on VPC NAT Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
**DRAFT**
## Context and Problem Statement
## Considered Options
### Option 1 - One VPC per Region, Per Platform Account with Dedicated NAT Gateways per AZ (Recommended)
:::tip Our Recommendation is to use Option 1 because it keeps the separation by stage and ensures all egress per stage
originates from a specific set of IPs
:::
#### Pros
- Easily managed with terraform
- Easier for third parties to restrict IPs for ingress traffic
- Keep accounts symmetrical
#### Cons
- More expensive to operate as more NAT gateways are deployed (mitigated by reducing the number of gateways in lower
stages)
### Option 2 - One VPC per Region, Per Platform Account with Centralized NAT Gateways per AZ in Network Account
The Compliant Framework for Federal and DoD Workloads in AWS GovCloud (US) advocates for a strategy like this, whereby
in the Network (transit) account, there will be a DMZ with a Firewall.
#### Pros
- Ideally suited for meeting specific compliance frameworks
#### Cons
- All traffic from all accounts egress through the same NAT IPs, making it hard for third-parties to restrict access
(e.g. staging accounts can access third-party production endpoints)
- Shared NAT gateways are “singletons” used by the entire organization; changes to these gateways cannot be rolled out
  by stage. Making changes is risky, since they are in the critical path of everything.
### Option 3 - Shared VPCs with Dedicated NAT Gateways
#### Pros
- Less expensive
#### Cons
- All traffic from all accounts egress through the same NAT IPs, making it hard for third-parties to restrict access
(e.g. staging accounts can access third-party production endpoints)
- Shared VPCs are “singletons” used by multiple workloads; changes to these VPCs cannot be rolled out by stage. Making
  changes is risky.
## References
- **Compliant Framework for Federal and DoD Workloads in AWS GovCloud (US)**
[https://aws.amazon.com/solutions/implementations/compliant-framework-for-federal-and-dod-workloads-in-aws-govcloud-us/](https://aws.amazon.com/solutions/implementations/compliant-framework-for-federal-and-dod-workloads-in-aws-govcloud-us/)
- Relates to
[Decide on AWS Account VPC Subnet CIDR Strategy](/layers/network/design-decisions/decide-on-aws-account-vpc-subnet-cidr-strategy)
---
## Decide on VPC Network Traffic Isolation Policy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
We need to decide how network traffic should be partitioned and isolated between workloads within our VPCs.
## Considered Options
Both options support principles of least privilege.
### Option 1 - Use a Flat Network with two Subnets per AZ (Public, Private) (Recommended)
:::tip Our Recommendation is to use Option 1 because it is the easiest to administer and reduces the complexity of the
network architecture
:::
#### Pros
- Use Security Group ACLs to easily restrict service-to-service communication using Security Group IDs.
- Elastic network that doesn’t require advanced insights into the size and growth of the workloads
#### Cons
- Security Groups have limited flexibility across regions: e.g. Security Group ACLs only work with CIDRs across regions
(and not by Security Group ID)
- Harder to monitor traffic between workloads
### Option 2 - Use a Custom Subnet Strategy Based on Workload
#### Pros
- More easily restrict network traffic across regions and data centers
- Follows principles of Least-privilege
- Also compatible with using Security Group ACLs for an additional layer of security
- Easier to monitor traffic between workloads
#### Cons
- Requires advanced planning to identify and allocate all workloads and IP space
- Harder to scale elastically
- Puts a large burden on network administrators
- Large route tables, complicated transit gateway rules
- Requires active monitoring to ensure subnets are not at capacity
## References
- Also relates to
[Decide on AWS Account VPC Subnet CIDR Strategy](/layers/network/design-decisions/decide-on-aws-account-vpc-subnet-cidr-strategy)
- Also relates to [Decide on VPC NAT Strategy](/layers/network/design-decisions/decide-on-vpc-nat-strategy)
---
## Decide on VPC Peering Requirements (e.g. to Legacy Env)
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
VPC peering is used when we need network connectivity between VPCs - possibly residing in different accounts underneath
different organizations.
## Use cases
- Peering with legacy/heritage accounts to facilitate migrations with minimal downtime.
- Enablement of CI/CD to connect to clusters and databases within VPCs
- Service migration
- Connecting VPCs in multiple regions
## Considered Options
### VPC Peering
:::tip Our recommendation is to use VPC peering mostly for connecting third-party networks or for cost optimization over
transit gateways (when necessary).
:::
This is where we would provision a `vpc-peering` component, which requires the legacy VPC ID, the legacy account ID,
and an IAM role that can be assumed by the identity account.
Direct VPC peering may reduce costs where there’s significant traffic going between two VPCs.
[https://aws.amazon.com/about-aws/whats-new/2021/05/amazon-vpc-announces-pricing-change-for-vpc-peering/](https://aws.amazon.com/about-aws/whats-new/2021/05/amazon-vpc-announces-pricing-change-for-vpc-peering/)
### Transit Gateway
:::tip Our recommendation is to _always_ deploy a transit gateway so we can use it with Terraform automation to manage
clusters and databases. This is regardless of whether or not we deploy VPC peering.
:::
An alternative to VPC peering between accounts in AWS is to leverage the transit gateway, which we usually deploy in
most engagements to facilitate CI/CD with GitHub Actions and Spacelift automation.
This would require a transit gateway already set up and configured in the legacy account so we can peer the v2
transit-gateway with the v1 infrastructure.
[https://aws.amazon.com/transit-gateway/pricing/](https://aws.amazon.com/transit-gateway/pricing/)
Be advised that _excessive_ traffic over a transit gateway will be costly. This is why there is a use case for
leveraging both VPC peering and transit gateways. If the traffic between any two VPCs is significant, direct VPC
peering is a more cost-effective option because the traffic doesn't egress to the transit gateway and then ingress back
into the other account, effectively cutting transit costs in half.
### NAT Gateways
If there are overlapping CIDR ranges in the VPCs, we’ll also need to consider deploying private NAT gateways to
translate network addresses.
[https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html#private-nat-overlapping-networks](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html#private-nat-overlapping-networks)
---
## Review Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions for how you'll implement the network and DNS
layer of your infrastructure.
---
## Setting up DNS
import Note from '@site/src/components/Note';
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
## Requirements
Before deploying DNS, first purchase your chosen vanity and service domains in the `core-dns` account or in your chosen registrar. Refer back to the [Decide on Vanity (Branded) Domain](/layers/network/design-decisions/decide-on-vanity-branded-domain/) and [Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain/) design decisions for more information.
When registering a new domain, we have the option of using Route53’s built-in registrar or using an existing registrar. Many enterprise-scale organizations use MarkMonitor to manage their domain portfolio. Our convention is to use the `core-dns` account as the registrar. This allows us to use AWS IAM roles and policies to manage access to the registered domains and to centralize DNS management.
Note that the AWS Route53 Registrar cannot be automated with Terraform, so ClickOps is still required for domain registration.
[Registering domain names using Amazon Route 53 - Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar.html)
We recommend checking with your legal department on where they want to consolidate domain ownership, as it has larger
ramifications for IP/trademark defense.
## Deploy DNS Components
The DNS stacks are broken up into primary and delegated deployments. Primary DNS zones start with only an `NS` record
(among other defaults) and expect the owner of the associated domain to add these `NS` records to whatever console manages the respective domain.
Consult the [dns-primary component documentation](/components/library/aws/dns-primary/) for more information.
The delegated DNS zones insert their `NS` records into the primary DNS zone; thus they are mostly automated.
Consult the [dns-delegated component documentation](/components/library/aws/dns-delegated/) for more information.
To start the DNS setup, run the following workflow. This will create the primary zones first, and then establish the
delegated zones.
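The exact workflow depends on how your repository defines it; a sketch of what such an Atmos workflow might look like follows. The workflow name, file location, component instances, and stack names are all illustrative.

```yaml
# Hypothetical Atmos workflow (e.g. stacks/workflows/dns.yaml);
# component and stack names are illustrative.
workflows:
  deploy/dns:
    description: Provision primary zones first, then the delegated zones
    steps:
      - command: terraform deploy dns-primary -s core-gbl-dns
      - command: terraform deploy dns-delegated -s plat-gbl-dev
      - command: terraform deploy dns-delegated -s plat-gbl-staging
      - command: terraform deploy dns-delegated -s plat-gbl-prod
```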
## Configure Registrar `NS` Records for Domain (Click Ops)
In order to connect the newly provisioned Hosted Zone to the purchased domains, add the `NS` records to the chosen
Domain Registrar. Retrieve these with the output of `dns-primary`. These will need to be manually added to the
registered domain.
- #### Delegate Shared Service Domain, `acme-svc.com`
```shell
atmos terraform output dns-primary -s core-gbl-dns
```
- #### Delegate Platform Sandbox Vanity Domain, `acme-sandbox.com`
```shell
atmos terraform output dns-primary -s plat-gbl-sandbox
```
- #### Delegate Platform Dev Vanity Domain, `acme-dev.com`
```shell
atmos terraform output dns-primary -s plat-gbl-dev
```
- #### Delegate Platform Staging Vanity Domain, `acme-stage.com`
```shell
atmos terraform output dns-primary -s plat-gbl-staging
```
- #### Delegate Platform Prod Vanity Domain, `acme-prod.com`
```shell
atmos terraform output dns-primary -s plat-gbl-prod
```
[For more on `NS` records](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html)
### ACM
Each domain managed by the `dns-primary` component will create its own ACM certificate. However, we need additional ACM certificates to validate delegated domains.
We use a separate instance of the `acm` component to provision each service domain certificate.
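For illustration, one such instance might be configured in a stack along these lines. The domain and variable names follow the Cloud Posse `acm` component but are assumptions that should be checked against the component's documented inputs.

```yaml
# Hypothetical stack configuration for a dev service-domain certificate;
# verify variable names against the acm component's inputs.
components:
  terraform:
    acm:
      vars:
        domain_name: dev.acme-svc.com
        process_domain_validation_options: true
```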
We can deploy all required ACM certificates with a single Atmos workflow.
---
## FAQ (Network)
import Intro from '@site/src/components/Intro';
import ReactPlayer from "react-player";
import Steps from '@site/src/components/Steps';
Frequently asked questions about network and DNS with Cloud Posse's reference architecture.
## What is the difference between a Vanity and a Service Domain?
Service domains are fully automated constructions of host names without concern for marketing or branding. Although
they are not secret, the public will never see them. We use these domains for logic-driven service discovery of
resources across the organization.
Vanity domains, on the other hand, are entirely up to the requirements of the business. Marketing may require hundreds
or thousands of domains to be associated with an application, and these domains may not follow any naming pattern or
hierarchy. These are the domains used by the customer.
## Other common DNS questions
- [What are examples of vanity domains?](/layers/network/design-decisions/decide-on-vanity-branded-domain#what-are-examples-of-vanity-domains)
- [Why do we differentiate between vanity domains and service discovery domains?](/layers/network/design-decisions/decide-on-vanity-branded-domain#why-do-we-differentiate-between-vanity-domains-and-service-discovery-domains)
- [What’s the difference between vanity domains and service discovery domains?](/layers/network/design-decisions/decide-on-vanity-branded-domain#whats-the-difference-between-vanity-domains-and-service-discovery-domains)
- [Why don’t we just use staging.acme.com and dev.acme.com as our vanity domains?](/layers/network/design-decisions/decide-on-vanity-branded-domain#why-dont-we-just-use-stagingacmecom-and-devacmecom-as-our-vanity-domains)
## Can we add additional VPCs?
Yes, you can create additional VPCs, although we recommend against it. By design, we implement account-level separation
rather than VPC network data separation. So before creating a new VPC, ask yourself if the ultimate objective would be
better accomplished by a new account.
If you do want to continue with creating a new VPC, simply define a new instance of the `vpc` component in a given
stack. Give that component a new name, such as `vpc/data-1`, and then inherit the default vpc settings.
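For illustration, a second VPC instance might look like the following in an Atmos stack. The instance name, base catalog entry, and CIDR below are hypothetical; `vpc/defaults` assumes your catalog defines a baseline VPC configuration.

```yaml
# Hypothetical second VPC instance; names and CIDR are illustrative.
components:
  terraform:
    vpc/data-1:
      metadata:
        component: vpc          # reuse the same Terraform component
        inherits:
          - vpc/defaults        # assumes a catalog entry with default settings
      vars:
        name: data-1
        ipv4_primary_cidr_block: 10.101.0.0/18
```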
## How can we add an additional region?
In order to add a new network region:
1. Create a new mixin for the region: `stacks/mixins/{{ region }}/`
1. Define a new stack configuration for the region. The regions of any given account are defined by resources in the directories for the given region, `stacks/orgs/{{ namespace }}/{{ tenant }}/{{ stage }}/{{ region }}/`
1. Add the required resources to the stack file, `stacks/orgs/{{ namespace }}/{{ tenant }}/{{ stage }}/{{ region }}/network.yaml`. For example for networking, define a new VPC, connect Transit Gateway, and define Client VPN routes to the new regions.
For more, see [How to Define Stacks for Multiple Regions](/learn/maintenance/tutorials/how-to-define-stacks-for-multiple-regions)
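As a sketch, a regional network stack file following the steps above might import the region mixin and define the networking components. All paths, names, and values here are illustrative and depend on your repository layout.

```yaml
# Hypothetical stacks/orgs/acme/plat/dev/us-west-2/network.yaml;
# import paths and component names depend on your repository layout.
import:
  - mixins/region/us-west-2
components:
  terraform:
    vpc:
      vars:
        availability_zones: ["us-west-2a", "us-west-2b", "us-west-2c"]
```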
## How can we connect a legacy AWS account to our network?
Connect a legacy AWS account with VPC Peering. For more, see the
[`vpc-peering` component](/components/library/aws/vpc-peering/)
## Why not use `dns-delegated` for all vanity domains?
The purpose of the `dns` account is to host root domains shared by several accounts (with each account being delegated
its own subdomain) and to be the owner of domain registrations purchased from Amazon.
The purpose of the `dns-primary` component is to provision AWS Route53 zones for the root domains. These zones, once
provisioned, must be manually configured into the Domain Name Registrar's records as name servers. A single component
can provision multiple domains and, optionally, associated ACM (SSL) certificates in a single account.
Cloud Posse's architecture allows/requires that root domains shared by several accounts be provisioned in the `dns`
account with `dns-primary` and delegated to other accounts with each account getting its own subdomain corresponding to
a Route 53 zone in the delegated account. Cloud Posse's architecture requires at least one such domain, called "the
service domain", be provisioned. The service domain is not customer facing and is provisioned to allow fully automated
construction of host names without any concerns about how they look. Although they are not secret, the public will never
see them.
Root domains used by a single account are provisioned with the `dns-primary` component directly in that account. Cloud
Posse calls these "vanity domains". These can be whatever the marketing or PR or other stakeholders want to be.
**There is no support for `dns-primary` to provision root domains outside of the dns account that are to be shared by
multiple accounts.**
After a domain is provisioned in the `dns` account, the `dns-delegated` component can provision one or more subdomains
for each account, and, optionally, associated ACM certificates. For the service domain, Cloud Posse recommends using the
account name as the delegated subdomain (either directly, e.g. "plat-dev", or as multiple subdomains, e.g. "dev.plat")
because that allows `dns-delegated` to automatically provision any required host name in that zone.
So, the `dns` account gets a single `dns-primary` component deployed. Every other account that needs DNS entries gets a
single `dns-delegated` component, chaining off the domains in the `dns` account. Optionally, accounts can have a single
`dns-primary` component of their own, to have apex domains (which Cloud Posse calls "vanity domains"). Typically, these
domains are configured with CNAME (or apex alias) records to point to service domain entries.
The architecture does not support other configurations, or non-standard component names.
## Why should the `dns-delegated` component be deployed globally rather than regionally?
The `dns-delegated` component is designed to manage resources across all regions within an AWS account, such as with Route 53 DNS records. Deploying it at the regional level can lead to conflicts because it implies multiple deployments per account, which would cause Terraform to fight for control over the same resources.
Although the `gbl` (“global”) region is not a real AWS region, it is used as a placeholder to signify that resources are meant to be managed globally, not regionally. Deploying `dns-delegated` globally ensures there is a single source of truth for these DNS records within the account.
Deploying this component regionally can cause issues, especially if multiple regional stacks try to manage the same DNS records. This creates an anti-pattern where resources meant to be global are unintentionally duplicated, leading to configuration drift and unexpected behavior.
Please see the [global (default) region](/learn/conventions/#global-default-region) definition for more on `gbl` as a convention.
## How is the EKS network configured?
The EKS network is designed with this network and DNS architecture in mind, but it is another complex topic. For more, see the
following:
- [EKS Fundamentals](/layers/eks)
- [EKS - How To Setup Vanity Domains on an ALB](/layers/eks/tutorials/how-to-setup-vanity-domains-on-alb-eks)
:::info
Private subnets are kept large to account for EKS configurations that can consume a significant amount of IP addresses.
:::
## How is the ECS network configured?
ECS connectivity is also designed with this network and DNS architecture in mind. For more, see the following:
- [ECS Fundamentals](/layers/ecs)
- [ECS - How To Setup Vanity Domains on an ALB](/layers/ecs/tutorials/how-to-setup-vanity-domains-on-alb-ecs)
---
## Network and DNS
import ReactPlayer from "react-player";
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Learn Cloud Posse’s approach to designing robust and scalable Network and DNS architectures on AWS. We discuss our solutions to common challenges in network design, focusing on maintainability, security, and scalability. We cover essential topics such as account isolation, connecting accounts using Transit Gateway, deploying AWS Client VPN for network access, and differentiating between service and vanity domains.
This document is intended to present Cloud Posse's approach to designing Network and DNS architectures. The contents of this document assume that the reader is familiar with the basics of [networking and content delivery services in AWS](https://aws.amazon.com/products/networking/).
## The Problem
There is no single solution for Network and DNS architecture. Ultimately, the right network architecture comes down to your individual business needs. Yet overthinking the design often leads to snowflake architectures that are unique to a given business, overly complex, and difficult to maintain.
However, all network designs have a fundamental set of requirements that we can define. All networks need some private subnets and some public subnets. All networks need to be able to restrict access and enforce boundaries externally and internally, and finally all networks need some way to discover services inside the network.
When it comes to DNS, often there is no design consideration for domain management. Companies may have hundreds or
thousands of marketing domains, e.g. for SEO, and yet have no sane method for services to discover each other using DNS. At the same time, DNS needs some of the same boundaries as the network: services should be isolated and secure.
Furthermore, networking in a cloud environment is entirely software defined. We have the ability to do things that would be too tedious or too difficult to achieve in physical environments. Companies still largely rely on IPv4 networks, which have limited IP space, and we need to ensure that how we allocate networks can scale with your business and even integrate with other third party providers.
Networking and content delivery is far from trivial. There are countless designs and architectures that accomplish
similar outcomes. Ultimately our goal is to ensure that whatever we implement will enable your success.
## Our Solution
As with all infrastructure design, Cloud Posse has an opinionated solution. We aim to reduce complexity where possible, while providing secure and robust networks that are maintainable and scalable.
We have identified the most common and reusable pattern for network architecture, so that we can define reusable
building blocks for a network. We've standardized the definition of a VPC, provided distinction between marketing and
service domains for discovering services, and created a secure and reliable way for services to communicate with each
other across the accounts.
### Account Isolation
As a foundational design with the AWS Organization, we have already isolated resources into accounts. This separation creates a physical boundary between resources in AWS, including VPCs. Therefore, we can deploy a single or multiple VPCs in an account and guarantee that resources in those subnets will not be able to access resources in other accounts.
Because of this design, we recommend deploying a single VPC per account (that needs a network). Production resources
will only live in the VPC in the `plat-prod` account, and unless connected, no other VPC will be able to access those
resources. Similarly, you could deploy a `data` account (or several) with a VPC to isolate data resources further.
### Connecting Accounts
Now that we have separated networks in each account, we need to be able to connect the account networks as required. We
do this with Transit Gateway. We deploy a Transit Gateway hub and route table to the central Network account, and then
deploy Transit Gateway spokes to all other accounts. The route table in the Network account specifies which accounts are
able to access others, and the Transit Gateway spokes provide that connection.
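As a rough sketch of this hub-and-spoke wiring (the component names, tenants, and stages below are illustrative assumptions, not a definitive layout), an Atmos stack for the Network account might look something like this:

```yaml
# Illustrative only: a Transit Gateway hub deployed to the core Network account,
# listing which spoke accounts may route traffic to one another
components:
  terraform:
    tgw/hub:
      vars:
        connections:
          - account:
              tenant: plat
              stage: dev
          - account:
              tenant: plat
              stage: prod
```

Each spoke account would then deploy a corresponding `tgw/spoke` component that attaches its VPC to the hub's route table.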
### Accessing the Network
In order for a user to connect to the Network, we deploy an AWS Client VPN. This VPN is deployed to the Network account,
which already has access to all other account networks. Then we define a set of rules for the VPN itself to specify
where we want this VPN to be able to connect.
### Service Domains
We recommend deploying a Service Domain in the DNS account and then connecting all app (platform) accounts to this
service domain via subdomains. We delegate a single Hosted Zone for each account's subdomain. Since DNS is global,
multi-resource records or resources in other regions will all be included in this same zone. This is why we consider the
`dns` components as global. Furthermore, any service added gets a logically defined record in the delegated zone.
Consider the diagram below. Here `acme-svc.com` would be deployed with `dns-primary` in the DNS account, and all
subdomains would be deployed in the respective app accounts with `dns-delegated` and use logically defined and
hierarchical subdomains. For example, `echo.use1.dev.plat.acme-svc.com`, `echo.use1.prod.plat.acme-svc.com`, and
`echo.use1.auto.core.acme-svc.com`. These domains are logically constructed from the service, region, stage, tenant,
and finally the service domain.
_The `echo.use1` resource record (CNAME) is created in the `dev.plat` hosted zone with `dns-delegated`, which is
delegated from the primary hosted zone, `acme-svc.com`, in the DNS account with `dns-primary`._
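The hierarchical naming convention can be illustrated with a small helper (hypothetical, for illustration only; it simply joins the components in the order described above):

```python
def service_hostname(service: str, region: str, stage: str, tenant: str, domain: str) -> str:
    """Compose a hierarchical service hostname: service.region.stage.tenant.domain"""
    return ".".join([service, region, stage, tenant, domain])

# The dev platform endpoint for the "echo" service in us-east-1 (use1)
print(service_hostname("echo", "use1", "dev", "plat", "acme-svc.com"))
# -> echo.use1.dev.plat.acme-svc.com
```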
### Vanity Domains
Vanity domains are commonly referred to as "branded" or "marketing" domains and are used to meet the requirements of
your individual business. A business may have any number of vanity domains as required, and that list of domains will
grow as your business expands.
Unlike service domains, we do not recommend delegation for vanity domains. We do not want to share stages for vanity
domains, because we must ensure total isolation between domains in each stage. To do this, we recommend deploying at
least one domain per app account. By doing so, we can create symmetry between prod and non-prod environments and avoid
cross-site scripting. This enables proper testing of an apex or root domain, so that we can fully validate a
configuration before deploying to production.
In the diagram below, we have many domains deployed across the organization. The DNS account holds only the root service
domain deployed by `dns-primary`. All other stages control their respective vanity domains with a single `dns-primary`
component. For example, we deploy a single vanity domain to dev, `acme-dev.com`, a single vanity domain to staging,
`acme-staging.com`, and many vanity domains to production, `acme.com`, `acme-prod.com`, `acme-marketing.com`, or even
`alternate-brand.com`. These domains can be whatever your business requires.
---
## Monitor Everything
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import StepNumber from '@site/src/components/StepNumber';
import Step from '@site/src/components/Step';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import ReactPlayer from 'react-player';
With so many moving pieces, it's crucial to monitor what's happening under the hood. This includes gathering telemetry in the form of metrics and logs from your services and the underlying infrastructure. This data must be shipped somewhere to build dashboards and raise alerts that escalate to the appropriate personnel. Depending on your business needs, you may also need to monitor for security and compliance against various technical benchmarks like PCI/DSS, CIS, ISO 27001, and others.
## Set up Telemetry
Choose between Datadog or AWS-managed Prometheus and Grafana with Loki for gathering your telemetry. Datadog offers the most mature implementation, while AWS-managed Grafana and Prometheus provide lower-cost alternatives with trade-offs that still make them a good fit for many organizations.
Datadog is our most comprehensive observability solution, offering a monitoring-as-code approach using YAML configuration fully managed with Terraform. This includes Datadog monitors, custom RBAC roles, synthetic tests, child organizations, and other resources.
We show how to define reusable Service Level Indicators (SLIs) and Service Level Objectives (SLOs) for consistent implementation, helping to reduce alert fatigue by focusing on critical business-specific metrics and leveraging Datadog's advanced capabilities.
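To give a feel for the monitoring-as-code approach, a monitor defined in YAML might look roughly like the following. The exact schema depends on the component consuming it, so treat the field names, query, and thresholds here as illustrative assumptions:

```yaml
# Hypothetical catalog entry for a Datadog monitor managed via Terraform
cpu-high:
  name: "(EC2) CPU utilization above threshold"
  type: metric alert
  query: "avg(last_5m):avg:aws.ec2.cpuutilization{*} by {host} > 90"
  message: |
    CPU utilization is above 90% on {{host.name}}.
    @slack-ops-alerts
  options:
    thresholds:
      warning: 80
      critical: 90
```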
Get Started
Amazon Managed Grafana is a fully managed service by AWS in collaboration with Grafana Labs. Although it's significantly less expensive than Datadog, it is also more barebones in comparison.
- Managed Grafana allows you to query, visualize, and set alerts for your metrics, logs, and traces through a centralized dashboard where you can add multiple data sources.
- AWS Managed Prometheus collects, stores, and queries metrics from your containerized applications.
- Deploy Loki together with `promtail` for efficient log collection from containerized applications (for EKS users)
Get Started
## Monitor for Security & Compliance
Monitoring for security and compliance is essential for organizations subject to industry regulations like HIPAA or for e-commerce companies aiming for PCI compliance. Our reference architecture includes comprehensive support for AWS's suite of security-oriented services, including:
- [Security Hub](https://aws.amazon.com/security-hub/): Centralized security view
- [GuardDuty](https://aws.amazon.com/guardduty/): Threat detection service
- [Inspector](https://aws.amazon.com/inspector/): Automated security assessments
- [Macie](https://aws.amazon.com/macie/): Data security and privacy
- [AWS Config](https://aws.amazon.com/config/): Resource configuration tracking
- [IAM Access Analyzer](https://aws.amazon.com/iam/features/analyze-access/): Policy monitoring and validation
- [Shield](https://aws.amazon.com/shield/): DDoS protection
- [Audit Manager](https://aws.amazon.com/audit-manager/): Continuous audit and compliance
- [CloudTrail](https://aws.amazon.com/cloudtrail/): User activity and API usage
- [WAF](https://aws.amazon.com/waf/): Web application firewall
---
## Set Up Your Platform
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import StepNumber from '@site/src/components/StepNumber';
import Step from '@site/src/components/Step';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import ReactPlayer from 'react-player';
Your platform ensures consistent service delivery every time. A well-designed platform seamlessly integrates with your monitoring, security, and compliance systems, building on your established foundation. Automated software delivery pipelines deploy new services quickly and easily. The reference architecture supports AWS EKS, Amazon ECS, and Lambda functions.
## Container Orchestration
Choose a path for consistent delivery of your services. The reference architecture supports AWS EKS, Amazon ECS, and Lambda functions.
Elastic Container Service (ECS) is a fully-managed container orchestration service provided by Amazon Web Services (AWS) that simplifies the process of deploying, managing, and scaling containerized applications. ECS makes it easy to run and manage Docker containers on AWS infrastructure, providing a secure and scalable platform for your applications. One of the major benefits of ECS over EKS is that there is no need to upgrade the underlying platform. ECS is a managed service that is always up to date. This means that you can focus on your application and not the underlying platform.
Get Started
Amazon EKS is a managed Kubernetes service that allows you to run Kubernetes in AWS cloud and on-premises data centers. AWS handles the availability and scalability of the Kubernetes control plane, which oversees tasks such as scheduling containers, managing application availability, and storing cluster data. While AWS manages control plane upgrades, users are responsible for the worker nodes and the workloads running on them, including operators, controllers, and applications. We use Karpenter for managing node pools and support spot instances to optimize costs. Be aware that you'll need to upgrade the cluster quarterly due to the significant pace of Kubernetes innovation. Although EKS has a steeper learning curve compared to ECS, it offers greater flexibility and control, making it ideal for organizations already utilizing Kubernetes.
Get Started
## Configure GitHub Actions to enable CI/CD
Deploy self-hosted runners to automate your software delivery pipelines, within private networks.
Get Started
## Automate Your Terraform Deployments
Use GitHub Actions to automate your Terraform deployments with Atmos, ensuring consistent infrastructure across your environments.
Get Started
Once you're done setting up your platform, our attention will shift to how you ship your software by leveraging GitHub Actions and GitHub Action Workflows.
---
## Creating an Infrastructure repository
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import Note from '@site/src/components/Note';
import Admonition from '@theme/Admonition';
Learn how to create a GitHub repository to host infrastructure tools and configurations. Then configure repository settings, enable branch protection, and add collaborators.
## Create a GitHub repository
Create an empty GitHub repository to host infrastructure tools and configuration, and clone it to your host computer into a directory under your `$HOME` directory, for example `/Users/morpheus/src/infrastructure`.
## Setup basic repository settings
We recommend the following GitHub repository settings.
1. Under "Features", ensure "Issues" are enabled. We will use Issues with Atmos GitHub Actions for Terraform.
1. Under "Pull Requests", disable both "Allow merge commits" and "Allow rebase merging". We do this to keep a clean
commit history. Otherwise, the main branch history would contain the individual _"dirty"_ commits from each feature
branch.
1. Under "Pull Requests", check "Always suggest updating pull request branches"
1. Under "Pull Requests", check "Automatically delete head branches"
## Enable branch protection
1. Under branches, select "Add branch ruleset"
1. Name this ruleset whatever you'd like. For example, "Protected Branches"
1. Under "Bypass list", add your admin team. This team is able to skip the rules we are about to add, so bypass access
should be granted sparingly
1. Under "Target branches", select "Add target" and "Include all default branches"
1. Then check the following rules:
- [x] Restrict deletions
- [x] Require a pull request before merging
- [x] Block force pushes
## Add collaborators and teams
Now add in your collaborators and teams. Under "Collaborators and teams", click "Add Teams". Generally most teams should
have "Write" access, and "Admin" access should be granted sparingly.
For engagements with Cloud Posse, please grant Cloud Posse "Write" access.
## Import the Reference Architecture
With the GitHub repository prepared, we are now ready to import the Cloud Posse reference architecture.
The contents of this repository are supplied as part of our [Quickstart](/quickstart) or [Jumpstart](/jumpstart) packages. For the remainder of this guide, we will assume you have access to the reference architecture configurations.
Learn More
With your repository set up, we need to address some of the prerequisites for your workstation.
This includes building the toolbox image on your workstation.
Next Step
---
## Decide on 1Password Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
We need to determine the best strategy for using 1Password to securely share
sensitive information, such as passwords and integration keys, with
individuals and teams during engagements with Cloud Posse. This decision aims
to ensure a secure and efficient method for exchanging secrets while
considering compatibility with AWS root account credentials.
## Problem
We need a secure (cryptographic) way to share sensitive information (e.g. passwords, integration keys, credit card
numbers, etc) with individuals and teams. Ideally, the solution works with AWS so we can secure root account
credentials.
1Password is a great choice for sharing secrets with teams. The downside is that it doesn't support a cryptographically
secure means of sharing secrets with individuals. It also does not integrate with Terraform.
Please see
[Decide on MFA Solution for AWS Root Accounts](/layers/accounts/design-decisions/decide-on-mfa-solution-for-aws-root-accounts)
for additional context on why we recommend 1Password.
## Supported Options
:::caution
During the course of your engagement with Cloud Posse we require using 1Password as the secrets storage for exchanging
secrets between teams. Customers are free to use whatever system they like internally and copy secrets out of 1Password.
:::
### Use Your 1Password (Recommended)
You can share a private vault with our team for the duration of this engagement.
### Use Cloud Posse’s 1Password (Temporary Alternative)
We can share a private vault with your team for the duration of this engagement. That way your company can work on
procuring the best solution for your team. We recommend this approach if your team does not already have a viable
solution and procurement of 1Password will delay the engagement.
## Excluded Options
### PGP / GPG / PKE
Public Key Encryption is a great way to securely exchange secrets, but it's overly complicated for non-engineers.
Anything that’s complicated or not the path-of-least-resistance tends to lose in the long run.
### Slack
Slack does not provide any secure means of exchanging secrets. It should not be used.
### LastPass
LastPass does not provide a means for shared TOTP, so we cannot work in a collaborative environment.
---
## Decide on ECR Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
This decision assumes that per the previous design decision, we’ll be using ECR to store docker images.
There are a number of ways we can do this. Here are some considerations. Many of these concepts can be combined, so
we’ll just list them out.
## Considerations
- Do you have a monorepo with multiple containers built from different `Dockerfile`s? We recommend no more than one
`Dockerfile` per repo.
- What is the naming convention for repositories? We recommend naming ECR repositories after the GitHub repo.
- Lifecycle rules to restrict the number of images and avoid hard limits
## Architecture
1. We typically deploy a single ECR in the `artifacts` (or similar account like `automation`). This is our typical
recommendation. Each service will have one docker repository. All images are pushed to this repo with commit SHAs.
We'll use lifecycle rules on tags to ensure critical images are not deleted. There's no promotion of images between
ECRs and all ECRs are typically read-only from any account.
2. We can deploy multiple ECRs per service (e.g. `myservice-prod`, `myservice-dev`), then promote images between the
ECRs. We’ve only done this once and honestly don’t like it, because it adds a lot of complexity to the pipelines
without much benefit.
3. We can deploy one ECR per account or set of accounts. For example, we can have a production ECR and another one for
everything else. We’ll need to orchestrate image promotion between ECRs, which is the reason we don’t usually
recommend this.
4. Docker Lambdas require an ECR repository within the same account. For this, we’ll need to provision an additional ECR repo per
account and recommend setting up replication from a centralized repo.
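The lifecycle rules mentioned above are plain ECR lifecycle policies. A minimal sketch that caps untagged images (the count here is an illustrative value, not a recommendation for every repo) might look like this:

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images, keeping only the most recent 25",
      "selection": {
        "tagStatus": "untagged",
        "countType": "imageCountMoreThan",
        "countNumber": 25
      },
      "action": { "type": "expire" }
    }
  ]
}
```

Tagged images (e.g. commit SHAs referenced by running deployments) can be protected with additional rules keyed on `tagStatus: "tagged"` and tag prefixes.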
## Configuration
We’ll need the repo in place before we can push docker images to it. When and how should we provision it?
1. Should each service define its own ECR in the microservice repository?
- How should this be implemented? For example, if we’re practicing gitops for GitHub repository creation, then we can
also provision the ECR at the same time. If we’re not, then we’ll need to tie this into the pipelines for the GitHub
repository itself.
2. Should we centralize this in the foundational infrastructure?
- If we centralize it, how should it be configured?
- Provide a long static-list of repos
- Use the GitHub Terraform provider (`terraform-provider-github`) to generate that list automatically for all repositories
---
## Decide on Infrastructure Repository Name
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
We highly recommend using a mono-repo for your foundational infrastructure. This doesn’t preclude introducing other
infrastructure repositories in the future.
Suggestions:
- `infrastructure`
- `infra`
- `$namespace-infra` (e.g. `cpco-infra`)
- `cloud-infrastructure`
- `ops`
- `cloud-ops`
If you already have a repo by any of these names, we suggest you create a new one so we can start with a clean history.
---
## Decide on Namespace Abbreviation
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Using a common prefix for all resource names will help establish a consistent naming convention. Certain resources in
AWS are globally unique (e.g. for all customers). In order to maintain an (optimistically) unique naming convention,
prefixing all resources with a namespace goes a long way to ensuring uniqueness.
The shorter, the better. Some AWS resource names, such as S3 bucket names and ElastiCache Redis names, are limited to
roughly 63 characters or fewer.
We recommend a namespace prefix of 2-4 characters. The longer the prefix, the more confident we can be about avoiding
collisions with other globally-named resources in AWS.
Some strategies we’ve seen are removing all vowels from your company name, or taking the initials of longer company
names.
## Examples
- Intel → `intl`
- Google → `ggl`
- Cloud Posse → `cpco`
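The vowel-removal strategy can be sketched as a rough heuristic (keep the first letter, drop vowels from the rest, truncate). This is only an illustration; initials-based abbreviations like `cpco` would be derived by hand:

```python
def abbreviate(name: str, max_len: int = 4) -> str:
    """Keep the first letter, drop vowels from the rest, and truncate.

    A rough heuristic for deriving a short namespace from a company name.
    """
    name = name.lower()
    rest = "".join(c for c in name[1:] if c not in "aeiou")
    return (name[0] + rest)[:max_len]

print(abbreviate("intel"))   # -> intl
print(abbreviate("google"))  # -> ggl
```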
:::note
It is advised to keep the namespace as short as possible (< 5 chars) because of resources with low max character limits
[AWS Resources Limitations](/resources/legacy/aws-feature-requests-and-limitations)
:::
## References
- [https://github.com/cloudposse/terraform-null-label](https://github.com/cloudposse/terraform-null-label)
---
## Decide on Regional Naming Scheme
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
We need to decide how we’ll handle DR if that’s a requirement. It has
far-reaching implications on naming conventions and is not an easily
reversible decision.
Our current best practice is to use the following convention:
| **Field** | **Description** | **Example** |
| --- | --- | --- |
| `namespace` | Something short that uniquely identifies your organization. This relates to [Decide on Namespace Abbreviation](/layers/project/design-decisions/decide-on-namespace-abbreviation) | `cpco`, `eg` |
| `tenant` | | |
| `environment` (aka abbreviated region) | Indicates which AWS region the resource is in, using one of two sets of abbreviations. We use `gbl` for resources that are not specific to any region, such as IAM Roles.<br/><br/>The `fixed` abbreviations are exactly 3 letters, short and consistent so lists stay aligned on semantic boundaries. The drawback is that AWS region names have collisions when algorithmically reduced to 3 letters, so some regions (particularly in Asia) have non-obvious abbreviations.<br/><br/>The `short` abbreviations are 4 or more letters and easier to understand, usually identical to the prefix AWS uses for Availability Zone IDs in the region. The drawback is the 1 or more additional characters, which can push closer to max character constraints (e.g. target groups have a max of 32 characters).<br/><br/>We recommend the `short` abbreviations, which more closely match the canonical zone IDs used by AWS. See [AWS Region Codes](/resources/adrs/adopted/use-aws-region-codes/#region-codes) for the full breakdown. | AWS region code → fixed abbreviation (3 letter) → short abbreviation (4+ letter)<br/>`us-east-1` → `ue1` → `use1`<br/>`us-west-2` → `uw2` → `usw2`<br/>`eu-west-3` → `ew3` → `euw3`<br/>`ap-south-1` → `as0` → `aps1`<br/>`af-south-1` → `fs1` → `afs1`<br/>`cn-north-1` → `nn0` → `cnn1`<br/>`us-gov-west-1` → `gw1` → `usgw1` |
| `stage` (aka account) | The stage is where the resources operate. Our convention is to isolate every stage in a dedicated AWS member account, which is why we frequently call accounts stages. | `prod`, `at`, `network` |
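A few of the region mappings can be expressed as data, with a small lookup helper (a subset for illustration; the full list lives in the region-code reference linked above):

```python
# region -> (fixed 3-letter code, short code); subset of the abbreviation table
REGION_CODES = {
    "us-east-1": ("ue1", "use1"),
    "eu-west-3": ("ew3", "euw3"),
    "ap-south-1": ("as0", "aps1"),
    "af-south-1": ("fs1", "afs1"),
}

def environment(region: str, style: str = "short") -> str:
    """Return the abbreviated 'environment' label for an AWS region."""
    fixed, short = REGION_CODES[region]
    return short if style == "short" else fixed

print(environment("us-east-1"))           # -> use1
print(environment("eu-west-3", "fixed"))  # -> ew3
```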
These field names correspond to the variable inputs of the `terraform-null-label`
([https://github.com/cloudposse/terraform-null-label](https://github.com/cloudposse/terraform-null-label)) used
throughout all Cloud Posse terraform modules. Usage of this convention ensures consistency and reduces the likelihood of
resource name collisions while maintaining human legibility.
Using this convention, resource names look like this: `{namespace}-{tenant}-{environment}-{stage}-{name}-{attributes}`
Here are some more examples to help understand the relationships.
| **Inputs** | **Outputs** |
| ---------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- |
|
| `acme-vkng-ue2-automation-eks-cluster` |
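The composition rule can be sketched in a few lines (a simplification of `terraform-null-label`, which also handles custom delimiters, truncation, and normalization):

```python
def label_id(namespace, tenant, environment, stage, name, attributes=()):
    """Join the non-empty label components with the default "-" delimiter."""
    parts = [namespace, tenant, environment, stage, name, *attributes]
    return "-".join(p for p in parts if p)

print(label_id("acme", "vkng", "ue2", "automation", "eks-cluster"))
# -> acme-vkng-ue2-automation-eks-cluster
```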
Also, see the corresponding design decision for the
[Decide on Hostname Scheme for Service Discovery](/layers/network/design-decisions/decide-on-hostname-scheme-for-service-discovery)
as this will be impacted by whatever is chosen.
---
## Decide on Secrets Placement for Terraform
import Intro from "@site/src/components/Intro";
We need to decide where to store secrets used by Terraform. We have two
options: store secrets in each account or store them in a centralized account.
## Context
Often we need to integrate with third-party services or internal services that require API keys or other secrets. We need to decide where to store these secrets so that Terraform can access them. There are two reasonable options for storing secrets in our AWS account architecture. We need to decide which one to use.
### Option 1: Store Secrets in each account
The first option is to store the credential in the same account as the resource. For example, API keys scoped to `dev` would live in `plat-dev`.
#### Pros
- Accounts can easily access their given credentials
- IAM level boundaries are enforced between accounts
#### Cons
- Secret administrators need to access many accounts to create those secrets
- There is no centralized management for all secrets out there
### Option 2: Store Credentials in a Centralized Account
The second option is to store the credentials in a centralized account, such as `corp` or `auto`. Now you would need to share those credentials with each account, for example with [AWS RAM](https://aws.amazon.com/ram/).
#### Pros
- Centralized secrets management
- Secret administrators have a single place to manage secrets
- Once shared, resources in a given account still access their given secrets from their own account. They do not need to reach out to another account
#### Cons
- Complexity with AWS RAM
- Secret administrators must be careful to share secrets with the correct accounts
- You need to decide which account to use as the centralized management account. We could deploy `corp` for this purpose, or reuse `auto`.
## Decision
We will use AWS SSM Parameter Store for all platform-level secrets used by `infrastructure` and `terraform`.
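For example, Terraform reads a platform secret from SSM Parameter Store with the `aws_ssm_parameter` data source (the parameter path below is a hypothetical example):

```hcl
# Hypothetical parameter path; with_decryption handles SecureString values
data "aws_ssm_parameter" "api_key" {
  name            = "/myservice/api_key"
  with_decryption = true
}

# Reference it elsewhere as: data.aws_ssm_parameter.api_key.value
```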
## Related
- [Decide on Secrets Strategy for Terraform](/layers/project/design-decisions/decide-on-secrets-management-strategy-for-terraform/)
---
## Decide on Secrets Management Strategy for Terraform
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Deciding how to store secrets is crucial for securing both platform
integration and application data when using Terraform. The appropriate secret
store depends on the stack layer and must account for situations where other
infrastructure might not yet be in place (e.g. Vault, Kubernetes, etc).
We need to decide where secrets will be kept. We’ll need to be able to securely store platform integration secrets (e.g. master keys for RDS, HashiCorp Vault unseal keys, etc) as well as application secrets (any secure customer data).
One consideration is that a self-hosted solution won’t be available during cold-starts, so a hosted/managed solution
like ASM/SSM is required.
- e.g. Vault deployed as helm chart in each tenant environment using KMS keys for automatic unsealing (this chart
already exists)
- SSM Parameter Store + KMS for all platform-level secrets used by `infrastructure` and Terraform
- AWS Secrets Manager supports automatic key rotation, which almost nothing other than RDS supports, and which requires applications to be modified in order to use it to the full extent.
## Recommendation
We will use AWS SSM Parameter Store for all platform-level secrets used by `infrastructure` and Terraform.
## Related
- [Use SSM over ASM for Infrastructure](/resources/adrs/adopted/use-ssm-over-asm-for-infrastructure)
- [Decide on 1Password Strategy](/layers/project/design-decisions/decide-on-1password-strategy)
---
## Decide on Terraform Version
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Different versions of Terraform and OpenTofu offer varying features and
compatibility. Terraform 1.x versions maintain backward compatibility within
the series, providing stability for existing workflows. However, OpenTofu
offers a fully open-source alternative that aligns with Cloud Posse's values
and avoids potential legal risks introduced by Terraform's licensing changes.
To ensure consistency and compatibility across modules and components, Cloud
Posse recommends OpenTofu as the preferred choice for new projects and
workflows.
:::warning Disclaimer
The content of this document is provided for informational purposes only and should not be construed as legal advice. Cloud Posse is not qualified to provide legal counsel, and any decisions related to the use of Terraform under the Business Source License (BSL) should be reviewed by professional legal advisors. OpenTofu is recommended based on technical and operational considerations, not legal advice.
:::
## Context
Terraform is a popular infrastructure-as-code tool that allows you to define, provision, and manage cloud resources. Terraform is developed by HashiCorp. From inception to 1.5.7, all versions were permissively licensed under the OSI-approved MPL software license. All newer releases are available under the Business Source License (BSL). The BSL license imposes restrictions on the use of Terraform in certain scenarios, which may impact long-term use and compatibility with third-party tools and integrations.
Subsequently, every major open-source OS distribution (e.g. [Debian](https://wiki.debian.org/DFSGLicenses#DFSG-compatible_Licenses), [Alpine](https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.19.0#HashiCorp_packages), [Homebrew](https://formulae.brew.sh/formula/terraform)) has removed Terraform from their registries due to the BSL license. [GitLab has also removed Terraform](https://docs.gitlab.com/ee/update/deprecations.html#deprecate-terraform-cicd-templates) from their CI/CD pipelines due to the BSL license. This has created a significant challenge for organizations that rely on Terraform for infrastructure automation.
OpenTofu (previously named OpenTF) is a fork of Terraform 1.5.7 that was [accepted by the CNCF](https://www.linuxfoundation.org/press/announcing-opentofu) and is fully open-source under the MPL license. OpenTofu is designed to maintain compatibility with Terraform 1.x modules and components while providing a stable and open-source alternative to the BSL-licensed Terraform versions.
:::important
Terraform providers are not affected by this change. They are independently licensed and can be used with any version of Terraform and OpenTofu. While HashiCorp maintains some providers, the vast majority are not maintained by HashiCorp. Most importantly, the [`terraform-provider-aws`](https://github.com/hashicorp/terraform-provider-aws/blob/main/LICENSE) remains under the MPL license.
:::
### OpenTofu Supporters
[](https://landscape.cncf.io/?item=provisioning--automation-configuration--opentofu)
The project is backed by many organizations, including:
- [CNCF](https://github.com/cncf/sandbox/issues/81)
- [CloudFlare](https://blog.cloudflare.com/expanding-our-support-for-oss-projects-with-project-alexandria/)
- [OpenStreetMap](https://twitter.com/OSM_Tech/status/1745147427324133501)
- [JetBrains](https://blog.jetbrains.com/idea/2024/11/intellij-idea-2024-3/)
- [Cisco](https://blogs.cisco.com/developer/open-tofu-providers)
- [Microsoft Azure](https://github.com/Azure/Azure-Verified-Modules/discussions/1512), [`microsoft/fabric`](https://github.com/opentofu/registry/issues/1004), [`terraform-provider-azapi`](https://github.com/opentofu/registry/issues/920)
- [VMware Tanzu](https://docs.vmware.com/en/Tanzu-Cloud-Service-Broker-for-AWS/1.10/csb-aws/GUID-index.html)
- Cloud Posse
- Mixpanel
- Buildkite
- ExpressVPN
- Allianz
- Harness
- Gruntwork
- Spacelift
- Env0
- Digger
- Terrateam
- Terramate
For the full list of supporters, see the [OpenTofu website](https://opentofu.org/supporters/).
## Problem
Historically, Terraform versions pre-1.x were notoriously backward incompatible. This changed with Terraform 1.x releases, where backward compatibility is assured for all subsequent 1.x releases. While Terraform provides a stable experience, its recent shift to the BSL license introduces considerations for certain use cases, integrations, and compliance.
OpenTofu is based on Terraform 1.5.7 (the last MPL-licensed version), maintains compatibility with Terraform 1.x modules, and continues to evolve as a fully open-source project under the stewardship of the CNCF. Cloud Posse modules and components are verified to work with OpenTofu as part of our test automation, but with hundreds of modules, there may be delays in verifying full support with every new release.
OpenTofu has not been without controversy, with some organizations expressing concerns about the project's governance and sustainability. [HashiCorp sent a cease and desist](https://opentofu.org/blog/our-response-to-hashicorps-cease-and-desist/) to the project, and as a result, [its sandbox application to the CNCF was delayed](https://github.com/cncf/sandbox/issues/81#issuecomment-2331714515) (as of 2024-09-05). Nevertheless, the project has gained significant traction and support from the community, including key contributors to the original Terraform project.
## Considerations
Using OpenTofu ensures compatibility with third-party tools and integrations that are no longer supported with BSL-licensed Terraform versions. Furthermore, OpenTofu aligns with Cloud Posse's commitment to open-source principles and avoids potential compatibility and operational risks associated with BSL-licensed software.
Cloud Posse only supports MPL-licensed versions of Terraform (Terraform 1.5.7 or older), and all versions of OpenTofu.
Terraform 1.x remains backward compatible within the major version, but its BSL license imposes restrictions that may impact long-term use.
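If you need to guard a root module against silently upgrading onto a BSL-licensed release, a `required_version` constraint is one option. A minimal sketch (the bound is illustrative; note that OpenTofu's own version numbering starts at 1.6.0, so this particular pin also excludes OpenTofu and is only appropriate for projects staying on legacy Terraform):

```hcl
terraform {
  # 1.5.7 is the last MPL-licensed Terraform release; refuse anything newer.
  required_version = "<= 1.5.7"
}
```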
## Recommendation
Cloud Posse recommends using the [latest OpenTofu release](https://github.com/opentofu/opentofu/releases) for all new projects and workflows.
:::important Consult with Your Legal Team
Cloud Posse cannot provide legal advice. Organizations should consult with their legal teams to understand the implications of the BSL license on their use of Terraform.
- [HashiCorp BSL License](https://www.hashicorp.com/bsl)
- [HashiCorp BSL FAQ](https://www.hashicorp.com/bsl-faq)
:::
## Latest Releases
- **OpenTofu**: [https://github.com/opentofu/opentofu/releases](https://github.com/opentofu/opentofu/releases)
- **Terraform**: [https://github.com/hashicorp/terraform/releases](https://github.com/hashicorp/terraform/releases)
## References
- Mozilla Public License (MPL) applies to HashiCorp Terraform Versions 1.5.7 and earlier: [https://www.mozilla.org/en-US/MPL/](https://www.mozilla.org/en-US/MPL/)
- Business Source License (BSL) applies to HashiCorp Terraform Versions 1.6.0 and later: [https://www.hashicorp.com/bsl](https://www.hashicorp.com/bsl)
- Announcement of Terraform 1.6.0 and BSL License: [https://www.hashicorp.com/blog/announcing-hashicorp-terraform-1-6](https://www.hashicorp.com/blog/announcing-hashicorp-terraform-1-6)
- OpenTofu Project: [https://opentofu.io/](https://opentofu.io/)
- [OpenTofu Announces General Availability](https://www.linuxfoundation.org/press/opentofu-announces-general-availability) 2024-01-10, and ready for production use.
- [OpenTofu FAQ](https://opentofu.org/faq/)
- [OpenTofu Migration Guide](https://opentofu.org/docs/intro/migration/)
- [Atmos OpenTofu Configuration](https://atmos.tools/core-concepts/projects/configuration/opentofu)
- [Spacelift OpenTofu Configuration with Atmos](https://atmos.tools/integrations/spacelift#opentofu-support)
- [Martin Atkins](https://spacelift.io/blog/two-million-and-three-things-to-celebrate-in-the-opentofu-community) - Former core contributor of HashiCorp Terraform is now a core contributor to OpenTofu.
---
## Foundational Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Before deploying any infrastructure, there are some fundamental design decisions that shape our architecture. As you get started, be aware of these foundational choices. In our reference architecture, we've made default decisions for you, but you may want to customize them based on your specific needs.
### Review Design Decisions
Review each of the following design decisions and record your decisions now. You will need the results of these decisions
going forward.
:::tip
When working with Cloud Posse as part of our [Jumpstart](/jumpstart) or [Quickstart](/quickstart), we will review each of these decisions with you.
:::
---
## Getting Started
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ReactPlayer from "react-player";
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import SecondaryCTA from '@site/src/components/SecondaryCTA';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import CategoryList from '@site/src/components/CategoryList';
import TaskList from '@site/src/components/TaskList';
import useBaseUrl from '@docusaurus/useBaseUrl';
## Problem
As a new engineer joining the project, you are familiar with AWS, Terraform, Kubernetes, Docker, etc., but you're not familiar with the opinionated way that Cloud Posse does it — what we call the _SweetOps_ method. There are so many tools, conventions, components, and stacks that you don't know where to start.
## Solution
:::tip
Review the documentation, then start by getting your hands dirty with your first project. Don't be afraid to reach out and ask for help if you get stuck. You'll learn much faster this way and be less overwhelmed trying to master the concepts that have taken us the better part of 7 years to develop.
:::
Here you will find a quick start document for each layer of infrastructure. These documents are intended to present a common problem and a Cloud Posse solution to that problem.
Also included here are the common tools Cloud Posse uses, as well as the pertinent Design Decisions, with the context behind every decision in the Reference Architecture.
# Checklist
:::info Documentation is our top priority
Please let us know if anything is missing or holding you up. We'll make sure to prioritize it.
:::
This guide assumes you have the following:
- Terraform experience working with modules, providers, the public registry, state backends, etc.
- AWS experience, including a firm understanding of IAM, the web console, etc.
- Comfort using the command line, docker, git, terraform, etc.
- Starting from scratch, with a new AWS account, and that you have the root account credentials
If this all sounds a little bit daunting, you may want to start by reviewing the [Learning Resources](/resources/legacy/learning-resources).
### Review Foundational Design Decisions
[Review Design Decisions](/layers/project/design-decisions) and record your decisions now. You will need the results of these decisions going forward.
### Create a Repository
Follow our guide on [Creating a Repository](/layers/project/create-repository) to set up your GitHub repository with the proper settings, branch protection, and team access. This repository will serve as the foundation for your infrastructure code and configurations.
### Set up your toolbox container
Set up your development environment by following our [Prepare the Toolbox Image](/layers/project/toolbox/) guide. Geodesic is our infrastructure automation toolbox that packages all the necessary tools into a convenient Docker image.
Let's get started by creating the repository and importing the configurations provided by Cloud Posse as part of the Quickstart. If you don't have a Quickstart, consider learning more about its benefits.
---
## Prepare the Toolbox Image
import Intro from '@site/src/components/Intro';
import ActionCard from '@site/src/components/ActionCard';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import DismissibleDialog from '@site/src/components/DismissibleDialog';
import CodeBlock from '@theme/CodeBlock';
import CollapsibleText from '@site/src/components/CollapsibleText';
import PartialDockerfile from '@site/examples/snippets/Dockerfile';
import PartialMakefile from '@site/examples/snippets/Makefile';
import Note from '@site/src/components/Note';
Geodesic is a powerful Linux toolbox container designed to optimize DevOps workflows by providing essential dependencies for a DevOps toolchain, ensuring consistency and efficiency across development environments without additional software installation on your workstation. It can be extended and customized to fit specific needs by creating your own `Dockerfile` based on Geodesic, allowing you to add your favorite tools and share the container with your team for a unified working environment.
Geodesic is similar in principle to [devcontainers](https://containers.dev/). However, being a container itself, Geodesic can run anywhere containers are supported—whether on your local workstation, remotely inside clusters, or on bastion hosts. Additionally, you can use Geodesic as the base image for a devcontainer.
Geodesic in action.
## Building the Toolbox Image
Build a Geodesic infrastructure container. This container has all the tools, such as terraform and atmos, needed for building infrastructure. It's built from the `Dockerfile`, and there are predefined targets in the `Makefile` to make this easy. Customize these for your organization. Here are examples of both for reference.
{PartialDockerfile}{PartialMakefile}
The standard `Makefile` includes a number of commands. In order to build the initial, complete Geodesic image, run the following:
```bash
make all
```
On future builds, use `make run` to use the cached image.
:::tip Alias
Running `make all` installs a wrapper script named after your chosen namespace. For example, if your namespace is `acme`, simply enter the following to start your Geodesic container once it's built:
```bash
acme
```
See the `install` step of the `Makefile` for more details.
:::
Build the toolbox image locally before continuing.
Follow the [toolbox image setup steps in the How-to Get Started guide](/layers/project/#building-the-toolbox-image). In short,
run `make all`.
The container will have your local home directory mapped, so you should be able to use the `aws` CLI normally inside it once you set a profile with valid credentials. For instance, after logging in with [Atmos Auth](/layers/identity/how-to-log-into-aws/), you can run `aws sts get-caller-identity` and get a response.
Once you've verified that the infra container has access to AWS resources, we can move on to the next step.
With your repository set up, workstation configured and toolbox in hand, you're ready to get to work provisioning your infrastructure with Atmos and Terraform. The next step is to learn how to provision AWS accounts.
---
## Getting started with Geodesic v4
import Intro from '@site/src/components/Intro';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
In the landscape of developing infrastructure, there are dozens of tools that we all need on our personal machines to do our jobs. In SweetOps, instead of having you install each tool individually, we use Docker to package all of these tools into one convenient image that you can use as your infrastructure automation toolbox. We call it [Geodesic](/learn/toolchain/#geodesic) and we use it as our DevOps automation shell and as the base Docker image for all of our DevOps scripting / CI jobs.
In this tutorial, we'll walk you through how to use Geodesic to execute Terraform and other tooling. We'll be sure to talk about what is going on under the hood to ensure you're getting the full picture.
Geodesic v4 is the current version
This documentation is for Geodesic v4, which is the current version.
While it is largely the same as earlier versions, there are some significant
differences, and we have retained documentation on Geodesic v3 for those who
have yet to make the switch. Please be aware of which version of Geodesic
and which version of the documentation you are using in case you find
inconsistencies.
## Prerequisites
### System Requirements
To accomplish this tutorial, you'll need to have [Docker installed](https://docs.docker.com/get-docker/) on your local machine. **That's all**.
Although Geodesic is supplied as a Docker image, it is best used by installing a wrapper shell script
that configures the Docker container to mount directories and files from your local machine and support
running multiple `bash` shells simultaneously. To install the wrapper script, you must have write
access to either `/usr/local/bin` or `$HOME/.local/bin` on your local machine, and you must have
the install directory in your `$PATH`.
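A quick way to check this up front is a POSIX-shell test of whether either install directory is already on your `$PATH` (a minimal sketch; `path_contains` is a hypothetical helper, not part of Geodesic):

```bash
# Return success if directory $1 appears as a component of PATH-style list $2
path_contains() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Check both locations the wrapper script may be installed to
for dir in /usr/local/bin "$HOME/.local/bin"; do
  if path_contains "$dir" "$PATH"; then
    echo "$dir is on PATH"
  else
    echo "$dir is NOT on PATH"
  fi
done
```

If neither directory is on your `$PATH`, add one in your shell profile before installing the wrapper.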
### Geodesic Usage Patterns
Let's talk about a few of the ways that one can run Geodesic. Our toolbox has been built to satisfy many use cases, and each results in a different pattern of invocation:
### Install Geodesic
You can **install** Geodesic onto your local machine running `make install` with the [Makefile](https://github.com/cloudposse/geodesic/blob/main/Makefile) provided in the Geodesic repository.
### Build Your Own Toolbox
You can **build your own toolbox** on top of Geodesic. This is what Cloud Posse generally recommends to
practitioners.
We do this when we want to provide additional packages or customization to our team while building on the foundation that Geodesic provides. This is relatively easy to do by using Geodesic as your base image (e.g. `FROM cloudposse/geodesic:latest-debian`) in your own `Dockerfile`, adding your own Docker `RUN` commands or overriding environment variables, and then customizing the [Geodesic Makefile](https://github.com/cloudposse/geodesic/blob/main/Makefile) with your own `DOCKER_ORG`, `DOCKER_IMAGE`, `DOCKER_FILE`, and `APP_NAME` variables. (There are other variables you can customize as well, but these are the most common.) Then run `make build` to create a new image, `make install` to install the wrapper script that will run it, and launch it via the `APP_NAME` you configured. If you like, you can do all of this in one step by running `make all`.
### Quick Install
You can skip using `make` and install Geodesic directly.
Example: `docker run --rm cloudposse/geodesic:latest-debian init | bash` installs `/usr/local/bin/geodesic` (or `$HOME/.local/bin/geodesic`) on your local machine, which you can then execute repeatedly by simply typing `geodesic`. In this example, we're pinning the script to the `cloudposse/geodesic:latest-debian` Docker image, but we could also pin to our own image or to a specific version.
### Run Standalone
You can **run standalone** Geodesic as a standard docker container using `docker run`, but in this mode, Geodesic
will not have access to your local machine's files, so it is less useful. Some use cases are to provide tools to
debug a Kubernetes cluster by installing Geodesic as a pod in the cluster, or to use it as a CI/CD tool where the
tool takes care of mounting the required files and directories.
### Interactive Shell Example
Example: `docker run -it --rm --volume $PWD:/workspace cloudposse/geodesic:latest-debian --login` opens a bash login
shell (`--login` is our Docker `CMD` here; it's actually just [the arguments passed to the `bash` shell](https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html) which is our `ENTRYPOINT`) in our Geodesic container.
### One-Off Command Example
Example: `docker run --rm cloudposse/geodesic:latest-debian -c "terraform version"` executes the `terraform version` command as a one-off and outputs the result.
In this tutorial, we'll be running the installed `geodesic` wrapper script to take advantage of its features.
## Tutorial
### Install the Geodesic Wrapper Script
First, at your terminal, let's install the Geodesic shell!
```bash
# Since the "latest" tag changes, ensure we do not have a stale image
docker image rm cloudposse/geodesic:latest-debian # OK if image not found
docker run --rm cloudposse/geodesic:latest-debian init | bash
```
The result of running this command should look something like this:
```bash
# Installing geodesic from cloudposse/geodesic:latest-debian...
# Installed geodesic to /usr/local/bin/geodesic
```
### Start the Geodesic Shell
You should now be able to launch a Geodesic shell just by typing `geodesic` at your terminal:
```bash
geodesic
```

Exit it for now by typing `exit` or `logout` (or pressing Ctrl-D).
### Download our Tutorial Project
Great -- we've started up Geodesic so now let's do something with it. How about we pull a terraform project and apply it? To accomplish this, let's do the following:
```bash
# Change to our /localhost directory so that we can pull our project's code to our
# local machine as well as our docker container
cd /localhost
# Clone our tutorials repository
git clone https://github.com/cloudposse/tutorials
# Change to our tutorial code
cd tutorials/01-geodesic
```
Easy! And since we changed into our `/localhost` directory inside Geodesic, the `tutorials` project that we git cloned is available both in the container that we're running our shell in **and** on our local machine in our `$HOME` directory. This enables us to share files between our local machine and our container, which should start to give you an idea of the value of mounting `$HOME` into Geodesic.
### Apply our Terraform Project
Now that we've got some code to work with, let's apply it...
```bash
# Setup our terraform project
terraform init
# Apply our terraform project
terraform apply -auto-approve
```
Sweet, you should see a successful `terraform apply` with some detailed `output` data on the original Star Wars hero! 😎
Just to show some simple usage of another tool in the toolbox, how about we parse that data and get that hero's name?
### Read some data from our Outputs
Let's utilize [`jq`](https://github.com/stedolan/jq) to grab some info from that terraform project's output:
```bash
# Pipe our terraform project's output into jq so we can pull out our hero's name
terraform output -json | jq .star_wars_data.value.name
```
Again, without having to install anything, we've grabbed a tool from our toolbox and were able to use it without a second thought.
## Conclusion
The beautiful thing about all of this is that we didn't need to install anything except Docker on our local machine to make this happen. Tools like `git`, `terraform` (all versions), and `jq` all involve specific installation instructions to get up and running with the correct versions across various machines and teams, but by using Geodesic we're able to skip all of that and use a container that includes them out of the box alongside [dozens of other tools](https://github.com/cloudposse/packages/tree/master/vendor). And with our `$HOME` directory mounted to `/localhost` in the container, our Geodesic shell ends up being an extension of our local machine. That is why we call it a toolbox: it enables consistent usage of CLI tools across your entire organization!
If you want to see another usage of Geodesic, [read our next tutorial in the SweetOps series about one of our most important tools: `atmos`.](https://atmos.tools/quick-start/introduction)
---
## Migrate from Account-Map
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Note from '@site/src/components/Note';
This guide walks through migrating from the `account-map` component to the new approach using Atmos stack variables, Atmos Auth, and Atmos Functions.
:::caution Migration Support
The migration from `account-map` is a significant architectural change that has not yet been fully documented. If you're interested in migrating an existing deployment, please [reach out to Cloud Posse for support](https://cloudposse.com/support).
:::
## Overview
The `account-map` component is being deprecated in favor of a simpler approach that:
1. Stores account configuration directly in Atmos stack variables
1. Uses [Atmos Auth](https://atmos.tools/cli/auth) for authentication before Terraform runs
1. Uses [Atmos Functions](https://atmos.tools/core-concepts/stacks/templates/functions) for dynamic value resolution
1. Enables brownfield adoption where accounts already exist
For background on why `account-map` is being deprecated, see [Legacy Account Map](/layers/accounts/tutorials/legacy-account-map/).
## Why Migrate?
The legacy `account-map` component pattern required:
1. Deploying account-map component first
1. Remote state lookups for every component that needed account IDs
1. Complex `providers.tf` with remote-state module calls
1. Cross-account state access permissions
The new pattern:
1. Static account map defined once in stack defaults
1. No remote state dependencies for account lookups
1. Simpler provider configuration
1. Works with Atmos Auth for authentication
## Before You Begin
The migration involves several coordinated changes:
1. Adding account IDs to Atmos stack variables
1. Updating component providers to remove `account-map` dependency
1. Removing `aws-teams` and `aws-team-roles` components (replaced by IAM Identity Center)
1. Configuring Atmos Auth profiles and IAM Identity Center Permission Sets
1. Deploying `iam-role` components for Terraform execution
This is a breaking change that affects how Terraform authenticates and resolves account information. Plan for a maintenance window and test thoroughly in non-production environments first.
## Key Configuration
### Stack Defaults
The account map is defined in your organization's defaults file:
```yaml
# stacks/orgs/NAMESPACE/_defaults.yaml
vars:
  account_map_enabled: false
  account_map:
    full_account_map:
      acme-core-root: "111111111111"
      acme-core-audit: "222222222222"
      acme-core-auto: "333333333333"
      acme-plat-dev: "444444444444"
      acme-plat-staging: "555555555555"
      acme-plat-prod: "666666666666"
      # ... all accounts
    iam_role_arn_templates:
      terraform: "arn:aws:iam::%s:role/acme-core-gbl-auto-terraform"
    audit_account_account_name: "acme-core-audit"
    root_account_account_name: "acme-core-root"
```
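The `%s` in each `iam_role_arn_templates` entry is a printf-style placeholder that is filled in with the target account's ID (Terraform's `format()` behaves the same way). For example, using the illustrative account ID from the map above:

```bash
# Expand the terraform role ARN template for the example "auto" account
template="arn:aws:iam::%s:role/acme-core-gbl-auto-terraform"
printf "${template}\n" "333333333333"
# prints: arn:aws:iam::333333333333:role/acme-core-gbl-auto-terraform
```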
### Vendored providers.tf
Components use a vendored `providers.tf` from Atmos mixins that includes:
1. `account_map_enabled` and `account_map` variables
1. Provider configuration that uses the static account map
1. Dummy `iam_roles` module for legacy compatibility
Vendoring is configured in each component's `component.yaml`:
```yaml
# components/terraform//component.yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
spec:
  source:
    uri: github.com/cloudposse-terraform-components/aws-.git//src?ref={{ .Version }}
    version: v1.x.x
    included_paths:
      - "**/**"
    excluded_paths:
      - "providers.tf" # Exclude upstream providers.tf
  mixins:
    # Vendor the providers.tf with account-map support
    - uri: https://raw.githubusercontent.com/cloudposse-terraform-components/mixins/{{ .Version }}/src/mixins/provider-without-account-map.tf
      version: v0.3.0
      filename: providers.tf
    - uri: https://raw.githubusercontent.com/cloudposse-terraform-components/mixins/{{ .Version }}/src/mixins/account-verification.mixin.tf
      version: v0.3.0
      filename: account-verification.mixin.tf
```
Key points:
1. The upstream `providers.tf` is excluded via `excluded_paths`
1. The `provider-without-account-map.tf` mixin is vendored as `providers.tf`
1. This mixin includes the `account_map_enabled` and `account_map` variables
To vendor (or re-vendor) the component:
```bash
atmos vendor pull -c
```
The vendored `providers.tf` handles all account map logic automatically. You don't need to manually add these variables to `variables.tf` — they're included in `providers.tf`.
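For orientation, the variables the mixin provides look roughly like this (a hedged sketch only; the authoritative definitions live in the vendored mixin itself):

```hcl
variable "account_map_enabled" {
  type        = bool
  description = "When true, read account-map from remote state; when false, use var.account_map"
  default     = false
}

variable "account_map" {
  type        = any
  description = "Static account map (full_account_map, iam_role_arn_templates, etc.)"
  default     = {}
}
```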
## Migration Steps
### Step 1: Configure Atmos Auth
Set up Atmos Auth to handle authentication before Terraform runs. This replaces the dynamic role assumption that `account-map` previously provided.
See [Atmos Auth Configuration](/layers/identity/atmos-auth/) for details on configuring `atmos.yaml`.
### Step 2: Add Account Configuration to Stacks
Add the full account map configuration to your stack defaults as shown above in [Stack Defaults](#stack-defaults).
### Step 3: Vendor Component Providers
For each component, update the `component.yaml` to exclude the upstream `providers.tf` and vendor the mixin:
```bash
atmos vendor pull -c
```
### Step 4: Update remote-state.tf (If Present)
If a component has a `remote-state.tf` that references account-map, update it to use the `bypass` and `defaults` pattern:
```hcl
module "account_map" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.8.0"

  component   = "account-map"
  tenant      = var.account_map_enabled ? coalesce(var.account_map_tenant, module.this.tenant) : null
  environment = var.account_map_enabled ? var.account_map_environment : null
  stage       = var.account_map_enabled ? var.account_map_stage : null

  context = module.this.context

  # When account_map is disabled, bypass remote state and use the static account_map variable
  bypass   = !var.account_map_enabled
  defaults = var.account_map
}
```
Key points:
1. `bypass = !var.account_map_enabled` — Skips remote state lookup when disabled
1. `defaults = var.account_map` — Uses the static account_map variable instead
1. `module.account_map.outputs` works the same regardless of bypass — returns `defaults` when bypassed
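Downstream references are unchanged by the bypass. For example (a sketch; `audit_account_id` is a hypothetical local, and the key matches the stack defaults shown earlier):

```hcl
locals {
  # Resolves from the static account_map variable when bypassed,
  # or from account-map remote state when account_map_enabled is true.
  audit_account_id = module.account_map.outputs.full_account_map["acme-core-audit"]
}
```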
### Step 5: Deploy IAM Roles
Deploy the `iam-role` component to each account to provide roles for Terraform execution:
```bash
atmos workflow deploy/iam-role -f identity
```
This creates IAM roles that Atmos Auth will assume when running Terraform.
### Step 6: Configure IAM Identity Center
Set up Permission Sets and group mappings in IAM Identity Center to replace `aws-teams`:
1. See [Deploy Permission Sets](/layers/identity/aws-sso/) for configuration details
1. See [How to Log into AWS](/layers/identity/how-to-log-into-aws/) for authentication workflows
### Step 7: Remove Legacy Components
Once the new approach is working, remove the legacy components:
1. Remove `account-map` component deployments from all accounts
1. Remove `aws-teams` component deployments
1. Remove `aws-team-roles` component deployments from all accounts
1. Clean up any remaining references in stack configurations
## Identifying Legacy References
Search for components still using the old pattern:
```bash
# Find remote-state references to account-map
grep -r "account-map" components/terraform/*/remote-state.tf

# Find components without the account_map_enabled variable
for dir in components/terraform/*/; do
  if ! grep -q "account_map_enabled" "$dir/variables.tf" 2>/dev/null; then
    echo "Missing: $dir"
  fi
done
```
## Migration Checklist
When migrating a component or creating a new one:
1. **Vendor providers.tf** — Run `atmos vendor pull -c ` to get the latest providers.tf with account map support
1. **Update remote-state.tf** — If the component has a `remote-state.tf` that references account-map, update it to use the bypass pattern
1. **Verify catalog** — Ensure `account_map_enabled: false` is set (inherited from `_defaults.yaml`)
1. **Test** — Run `atmos terraform plan` to verify
## Troubleshooting
### Authentication Errors
If you encounter authentication errors after migration:
1. Verify Atmos Auth is configured correctly in `atmos.yaml`
1. Check that `iam-role` components are deployed to target accounts
1. Ensure IAM Identity Center Permission Sets have the correct policies
1. Run `atmos auth login` to refresh credentials
### Provider Configuration Errors
If Terraform reports provider configuration issues:
1. Verify components are using the new provider mixin
1. Check that `account_map` variable is defined in stack defaults
1. Run `atmos vendor pull` to update component sources
## Getting Help
This migration path is still being refined. For assistance:
1. [Cloud Posse Support](https://cloudposse.com/support) — Professional support for migrations
1. [Slack Community](https://slack.cloudposse.com) — Community discussions
## See Also
1. [Legacy Account Map](/layers/accounts/tutorials/legacy-account-map/) — Why account-map was deprecated
1. [Atmos Auth](https://atmos.tools/cli/auth) — Authentication commands
1. [Atmos Functions](https://atmos.tools/core-concepts/stacks/templates/functions) — Dynamic value resolution
1. [IAM Identity Center](/layers/identity/aws-sso/) — Permission Sets configuration
---
## Poly-Repo Strategy with account-map
import Intro from '@site/src/components/Intro';
import Note from '@site/src/components/Note';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
When managing multiple repositories with Terraform and Atmos, you need a strategy to handle `account-map` configurations effectively. This tutorial shows you how to implement a poly-repo strategy that maintains consistency across your infrastructure while leveraging `account-map` for dynamic role selection.
## Problem
You want many repositories to deploy infrastructure as code with Terraform and Atmos. However, the standard `providers.tf` configuration with Cloud Posse components requires `account-map` to dynamically select Terraform roles and target accounts.
## Solution
The poly-repo strategy with `account-map` involves maintaining a central infrastructure repository as the source of truth for account mappings, while configuring service repositories to use static `account-map` configurations. This approach enables team autonomy while ensuring consistent account mappings across your infrastructure.
## Use the infrastructure Repo for core services
Use the primary infrastructure repository to deploy all accounts, Terraform state, and core services:
- Deploy `account-map` here
- Download the `account-map` Terraform output to use later
- Optional but highly recommended: Use a custom atmos command to download the `account-map` output as an artifact
```yaml
# atmos.yaml
commands:
  - name: download-account-map
    description: This command downloads the Terraform output for account-map.
    env:
      - key: AWS_PROFILE
        value: cptest-core-gbl-root-admin # change this to a role you can assume with access to account-map
    steps:
      # initialize terraform and select the workspace
      - "atmos terraform workspace account-map core-gbl-root -s core-gbl-root"
      # download the terraform output for account-map; limit atmos to only the JSON output
      - "atmos terraform output account-map -s core-gbl-root --logs-level Off --redirect-stderr=/dev/null --skip-init -- -json | jq 'map_values(.value)' | yq -P -oy > components/terraform/account-map/account-map-output.{{ now | date \"2006-01-02\" }}.yaml"
```
You will need to update this artifact each time `account-map` changes. This is fairly infrequent, and there are potential solutions for optimization yet to be considered.
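The `jq 'map_values(.value)'` step in the command above unwraps Terraform's output JSON, which nests every output under a `value` key alongside its type. A minimal sketch with a hypothetical output document:

```bash
# terraform output -json wraps each output: {"name": {"value": ..., "type": ...}};
# map_values(.value) keeps only the values, producing a plain map.
echo '{"full_account_map":{"value":{"acme-core-root":"111111111111"},"type":["map","string"]}}' \
  | jq -c 'map_values(.value)'
# prints: {"full_account_map":{"acme-core-root":"111111111111"}}
```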
## Mock `account-map` in each poly-repo
In each of the poly-repos, configure `account-map` to use a static backend. We want remote-state to read a static YAML configuration when referring to `account-map`, rather than reading the Terraform state backend.
1. Download the `account-map` component (vendoring recommended). We do not need to apply this component, but we do need the submodules included with it for the common `providers.tf` configuration used by all other components.
2. Create a component instance for `account-map` -- typically with the stack catalog `stacks/catalog/account-map.yaml`
3. Set `remote_state_backend_type: static`
4. Paste the `account-map` YAML output under `remote_state_backend.static`. We recommend using imports to manage the output artifact separately, which makes it easier to understand and update.
Done! Now when remote-state refers to the `account-map` component, it will instead check the static remote state backend rather than S3.
```yaml
components:
  terraform:
    account-map:
      remote_state_backend_type: static
      remote_state_backend:
        static:
          # PASTE OR INCLUDE YAML HERE (make sure to match the indentation)
```
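One way to keep the catalog entry small is to let an import carry the static output. A sketch under the assumption that the generated artifact is committed into the stacks directory and itself defines `components.terraform.account-map.remote_state_backend.static` (file names and paths are illustrative):

```yaml
# stacks/catalog/account-map.yaml (sketch; paths are illustrative)
import:
  # the downloaded artifact, managed separately from this catalog entry
  - catalog/account-map/account-map-output
components:
  terraform:
    account-map:
      remote_state_backend_type: static
```

Atmos deep-merges imports, so the artifact's `static` block merges under the component configured here.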
## Frequently Asked Questions
This tutorial is based on a [GitHub discussion](https://github.com/orgs/cloudposse/discussions/49) about splitting up Atmos and Terraform into service repos.
### How do you run the components used within the non-infrastructure repos? Is the directory format the same?
You can use components the same way as in the infrastructure repo, but this depends on your Atmos configuration. You will need to set the base paths for components and stacks, then execute Atmos as usual.
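For example, a minimal `atmos.yaml` in a service repo only needs to point at that repo's own components and stacks. A sketch (paths and the name pattern are illustrative and should match your repo layout):

```yaml
# atmos.yaml (sketch)
base_path: "."
components:
  terraform:
    base_path: "components/terraform"
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{tenant}-{environment}-{stage}"
```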
### Is it possible to reuse components in the infrastructure repo?
At the moment, no. However, we are developing just-in-time vendoring for components, which will let you specify a remote source for a component in the stack config. It's not generally available yet -- stay tuned.
### How does the GitHub action run? Does it assume the same terraform IAM role?
We usually deploy an `aws-teams` role (typically called `gitops`) for GitHub Actions. That team can plan and/or apply Terraform by assuming the standard `terraform` role from `aws-team-roles`. The infrastructure repo assumes the `gitops` team role using GitHub OIDC.
You have a few options. You could reuse the same `gitops` role by adding the new repos to its allowed-repos list. However, if you want finer-scoped privileges in the alternate repos, you could create an additional AWS Team with limited access.
### What about guardrails? How do you only allow updates to specific components or resources?
Both by the set of included components and by AWS Team permissions. The repo only includes the components it is responsible for managing, and is therefore only aware of that set of components.
However, you should still scope the AWS Team to the least privilege required for that set of resources. You could also separate the Terraform state backend to create an additional boundary between the sets of resources.
### How do you prevent conflicts with the same component deployed in the same stack across multiple repos?
We don't recommend deploying the same component to the same stack from multiple repos. The idea with poly-repos is that an app team can manage their own infrastructure independently, so the same infrastructure should not be controlled by two separate sources of code.
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These are some additional tutorials that will help you along with the foundational components.
---
## AWS IAM Access Analyzer
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
AWS IAM Access Analyzer identifies resources shared with external entities and detects unused IAM permissions,
enabling you to implement least-privilege access and identify unintended access to your resources.
## Overview
Access Analyzer provides:
- **External Access Analysis**: Identifies resources shared with external principals outside your organization
- **Unused Access Analysis**: Detects unused IAM roles, users, and permissions
- **Policy Validation**: Validates IAM policies against best practices
- **Policy Generation**: Generates least-privilege policies based on CloudTrail activity
- **Multi-account Coverage**: Organization-wide analysis from a central account
## Analyzer Types
This component creates two types of organization-wide analyzers:
| Analyzer Type | Purpose | Findings |
|---------------|---------|----------|
| `ORGANIZATION` | External access analysis | Public access, cross-account access, cross-organization access |
| `ORGANIZATION_UNUSED_ACCESS` | Unused access analysis | Unused roles, users, permissions (configurable threshold) |
## Supported Resources
External access analyzer monitors:
- Amazon S3 buckets and access points
- IAM roles and policies
- AWS KMS keys
- AWS Lambda functions and layers
- Amazon SQS queues
- AWS Secrets Manager secrets
- Amazon SNS topics
- Amazon EBS volume snapshots
- Amazon RDS DB snapshots
- Amazon ECR repositories
- Amazon EFS file systems
## Architecture
```mermaid
flowchart LR
subgraph root["Root Account"]
step1["STEP 1: Delegate"]
end
subgraph security["Security Account"]
step2["STEP 2: Create Analyzers"]
dashboard["Access Analyzer Dashboard"]
end
subgraph members["Member Accounts"]
member["Auto-analyzed"]
end
root -->|"Delegation"| security
members -->|"Findings"| dashboard
```
## Deployment
Access Analyzer uses a **2-step delegated administrator** deployment model.
### Step 1: Deploy to Organization Management Account
This step requires root account access (such as with the `managers` profile).
```yaml
# core-gbl-root
components:
  terraform:
    aws-access-analyzer/root:
      metadata:
        component: aws-access-analyzer
      backend:
        s3:
          role_arn: null
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        organizations_delegated_administrator_enabled: true
        service_linked_role_enabled: true
        # Analyzers created in security account
        accessanalyzer_organization_enabled: false
        accessanalyzer_organization_unused_access_enabled: false
```
```bash
atmos terraform apply aws-access-analyzer/root -s core-gbl-root
```
### Step 2: Deploy Organization Analyzers
```yaml
# core-ue1-security
components:
  terraform:
    aws-access-analyzer/org-settings:
      metadata:
        component: aws-access-analyzer
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        # Create organization analyzers
        accessanalyzer_organization_enabled: true
        accessanalyzer_organization_unused_access_enabled: true
        unused_access_age: 30
        # Already delegated
        organizations_delegated_administrator_enabled: false
```
```bash
atmos terraform apply aws-access-analyzer/org-settings -s core-ue1-security
```
## Multi-Region Deployment
Access Analyzer is a regional service. Deploy analyzers to each region:
```bash
# Delegation (once, globally)
atmos terraform apply aws-access-analyzer/root -s core-gbl-root
# Analyzers per region
atmos terraform apply aws-access-analyzer/org-settings -s core-ue1-security
atmos terraform apply aws-access-analyzer/org-settings -s core-uw2-security
```
## Unused Access Configuration
Configure the threshold for unused access findings:
```yaml
components:
  terraform:
    aws-access-analyzer/org-settings:
      vars:
        accessanalyzer_organization_unused_access_enabled: true
        # Days without use before generating findings (default: 30)
        unused_access_age: 30
```
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `organizations_delegated_administrator_enabled` | Enable delegation to security account | `false` |
| `service_linked_role_enabled` | Create the service-linked role | `true` |
| `accessanalyzer_organization_enabled` | Enable external access analyzer | `false` |
| `accessanalyzer_organization_unused_access_enabled` | Enable unused access analyzer | `false` |
| `unused_access_age` | Days without use before generating findings | `30` |
## Cost Considerations
- **External Access Analyzer**: No additional charge (included with AWS account)
- **Unused Access Analyzer**: Charged per IAM role or user analyzed per month
## Security Hub Integration
Access Analyzer findings are automatically sent to Security Hub when both services are enabled.
## See Also
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Aggregates Access Analyzer findings
- [AWS Config](/layers/security-and-compliance/aws-config/) - Monitors IAM policy configurations
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS IAM Access Analyzer Documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html)
- [aws-access-analyzer Component](https://github.com/cloudposse-terraform-components/aws-access-analyzer)
- [Access Analyzer Findings](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-findings.html)
- [Unused Access Analysis](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-unused-access.html)
---
## AWS Audit Manager
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
AWS Audit Manager helps you continuously audit your AWS usage to simplify compliance assessment with regulations
and industry standards. It automates evidence collection and generates audit-ready reports.
## Overview
Audit Manager provides:
- **Prebuilt Frameworks**: CIS, FedRAMP, GDPR, HIPAA, PCI DSS, SOC 2, NIST 800-53
- **Automated Evidence**: Collects evidence from CloudTrail, Config, Security Hub, and other services
- **Custom Controls**: Build custom frameworks and controls for specific requirements
- **Assessment Reports**: Cryptographically verified reports with organized evidence
- **Multi-account Support**: Assessments across multiple AWS accounts via Organizations
## Supported Compliance Frameworks
| Framework | Description |
|-----------|-------------|
| **PCI DSS** | Payment Card Industry Data Security Standard |
| **HIPAA** | Health Insurance Portability and Accountability Act |
| **SOC 2** | Service Organization Control 2 |
| **NIST 800-53** | National Institute of Standards and Technology (Rev 4 and Rev 5) |
| **FedRAMP** | Federal Risk and Authorization Management Program |
| **GDPR** | General Data Protection Regulation |
| **ISO 27001** | Information Security Management |
| **CIS** | Center for Internet Security benchmarks |
| **AWS Control Tower** | AWS Control Tower guardrails |
## Architecture
Audit Manager uses a **unique single-step deployment model**:
```mermaid
flowchart LR
subgraph root["Root Account"]
single_step["Enable + Delegate"]
end
subgraph security["Security Account"]
dashboard["Audit Manager Dashboard"]
end
subgraph members["Member Accounts"]
member["Auto-collected evidence"]
end
root -->|"Delegation"| security
members -->|"Evidence"| dashboard
```
## Deployment Model Comparison
| Aspect | AWS Audit Manager | Other Security Services |
|--------|-------------------|------------------------|
| **Deployment Steps** | 1 step (root only) | 2-3 steps |
| **Member Account Setup** | Automatic | Auto-enabled by admin |
| **Provisioning Location** | Root account only | Root + Security account |
## Deployment
Audit Manager uses a **single-step** deployment from the root account.
This deployment requires root account access (such as with the `managers` profile).
### Stack Configuration
```yaml
# core-ue1-root
components:
  terraform:
    aws-audit-manager/root:
      metadata:
        component: audit-manager
      backend:
        s3:
          role_arn: null
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        privileged: true
        deregister_on_destroy: true
```
### Provisioning
```bash
atmos terraform apply aws-audit-manager/root -s core-ue1-root
```
This single deployment:
- Enables Audit Manager in the organization
- Delegates administration to the security account
- Begins automatic evidence collection from member accounts
## Multi-Region Deployment
Deploy to each region where you want to run compliance assessments:
```bash
# us-east-1
atmos terraform apply aws-audit-manager/root -s core-ue1-root
# us-west-2
atmos terraform apply aws-audit-manager/root -s core-uw2-root
```
## Assessment Report S3 Buckets
Create S3 buckets in the delegated administrator account for assessment reports:
```yaml
# core-ue1-security
components:
  terraform:
    audit-manager-reports-bucket:
      metadata:
        component: s3-bucket
      vars:
        enabled: true
        name: audit-manager-reports
        s3_object_ownership: "BucketOwnerEnforced"
        versioning_enabled: false
```
```bash
atmos terraform apply audit-manager-reports-bucket -s core-ue1-security
```
## Creating Assessments
After deployment, create assessments in the delegated administrator account:
1. **Via Console** — AWS Audit Manager → Assessments → Create assessment
1. **Via CLI** — Use `aws auditmanager` CLI commands
1. **Via Terraform** — Use `aws_auditmanager_assessment` resource
### Assessment Components
| Component | Description |
|-----------|-------------|
| **Framework** | Choose prebuilt or custom framework |
| **Scope** | Select AWS accounts and services to assess |
| **Roles** | Define who can access the assessment |
| **Report Destination** | Specify S3 bucket for reports |
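When scripting assessment creation (for example with `aws auditmanager create-assessment`), the scope is passed as a JSON document listing accounts and services. A minimal sketch of building that document (the account ID and service names are placeholders):

```python
import json

def build_assessment_scope(account_ids, services):
    """Build the JSON `scope` document accepted by
    `aws auditmanager create-assessment --scope`."""
    return {
        "awsAccounts": [{"id": account_id} for account_id in account_ids],
        "awsServices": [{"serviceName": name} for name in services],
    }

scope = build_assessment_scope(["111111111111"], ["s3", "iam"])
print(json.dumps(scope, indent=2))
```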
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `delegated_administrator_account_name` | Account to delegate administration | `core-security` |
| `deregister_on_destroy` | Deregister on terraform destroy | `true` |
| `privileged` | Required for root account deployment | `true` |
## Evidence Sources
Audit Manager collects evidence from:
- **AWS CloudTrail**: API activity logs
- **AWS Config**: Configuration compliance data
- **AWS Security Hub**: Security findings
- **AWS License Manager**: License compliance
- **Manual Evidence**: Policy documents, training records
## Cost Considerations
- **Assessment Price**: Based on number of evidence items collected per month
- **Evidence Storage**: S3 storage costs for assessment reports
- **Free Tier**: Limited free usage during first 13 months
- **Regional**: Costs are per region
## See Also
- [AWS CloudTrail](/layers/security-and-compliance/aws-cloudtrail/) - Primary evidence source for API activity
- [AWS Config](/layers/security-and-compliance/aws-config/) - Evidence source for configuration compliance
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Evidence source for security findings
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS Audit Manager Documentation](https://docs.aws.amazon.com/audit-manager/)
- [aws-audit-manager Component](https://github.com/cloudposse-terraform-components/aws-audit-manager)
- [Audit Manager Frameworks](https://docs.aws.amazon.com/audit-manager/latest/userguide/frameworks.html)
- [Evidence Collection](https://docs.aws.amazon.com/audit-manager/latest/userguide/evidence.html)
---
## AWS CloudTrail
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import TaskList from '@site/src/components/TaskList';
AWS CloudTrail records API activity across your AWS Organization, providing an audit trail for security analysis,
compliance auditing, and operational troubleshooting.
## Overview
AWS CloudTrail provides:
- **API Activity Logging**: Records all API calls made in your AWS accounts
- **Organization Trail**: Single trail that logs activity from all accounts automatically
- **Log File Validation**: Cryptographic signatures to verify log integrity
- **CloudWatch Integration**: Real-time analysis and alerting on API activity
- **Centralized Storage**: All logs stored in the audit account S3 bucket
## Architecture
```mermaid
flowchart LR
subgraph members["Member Accounts"]
member["All Accounts"]
end
subgraph audit["Audit Account"]
trail["Organization Trail"]
bucket["CloudTrail S3 Bucket"]
end
members -->|"API Logs"| trail
trail --> bucket
```
## Deployment
CloudTrail uses a simple deployment model: deploy the organization trail once, and it covers all accounts.
### Prerequisites
- Deploy `cloudtrail-bucket` component in the audit account
- Enable `cloudtrail.amazonaws.com` service access principal in AWS Organizations
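The trusted-access prerequisite is typically handled in the `account` component by adding the service principal. A sketch (merge this entry into your existing `aws_service_access_principals` list rather than replacing it):

```yaml
components:
  terraform:
    account:
      vars:
        aws_service_access_principals:
          - cloudtrail.amazonaws.com
```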
### Stack Configuration
```yaml
# stacks/catalog/cloudtrail.yaml
components:
  terraform:
    cloudtrail:
      vars:
        enabled: true
        cloudtrail_bucket_environment_name: ue1
        cloudtrail_bucket_stage_name: audit
        cloudwatch_logs_retention_in_days: 730
        is_organization_trail: true
        is_multi_region_trail: true
        include_global_service_events: true
        enable_log_file_validation: true
        enable_logging: true
```
### Provisioning
Deploy the organization trail from the audit account:
```bash
atmos terraform apply cloudtrail -s core-gbl-audit
```
For per-account trails (not recommended for most use cases):
```yaml
components:
  terraform:
    cloudtrail:
      vars:
        enabled: true
        is_organization_trail: false
        # ... other configuration
```
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `is_organization_trail` | Create trail for all accounts in organization | `false` |
| `is_multi_region_trail` | Create trail in all regions | `true` |
| `include_global_service_events` | Include global services (IAM, STS) | `true` |
| `enable_log_file_validation` | Enable log integrity validation | `true` |
| `cloudwatch_logs_retention_in_days` | Log retention period (CIS recommends 365) | `365` |
| `cloudtrail_bucket_environment_name` | Environment where bucket is deployed | - |
| `cloudtrail_bucket_stage_name` | Stage where bucket is deployed | - |
## CloudWatch Logs Integration
CloudTrail can send logs to CloudWatch for real-time analysis:
```yaml
components:
  terraform:
    cloudtrail:
      vars:
        enabled: true
        cloudwatch_logs_retention_in_days: 730
        cloudwatch_log_group_class: STANDARD
```
This enables:
- Real-time metric filters for specific API activities
- CloudWatch Alarms for security events
- Integration with SIEM systems
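As an illustration of what a metric filter does, here is a sketch in Python of the matching performed by a CIS 3.1-style "unauthorized API calls" filter (the filter pattern from the CIS benchmark is `{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }`; the sample events are made up):

```python
from fnmatch import fnmatch

def is_unauthorized_api_call(event: dict) -> bool:
    """Return True when a CloudTrail record matches the CIS 3.1-style
    unauthorized-API-call filter pattern."""
    error = event.get("errorCode", "")
    return fnmatch(error, "*UnauthorizedOperation") or fnmatch(error, "AccessDenied*")

events = [
    {"eventName": "RunInstances", "errorCode": "Client.UnauthorizedOperation"},
    {"eventName": "AssumeRole", "errorCode": "AccessDenied"},
    {"eventName": "DescribeInstances"},
]
print(sum(is_unauthorized_api_call(e) for e in events))  # 2
```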
## KMS Encryption
For additional security, enable KMS encryption for CloudTrail logs:
```yaml
components:
  terraform:
    cloudtrail:
      vars:
        enabled: true
        kms_key_enabled: true
        kms_key_alias: "alias/cloudtrail"
```
## CIS Benchmark Compliance
CloudTrail configuration supports CIS AWS Foundations Benchmark requirements:
- **CIS 3.1-3.14**: CloudWatch Log Metric Filters and Alarms
- **CIS 3.x**: Log file validation enabled
- **CIS 3.x**: Multi-region trail enabled
- **CIS 3.x**: CloudTrail enabled in all regions
## See Also
- [AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) - Analyzes CloudTrail logs for threat detection
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Monitors CloudTrail CIS compliance
- [AWS Audit Manager](/layers/security-and-compliance/aws-audit-manager/) - Uses CloudTrail for compliance evidence
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS CloudTrail Documentation](https://docs.aws.amazon.com/cloudtrail/)
- [aws-cloudtrail Component](https://github.com/cloudposse-terraform-components/aws-cloudtrail)
- [CIS AWS Foundations Benchmark](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html)
- [CloudTrail Log File Validation](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html)
---
## AWS Config
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import TaskList from '@site/src/components/TaskList';
AWS Config provides configuration compliance monitoring and resource inventory across your AWS Organization.
It continuously evaluates resources against compliance rules and maintains configuration history for auditing.
## Overview
AWS Config is responsible for:
- **Configuration Recording**: Deploys Configuration Recorders in each account and region to track resource configurations
- **Centralized Aggregation**: Configures the security account as the central aggregation point for all AWS Config data
- **Compliance Monitoring**: Deploys conformance packs to monitor resources for compliance (CMMC, CIS, HIPAA)
- **Configuration Storage**: Delivers configuration snapshots and history to a centralized S3 bucket in the audit account
- **Organization Conformance Packs**: Deploys organization-wide conformance packs from the management account
## Architecture
```mermaid
flowchart LR
subgraph mgmt["Root Account"]
mgmt_packs["Conformance Packs"]
end
subgraph security["Security Account"]
aggregator["Config Aggregator"]
end
subgraph audit["Audit Account"]
bucket["Config S3 Bucket"]
end
subgraph members["Member Accounts"]
recorder["Config Recorders"]
end
mgmt -->|"Applies packs"| members
members -->|"Aggregates"| security
members -->|"Snapshots"| audit
```
## Deployment
AWS Config uses a **per-account** deployment model with organization conformance packs.
### Prerequisites
- Deploy `config-bucket` component in the audit account
- Enable `config.amazonaws.com` and `config-multiaccountsetup.amazonaws.com` service access principals
### Stack Configuration
#### Defaults Configuration
```yaml
# stacks/catalog/aws-config/defaults.yaml
components:
  terraform:
    aws-config/defaults:
      metadata:
        type: abstract
        component: aws-config
      vars:
        enabled: true
        default_scope: account
        create_iam_role: true
        account_map_tenant: core
        root_account_stage: root
        global_environment: gbl
        global_resource_collector_region: us-east-1
        central_resource_collector_account: security
        config_bucket_component_name: config-bucket
        config_bucket_tenant: core
        config_bucket_env: ue1
        config_bucket_stage: audit
        sns_encryption_key_id: "alias/aws/sns"
        conformance_packs: []
```
#### Member Account Configuration
```yaml
# stacks/catalog/aws-config/member-account.yaml
import:
  - catalog/aws-config/defaults
components:
  terraform:
    aws-config:
      metadata:
        component: aws-config
      inherits:
        - aws-config/defaults
```
#### Organization Configuration (Root Account)
```yaml
# stacks/catalog/aws-config/organization.yaml
import:
  - catalog/aws-config/defaults
components:
  terraform:
    aws-config:
      metadata:
        component: aws-config
      inherits:
        - aws-config/defaults
      vars:
        default_scope: organization
        conformance_packs:
          - name: Operational-Best-Practices-for-CIS-AWS-v1.4-Level2
            conformance_pack: "https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml"
            parameter_overrides: {}
```
### Provisioning Order
:::caution Important
Deploy member accounts **BEFORE** the organization account. Organization conformance packs require all member accounts
to have configuration recorders already set up.
:::
**Step 1: Deploy to Member Accounts (Global Collector Region First)**
```bash
# Core tenant accounts
atmos terraform apply aws-config -s core-ue1-audit
atmos terraform apply aws-config -s core-ue1-security
atmos terraform apply aws-config -s core-ue1-network
# Platform tenant accounts
atmos terraform apply aws-config -s plat-ue1-dev
atmos terraform apply aws-config -s plat-ue1-staging
atmos terraform apply aws-config -s plat-ue1-prod
```
**Step 2: Deploy to Organization Account (Last)**
```bash
atmos terraform apply aws-config -s core-ue1-root
```
## Conformance Packs
Conformance packs define AWS Config rules for compliance monitoring. This component supports:
- **Remote URLs**: AWS-managed packs from GitHub
- **Local Files**: Custom packs from your component directory
### Example Configuration
```yaml
conformance_packs:
  # Remote URL (AWS Labs managed packs)
  - name: CIS-AWS-v1.4-Level2
    conformance_pack: "https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml"
    parameter_overrides:
      AccessKeysRotatedParamMaxAccessKeyAge: "45"
  # Local file (relative to component directory)
  - name: Custom-CMMC-Pack
    conformance_pack: "conformance-packs/custom-cmmc-pack.yaml"
    parameter_overrides: {}
  # Override scope for specific pack
  - name: Org-Wide-Security-Pack
    conformance_pack: "https://example.com/pack.yaml"
    scope: organization
    parameter_overrides: {}
```
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `global_resource_collector_region` | Region that tracks global resources (IAM, Route53) | `us-east-1` |
| `central_resource_collector_account` | Account that aggregates all Config data | `security` |
| `default_scope` | Scope for conformance packs (`account` or `organization`) | `account` |
| `create_iam_role` | Create IAM role for Config recorder | `true` |
| `sns_encryption_key_id` | KMS key for SNS topic encryption (CMMC compliance) | `alias/aws/sns` |
## Known Issues
### IAM Inline Policy Check - Service-Linked Roles
The `IAM_NO_INLINE_POLICY_CHECK` rule flags AWS Service-Linked Roles (SLRs) as NON_COMPLIANT. This is a **known false
positive** because AWS Service-Linked Roles must have inline policies by design.
**Common SLRs that trigger this finding:**
- `AWSServiceRoleForAmazonGuardDuty`
- `AWSServiceRoleForConfig`
- `AWSServiceRoleForSecurityHub`
- `AWSServiceRoleForAccessAnalyzer`
- `AWSServiceRoleForAmazonMacie`
- `AWSServiceRoleForInspector2`
**Recommended Action**: Document these as accepted false positives and focus remediation on user-created roles.
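When triaging these findings programmatically, service-linked roles are easy to exclude because they always live under the `/aws-service-role/` IAM path. A minimal sketch of that filter (the ARNs are made up):

```python
def is_service_linked_role(role_arn: str) -> bool:
    """Service-linked roles always live under the /aws-service-role/ IAM path."""
    return ":role/aws-service-role/" in role_arn

findings = [
    "arn:aws:iam::111111111111:role/aws-service-role/guardduty.amazonaws.com/AWSServiceRoleForAmazonGuardDuty",
    "arn:aws:iam::111111111111:role/app-deployer",
]
# Keep only findings on user-created roles, where remediation should focus.
actionable = [arn for arn in findings if not is_service_linked_role(arn)]
print(actionable)
```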
## See Also
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Aggregates Config compliance findings
- [AWS Audit Manager](/layers/security-and-compliance/aws-audit-manager/) - Uses Config for compliance evidence
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS Config Documentation](https://docs.aws.amazon.com/config/)
- [aws-config Component](https://github.com/cloudposse-terraform-components/aws-config)
- [AWS Conformance Packs](https://github.com/awslabs/aws-config-rules/tree/master/aws-config-conformance-packs)
- [CIS AWS Foundations Benchmark](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html)
---
## AWS GuardDuty
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import TaskList from '@site/src/components/TaskList';
AWS GuardDuty is a managed threat detection service that continuously monitors for malicious activity and unauthorized
behavior across your AWS accounts using ML-based analysis.
## Overview
GuardDuty provides:
- **Threat Detection**: ML-based analysis of CloudTrail, VPC Flow Logs, and DNS logs
- **Threat Intelligence**: Integration with AWS and partner threat intelligence feeds
- **Real-time Alerts**: Notifications through CloudWatch Events and SNS
- **Multi-account Support**: Centralized management across your organization
- **Protection Features**: S3, EKS, Lambda, Malware, and Runtime monitoring
## Supported Protection Features
| Feature | Description |
|---------|-------------|
| **S3 Protection** | Monitors S3 data events for suspicious activities |
| **EKS Audit Log Monitoring** | Analyzes Kubernetes audit logs from EKS clusters |
| **Malware Protection** | Scans EBS volumes for malware |
| **Lambda Protection** | Monitors Lambda function network activity |
| **Runtime Monitoring** | Runtime threat detection for EC2, ECS, and EKS with automatic agent management |
## Architecture
```mermaid
flowchart LR
subgraph root["Root Account"]
step2["STEP 2: Delegate"]
end
subgraph security["Security Account"]
step1["STEP 1: Create Detector"]
step3["STEP 3: Configure Org"]
end
subgraph members["Member Accounts"]
member["Auto-enrolled"]
end
root -->|"Delegation"| security
members -->|"Findings"| security
```
## Deployment
GuardDuty uses a **3-step delegated administrator** deployment model.
### Prerequisites
- Enable GuardDuty trusted access in AWS Organizations by adding `guardduty.amazonaws.com` to `aws_service_access_principals` in your `account` component
Alternatively, enable it via the AWS CLI:
```bash
aws organizations enable-aws-service-access --service-principal guardduty.amazonaws.com
```
### Step 1: Deploy to Delegated Administrator Account
```yaml
# core-ue1-security
components:
  terraform:
    aws-guardduty/delegated-administrator:
      metadata:
        component: guardduty
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
```
```bash
atmos terraform apply aws-guardduty/delegated-administrator -s core-ue1-security
```
### Step 2: Deploy to Organization Management Account
This step requires root account access (such as with the `managers` profile).
```yaml
# core-ue1-root
components:
  terraform:
    aws-guardduty/root:
      metadata:
        component: guardduty
      backend:
        s3:
          role_arn: null
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        privileged: true
```
```bash
atmos terraform apply aws-guardduty/root -s core-ue1-root
```
### Step 3: Deploy Organization Settings
```yaml
# core-ue1-security
components:
  terraform:
    aws-guardduty/org-settings:
      metadata:
        component: guardduty
      vars:
        enabled: true
        delegated_administrator_account_name: core-security
        environment: ue1
        region: us-east-1
        admin_delegated: true
        # Protection features
        s3_protection_enabled: true
        kubernetes_audit_logs_enabled: true
        malware_protection_scan_ec2_ebs_volumes_enabled: true
        lambda_network_logs_enabled: true
        # Runtime Monitoring
        runtime_monitoring_enabled: true
        runtime_monitoring_additional_config:
          eks_addon_management_enabled: true
          ecs_fargate_agent_management_enabled: true
          ec2_agent_management_enabled: true
```
```bash
atmos terraform apply aws-guardduty/org-settings -s core-ue1-security
```
## Multi-Region Deployment
Repeat all 3 steps for each region:
```bash
# us-east-1
atmos terraform apply aws-guardduty/delegated-administrator -s core-ue1-security
atmos terraform apply aws-guardduty/root -s core-ue1-root
atmos terraform apply aws-guardduty/org-settings -s core-ue1-security
# us-west-2
atmos terraform apply aws-guardduty/delegated-administrator -s core-uw2-security
atmos terraform apply aws-guardduty/root -s core-uw2-root
atmos terraform apply aws-guardduty/org-settings -s core-uw2-security
```
## SNS Notifications
Enable SNS notifications for GuardDuty findings:
```yaml
components:
  terraform:
    aws-guardduty/delegated-administrator:
      vars:
        enabled: true
        create_sns_topic: true
        cloudwatch_enabled: true
```
This creates:
- KMS key with permissions for EventBridge, SNS, and SQS
- Encrypted SNS topic for findings
- SQS queue subscribed to the SNS topic
- CloudWatch Event Rules to route findings
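The event rules match on the standard GuardDuty EventBridge pattern (`source: aws.guardduty`, `detail-type: GuardDuty Finding`). A sketch in Python of that matching logic, with an added severity threshold as an illustrative refinement (the sample finding is made up):

```python
def matches_guardduty_finding(event: dict, min_severity: float = 4.0) -> bool:
    """Mimic the EventBridge pattern {"source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"]}, plus a severity floor."""
    return (
        event.get("source") == "aws.guardduty"
        and event.get("detail-type") == "GuardDuty Finding"
        and event.get("detail", {}).get("severity", 0) >= min_severity
    )

finding = {
    "source": "aws.guardduty",
    "detail-type": "GuardDuty Finding",
    "detail": {"severity": 5.0, "type": "Recon:EC2/PortProbeUnprotectedPort"},
}
print(matches_guardduty_finding(finding))  # True
```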
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `admin_delegated` | Set to `true` after delegation is complete | `false` |
| `auto_enable_organization_members` | Auto-enroll members (`ALL`, `NEW`, `NONE`) | `NEW` |
| `s3_protection_enabled` | Enable S3 data event monitoring | `true` |
| `kubernetes_audit_logs_enabled` | Enable EKS audit log monitoring | `false` |
| `malware_protection_scan_ec2_ebs_volumes_enabled` | Enable EBS malware scanning | `false` |
| `lambda_network_logs_enabled` | Enable Lambda network monitoring | `false` |
| `runtime_monitoring_enabled` | Enable runtime monitoring | `false` |
## See Also
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Aggregates GuardDuty findings
- [AWS CloudTrail](/layers/security-and-compliance/aws-cloudtrail/) - Provides API logs analyzed by GuardDuty
- [Enable GuardDuty for EKS](/layers/security-and-compliance/tutorials/enable-guardduty-for-eks-protection/) - EKS protection tutorial
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS GuardDuty Documentation](https://docs.aws.amazon.com/guardduty/)
- [aws-guardduty Component](https://github.com/cloudposse-terraform-components/aws-guardduty)
- [GuardDuty Protection Features](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-features-activation-model.html)
- [Runtime Monitoring](https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring.html)
---
## AWS Inspector 2
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
AWS Inspector 2 provides automated vulnerability scanning for EC2 instances, container images in ECR, and Lambda
functions across your AWS Organization using machine learning and pattern matching.
## Overview
AWS Inspector 2 provides:
- **EC2 Scanning**: Vulnerability assessment of EC2 instances using SSM Agent
- **ECR Scanning**: Container image scanning for vulnerabilities in your registries
- **Lambda Scanning**: Function code and dependency vulnerability detection
- **Continuous Monitoring**: Real-time vulnerability detection as CVEs are published
- **Multi-account Support**: Centralized management across your organization
## Scan Types
| Scan Type | Description |
|-----------|-------------|
| **EC2** | Scans for software vulnerabilities and network reachability issues |
| **ECR** | Scans container images for OS and programming language package vulnerabilities |
| **Lambda** | Scans function code and dependencies for vulnerabilities |
## Architecture
```mermaid
flowchart LR
subgraph root["Root Account"]
step1["STEP 1: Delegate"]
end
subgraph security["Security Account"]
step2["STEP 2: Configure Org"]
dashboard["Inspector Dashboard"]
end
subgraph members["Member Accounts"]
member["Auto-enrolled"]
end
root -->|"Delegation"| security
members -->|"Findings"| dashboard
```
## Deployment
Inspector uses a **2-step delegated administrator** deployment model.
### Step 1: Deploy to Organization Management Account
This step requires root account access (such as with the `managers` profile).
```yaml
# core-ue1-root
components:
terraform:
aws-inspector2/root:
metadata:
component: aws-inspector2
backend:
s3:
role_arn: null
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
```
```bash
atmos terraform apply aws-inspector2/root -s core-ue1-root
```
### Step 2: Deploy Organization Settings
```yaml
# core-ue1-security
components:
terraform:
aws-inspector2/org-settings:
metadata:
component: aws-inspector2
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
admin_delegated: true
# Scan types to enable
auto_enable_ec2: true
auto_enable_ecr: true
auto_enable_lambda: true
```
```bash
atmos terraform apply aws-inspector2/org-settings -s core-ue1-security
```
## Multi-Region Deployment
Repeat both steps for each region:
```bash
# us-east-1
atmos terraform apply aws-inspector2/root -s core-ue1-root
atmos terraform apply aws-inspector2/org-settings -s core-ue1-security
# us-west-2
atmos terraform apply aws-inspector2/root -s core-uw2-root
atmos terraform apply aws-inspector2/org-settings -s core-uw2-security
```
## Scan Configuration
Configure which resource types to scan:
```yaml
components:
terraform:
aws-inspector2/org-settings:
vars:
enabled: true
admin_delegated: true
# Enable/disable specific scan types
auto_enable_ec2: true
auto_enable_ecr: true
auto_enable_lambda: true
```
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `admin_delegated` | Set to `true` after delegation is complete | `false` |
| `auto_enable_ec2` | Auto-enable EC2 scanning for new members | `true` |
| `auto_enable_ecr` | Auto-enable ECR scanning for new members | `true` |
| `auto_enable_lambda` | Auto-enable Lambda scanning for new members | `true` |
| `member_association_excludes` | List of accounts to exclude from scanning | `[]` |
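For example, accounts could be excluded from scanning via `member_association_excludes` (a sketch; the account name is a placeholder):
```yaml
components:
  terraform:
    aws-inspector2/org-settings:
      vars:
        enabled: true
        admin_delegated: true
        # Hypothetical example: skip sandbox accounts to reduce noise and cost
        member_association_excludes:
          - core-sandbox
```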
## Security Hub Integration
Inspector automatically sends findings to Security Hub when both services are enabled. No additional configuration required.
## See Also
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Aggregates Inspector findings
- [AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) - Complementary threat detection
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS Inspector 2 Documentation](https://docs.aws.amazon.com/inspector/)
- [aws-inspector2 Component](https://github.com/cloudposse-terraform-components/aws-inspector2)
- [Inspector Scanning Types](https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html)
- [Inspector Findings](https://docs.aws.amazon.com/inspector/latest/user/findings.html)
---
## AWS Macie
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
AWS Macie is a data security service that discovers sensitive data in Amazon S3 using machine learning and pattern
matching, providing visibility into data security risks and enabling automated protection.
## Overview
Macie provides:
- **Sensitive Data Discovery**: Automatically discovers PII, financial data, credentials, and other sensitive information
- **S3 Bucket Inventory**: Comprehensive inventory of S3 buckets with security and access control evaluation
- **Policy Findings**: Detects security issues like publicly accessible buckets, disabled encryption, external sharing
- **Sensitive Data Findings**: Reports discovered sensitive data including location and data type
- **Multi-account Coverage**: Monitors S3 data across all accounts in the AWS Organization
## Key Features
| Feature | Description |
|---------|-------------|
| **Data Discovery** | ML-based detection of PII, PHI, financial data, and credentials |
| **Bucket Monitoring** | Continuous evaluation of bucket security posture |
| **Custom Identifiers** | Define custom patterns for sensitive data detection |
| **Security Hub Integration** | Findings published to AWS Security Hub |
| **EventBridge Integration** | Findings published to EventBridge for automation |
## Architecture
```mermaid
flowchart LR
subgraph root["Root Account"]
step2["STEP 2: Delegate"]
end
subgraph security["Security Account"]
step1["STEP 1: Create Macie"]
step3["STEP 3: Configure Org"]
dashboard["Macie Dashboard"]
end
subgraph members["Member Accounts"]
member["Auto-enabled"]
end
root -->|"Delegation"| security
members -->|"S3 Findings"| dashboard
```
## Deployment
Macie uses a **3-step delegated administrator** deployment model.
### Step 1: Deploy to Delegated Administrator Account
```yaml
# core-ue1-security
components:
terraform:
aws-macie/delegated-administrator:
metadata:
component: macie
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
# Not yet delegated - creates Macie account only
admin_delegated: false
```
```bash
atmos terraform apply aws-macie/delegated-administrator -s core-ue1-security
```
### Step 2: Deploy to Organization Management Account
This step requires root account access (such as with the `managers` profile).
```yaml
# core-ue1-root
components:
terraform:
aws-macie/root:
metadata:
component: macie
backend:
s3:
role_arn: null
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
privileged: true
```
```bash
atmos terraform apply aws-macie/root -s core-ue1-root
```
### Step 3: Deploy Organization Settings
```yaml
# core-ue1-security
components:
terraform:
aws-macie/org-settings:
metadata:
component: macie
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
admin_delegated: true
finding_publishing_frequency: FIFTEEN_MINUTES
```
```bash
atmos terraform apply aws-macie/org-settings -s core-ue1-security
```
## Multi-Region Deployment
Macie is a regional service. Deploy to each region where you have S3 buckets:
```bash
# us-east-1
atmos terraform apply aws-macie/delegated-administrator -s core-ue1-security
atmos terraform apply aws-macie/root -s core-ue1-root
atmos terraform apply aws-macie/org-settings -s core-ue1-security
# us-west-2
atmos terraform apply aws-macie/delegated-administrator -s core-uw2-security
atmos terraform apply aws-macie/root -s core-uw2-root
atmos terraform apply aws-macie/org-settings -s core-uw2-security
```
## Finding Publishing Frequency
Configure how often Macie publishes findings to Security Hub and EventBridge:
| Value | Description |
|-------|-------------|
| `FIFTEEN_MINUTES` | Publish every 15 minutes (default, recommended) |
| `ONE_HOUR` | Publish every hour |
| `SIX_HOURS` | Publish every 6 hours |
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `admin_delegated` | Set to `true` after delegation is complete | `false` |
| `finding_publishing_frequency` | How often to publish findings | `FIFTEEN_MINUTES` |
| `member_accounts` | List of member account names to enable | `[]` |
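As a sketch using the variables above, the publishing frequency could be relaxed in lower-noise environments (values from the frequency table):
```yaml
components:
  terraform:
    aws-macie/org-settings:
      vars:
        enabled: true
        admin_delegated: true
        # Publish less frequently than the 15-minute default
        finding_publishing_frequency: ONE_HOUR
```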
## Sensitive Data Types
Macie can detect:
- **PII**: Names, addresses, phone numbers, SSN, passport numbers
- **Financial**: Credit card numbers, bank account numbers
- **Health**: PHI, medical record numbers
- **Credentials**: API keys, passwords, private keys
- **Custom**: User-defined patterns using regex
## See Also
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Aggregates Macie findings
- [AWS Config](/layers/security-and-compliance/aws-config/) - Monitors S3 bucket configurations
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS Macie Documentation](https://docs.aws.amazon.com/macie/)
- [aws-macie Component](https://github.com/cloudposse-terraform-components/aws-macie)
- [Macie Discovery Jobs](https://docs.aws.amazon.com/macie/latest/user/discovery-jobs.html)
- [Custom Data Identifiers](https://docs.aws.amazon.com/macie/latest/user/custom-data-identifiers.html)
---
## AWS Security Hub
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
AWS Security Hub provides a centralized dashboard for aggregating, organizing, and prioritizing security findings
from AWS services and third-party tools across your organization.
## Overview
Security Hub provides:
- **Centralized Security Management**: Unified dashboard for security findings from multiple accounts and regions
- **Product Subscriptions**: Integration with GuardDuty, Inspector, Macie, Config, and Access Analyzer
- **Compliance Standards**: CIS AWS Foundations, PCI DSS, AWS Foundational Security Best Practices
- **Finding Aggregation**: Cross-region aggregation for centralized visibility
- **Automated Remediation**: EventBridge integration for automated response
## Key Features
| Feature | Description |
|---------|-------------|
| **Product Subscriptions** | Automatically receive findings from AWS security services |
| **Security Standards** | Compliance checks against industry frameworks |
| **Custom Insights** | Create custom views of security data |
| **Finding Aggregation** | Aggregate findings from all regions into one |
| **SNS Notifications** | Alert on new findings via SNS |
## Product Subscriptions
Security Hub integrates with these AWS services:
| Product | Default | Description |
|---------|---------|-------------|
| GuardDuty | `true` | Threat detection findings |
| Inspector | `true` | Vulnerability scanning findings |
| Macie | `true` | Sensitive data discovery findings |
| Config | `true` | Configuration compliance findings |
| Access Analyzer | `true` | External access findings |
| Firewall Manager | `false` | Firewall policy compliance |
## Architecture
```mermaid
flowchart LR
subgraph root["Root Account"]
step2["STEP 2: Delegate"]
end
subgraph security["Security Account"]
step1["STEP 1: Enable"]
step3["STEP 3: Configure"]
dashboard["Security Hub Dashboard"]
end
subgraph members["Member Accounts"]
member["Auto-enrolled"]
end
root -->|"Delegation"| security
members -->|"Findings"| dashboard
```
## Deployment
Security Hub uses a **3-step delegated administrator** deployment model.
### Step 1: Deploy to Delegated Administrator Account
```yaml
# core-ue1-security
components:
terraform:
aws-security-hub/delegated-administrator:
metadata:
component: security-hub
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
# Product subscriptions for AWS security service integrations
product_subscriptions:
guardduty: true
inspector: true
macie: true
config: true
access_analyzer: true
firewall_manager: false
```
```bash
atmos terraform apply aws-security-hub/delegated-administrator -s core-ue1-security
```
### Step 2: Deploy to Organization Management Account
This step requires root account access (such as with the `managers` profile).
```yaml
# core-ue1-root
components:
terraform:
aws-security-hub/root:
metadata:
component: security-hub
backend:
s3:
role_arn: null
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
privileged: true
```
```bash
atmos terraform apply aws-security-hub/root -s core-ue1-root
```
### Step 3: Deploy Organization Settings
```yaml
# core-ue1-security
components:
terraform:
aws-security-hub/org-settings:
metadata:
component: security-hub
vars:
enabled: true
delegated_administrator_account_name: core-security
environment: ue1
region: us-east-1
admin_delegated: true
```
```bash
atmos terraform apply aws-security-hub/org-settings -s core-ue1-security
```
## Compliance Standards
Enable security standards for compliance monitoring:
```yaml
components:
terraform:
aws-security-hub/delegated-administrator:
vars:
enabled_standards:
- standards/aws-foundational-security-best-practices/v/1.0.0
- standards/cis-aws-foundations-benchmark/v/1.4.0
# Optional additional standards:
# - standards/pci-dss/v/3.2.1
```
## Finding Aggregation
Enable cross-region finding aggregation:
```yaml
components:
terraform:
aws-security-hub/delegated-administrator:
vars:
finding_aggregator_enabled: true
finding_aggregator_linking_mode: ALL_REGIONS
# Or aggregate from specific regions:
# finding_aggregator_linking_mode: SPECIFIED_REGIONS
# finding_aggregator_regions:
# - us-east-1
# - us-west-2
```
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `admin_delegated` | Set to `true` after delegation | `false` |
| `auto_enable_organization_members` | Auto-enroll new members | `true` |
| `product_subscriptions` | Map of product subscription settings | See above |
| `enabled_standards` | List of compliance standards to enable | `[]` |
| `finding_aggregator_enabled` | Enable cross-region aggregation | `false` |
| `create_sns_topic` | Create SNS topic for notifications | `false` |
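For example, notifications on new findings could be enabled by combining `create_sns_topic` with the defaults above (a sketch using the variables from the table):
```yaml
components:
  terraform:
    aws-security-hub/delegated-administrator:
      vars:
        enabled: true
        # Create an SNS topic so new findings can trigger alerts
        create_sns_topic: true
        auto_enable_organization_members: true
```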
## Verification
After deployment, verify product subscriptions:
```bash
# Via Terraform output
atmos terraform output aws-security-hub/delegated-administrator -s core-ue1-security
# Via AWS CLI
aws securityhub list-enabled-products-for-import --region us-east-1
```
## See Also
- [AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) - Threat detection findings source
- [AWS Inspector 2](/layers/security-and-compliance/aws-inspector2/) - Vulnerability scanning findings source
- [AWS Macie](/layers/security-and-compliance/aws-macie/) - Sensitive data findings source
- [AWS Config](/layers/security-and-compliance/aws-config/) - Configuration compliance findings source
- [AWS Access Analyzer](/layers/security-and-compliance/aws-access-analyzer/) - External access findings source
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS Security Hub Documentation](https://docs.aws.amazon.com/securityhub/)
- [aws-security-hub Component](https://github.com/cloudposse-terraform-components/aws-security-hub)
- [Security Hub Product Integrations](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-providers.html)
- [Security Hub Standards](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards.html)
---
## AWS Shield Advanced
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
AWS Shield Advanced provides enhanced DDoS protection for your AWS resources, including ALBs, CloudFront distributions,
Route53 hosted zones, and Elastic IPs, with access to the AWS DDoS Response Team (DRT).
## Overview
AWS Shield has two tiers:
| Feature | Shield Standard | Shield Advanced |
|---------|-----------------|-----------------|
| **Cost** | Free (included with AWS) | $3,000/month per organization |
| **Protection** | Layer 3/4 (network/transport) | Layer 3/4/7 (includes application layer) |
| **DRT Access** | No | Yes (24/7 DDoS Response Team) |
| **Cost Protection** | No | Yes (credits for DDoS-related scaling) |
| **Advanced Metrics** | No | Yes (CloudWatch metrics) |
| **WAF Integration** | Basic | Advanced (custom rules during attacks) |
## Protected Resources
Shield Advanced protects:
| Resource Type | Stack Level | Description |
|---------------|-------------|-------------|
| **Route53 Hosted Zones** | Global | DNS infrastructure protection |
| **CloudFront Distributions** | Global | CDN and web application protection |
| **Application Load Balancers** | Regional | Application endpoint protection |
| **Elastic IPs** | Regional | NAT Gateway and EC2 protection |
## Architecture
```mermaid
flowchart LR
subgraph subscription["Shield Subscription"]
sub_info["$3,000/month"]
end
subgraph global["Global Resources"]
route53["Route53"]
cloudfront["CloudFront"]
end
subgraph regional["Regional Resources"]
albs["ALBs"]
eips["Elastic IPs"]
end
subscription -->|"Protects"| global
subscription -->|"Protects"| regional
```
## Deployment
Shield Advanced uses a **per-resource** deployment model (no delegated administrator pattern).
### Prerequisites
Shield Advanced subscription must be activated before deploying this component.
```bash
# Subscribe via AWS CLI
aws shield create-subscription
# Or subscribe via AWS Console:
# AWS Shield → Getting started → Subscribe to Shield Advanced
```
### Global Resources Configuration
```yaml
# plat-gbl-prod
components:
terraform:
aws-shield:
metadata:
component: aws-shield
vars:
enabled: true
# Route53 hosted zones
route53_zone_names:
- example.com
- api.example.com
# CloudFront distributions
cloudfront_distribution_ids:
- E1ABCDEFG12345
- E2BCDEFGH23456
```
```bash
atmos terraform apply aws-shield -s plat-gbl-prod
```
### Regional Resources Configuration
```yaml
# plat-ue1-prod
components:
terraform:
aws-shield:
metadata:
component: aws-shield
vars:
enabled: true
region: us-east-1
# Application Load Balancers
alb_protection_enabled: true
alb_names:
- k8s-common-2c5f23ff99
- api-gateway-alb
# Elastic IPs (NAT Gateways, EC2 instances)
eips:
- 3.214.128.240 # NAT Gateway AZ-a
- 35.172.208.150 # NAT Gateway AZ-b
```
```bash
atmos terraform apply aws-shield -s plat-ue1-prod
```
### Complete Example (All Resources)
```yaml
components:
terraform:
aws-shield:
metadata:
component: aws-shield
vars:
enabled: true
# Global resources
route53_zone_names:
- example.com
- api.example.com
cloudfront_distribution_ids:
- E1ABCDEFG12345
# Regional resources
alb_protection_enabled: true
alb_names:
- k8s-common-2c5f23ff99
eips:
- 3.214.128.240
- 35.172.208.150
```
## Auto-Discovery from EKS
When `alb_protection_enabled: true` and `alb_names` is empty, the component auto-discovers ALBs from the EKS ALB controller:
```yaml
components:
terraform:
aws-shield:
vars:
enabled: true
alb_protection_enabled: true
# alb_names is empty - auto-discovers from EKS ALB controller
```
## Key Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `route53_zone_names` | List of Route53 hosted zone names to protect | `[]` |
| `cloudfront_distribution_ids` | List of CloudFront distribution IDs to protect | `[]` |
| `alb_protection_enabled` | Enable ALB protection | `false` |
| `alb_names` | List of ALB names to protect | `[]` |
| `eips` | List of Elastic IPs to protect | `[]` |
## Finding Resources
Use these AWS CLI commands to find resource identifiers:
```bash
# List ALB names
aws elbv2 describe-load-balancers --query 'LoadBalancers[*].LoadBalancerName' --output table
# List Elastic IPs
aws ec2 describe-addresses --query 'Addresses[*].[PublicIp,AllocationId]' --output table
# List Route53 hosted zones
aws route53 list-hosted-zones --query 'HostedZones[*].[Name,Id]' --output table
# List CloudFront distributions
aws cloudfront list-distributions --query 'DistributionList.Items[*].[Id,DomainName]' --output table
```
## Verifying Protection
```bash
# List all protected resources
aws shield list-protections --query 'Protections[*].[Name,ResourceArn]' --output table
# Check subscription status
aws shield describe-subscription
```
## See Also
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Centralized security dashboard
- [AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) - Complementary threat detection
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS Shield Documentation](https://docs.aws.amazon.com/shield/)
- [aws-shield Component](https://github.com/cloudposse-terraform-components/aws-shield)
- [Subscribing to Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/enable-ddos-prem.html)
- [DDoS Response Team Support](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-srt-support.html)
---
## Decide on Infrastructure & Software Static Analysis Tools
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Infrastructure Considerations (Terraform, Docker)
- Checkov (open source tool by Bridgecrew; works with GitHub Actions)
- Bridgecrew (managed service, acquired by Palo Alto Networks)
- tflint
- tfsec
- conftest
## Software Static Analysis
- Sonatype
- SonarQube
- Snyk
- WhiteSource
- JFrog
---
## Decide on Kubernetes Platform Compliance Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Decide on a strategy for CIS Compliance/hardening on EKS.
## Considerations
- [https://github.com/aquasecurity/kube-bench](https://github.com/aquasecurity/kube-bench) (integrates with Security Hub [https://aws.amazon.com/about-aws/whats-new/2020/12/aws-security-hub-adds-open-source-tool-integration-with-kube-bench-and-cloud-custodian/](https://aws.amazon.com/about-aws/whats-new/2020/12/aws-security-hub-adds-open-source-tool-integration-with-kube-bench-and-cloud-custodian/))
- [https://aws.amazon.com/about-aws/whats-new/2022/01/amazon-guardduty-elastic-kubernetes-service-clusters/](https://aws.amazon.com/about-aws/whats-new/2022/01/amazon-guardduty-elastic-kubernetes-service-clusters/)
- [https://snyk.io/](https://snyk.io/)
- [https://www.rapid7.com/](https://www.rapid7.com/)
---
## Decide on Log Retention and Durability Architecture
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Not all logs are created equal. Some may contain PHI (Protected Health Information) or CHD (Cardholder Data), while others are simply HTTP request logs. Depending on the regional jurisdiction (e.g. Europe), there can be additional requirements (e.g. [GDPR on AWS](https://docs.aws.amazon.com/whitepapers/latest/navigating-gdpr-compliance/monitoring-and-logging.html)).
We need to identify the log destinations to discuss how to handle them.
### Recommendations
- Use 90 days unless the compliance framework stipulates otherwise (e.g. PCI DSS)
- Use 60 days in the S3 Standard storage tier, then transition to Glacier for the final 30 days
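This recommendation maps to an S3 lifecycle configuration along these lines (a sketch that could be applied with `aws s3api put-bucket-lifecycle-configuration`; the rule ID is a placeholder):
```json
{
  "Rules": [
    {
      "ID": "log-retention-90-days",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 60, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 90 }
    }
  ]
}
```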
## Considerations
### Which Logs are in Scope?
- **CloudTrail** (AWS API logs)
- **CloudWatch** Logs (platform logs)
- **Datadog** Logs (logs stored in Datadog)
- **Web Access Logs** (e.g. ALBs)
- **WAF** Logs
- **Shield** Logs
- **VPC flow logs** (these are huge - every packet that flows through the VPC)
- **Application logs** (the events emitted from your applications)
### How are logs handled?
For everything in scope, we need to address:
- Where should logs be aggregated (e.g. S3 bucket in audit account, datadog)?
- Do logs need to be replicated to any backup region?
- Should logs be versioned?
- Which logs should be forwarded to Datadog (e.g. ALBs, flow logs, CloudTrail)?
- How long is the log data online (e.g. easily accessible vs cold storage like [Glacier](https://aws.amazon.com/s3/storage-classes/glacier/))?
- Does any of the data contain PHI, PII, CHD, etc.?
- Are there any data locality or data residency restrictions on logs? E.g. can logs cross regional boundaries, or must they stay in the EU for GDPR compliance?
- What’s the tolerance for latency when accessing log events (e.g. can it be hours, or does it need to be within seconds)?
### References
- [https://docs.aws.amazon.com/whitepapers/latest/navigating-gdpr-compliance/monitoring-and-logging.html](https://docs.aws.amazon.com/whitepapers/latest/navigating-gdpr-compliance/monitoring-and-logging.html)
- [https://www.pcidssguide.com/what-are-the-pci-dss-log-retention-requirements/](https://www.pcidssguide.com/what-are-the-pci-dss-log-retention-requirements/)
- [https://aws.amazon.com/s3/storage-classes/glacier/](https://aws.amazon.com/s3/storage-classes/glacier/)
---
## Decide on Strategy for Hardened Base AMIs
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
When critical CVEs come out, they should be promptly remediated. If we rely strictly on upstream AMIs, we have little control over the time to remediate as we’re dependent on third parties to remediate the vulnerabilities and their failure to do so will jeopardize our ability to meet commitments.
Many compliance frameworks require host-level hardening and the ability to demonstrate something along the lines of [https://www.cisecurity.org/controls/continuous-vulnerability-management/](https://www.cisecurity.org/controls/continuous-vulnerability-management/). Laws, regulations, standards, or contractual agreements may require an even higher priority or shorter timeline for remediation. For example, to comply with the [Payment Card Industry Data Security Standard (PCI DSS)](https://www.pcisecuritystandards.org/document_library), vulnerabilities in any PCI environment:
- CVSS scores of 4 or higher must be remediated within 30 days of notification.
- CVSS scores of less than 4 must be remediated within two to three months.
See [Decide on Technical Benchmark Framework](/layers/security-and-compliance/design-decisions/decide-on-technical-benchmark-framework)
## Solution
We need a solution that covers both EKS (for customers using it) and for standalone EC2 instances where applicable. Additionally, regardless of the solution, we'll also need to instrument the process for rolling out the changes. See [How to Enable Spacelift Drift Detection](/resources/deprecated/spacelift/tutorials/how-to-enable-spacelift-drift-detection) for a nice way to automatically update AMIs using data sources.
### Use CIS or Not?
> CIS benchmarks are internationally recognized as security standards for defending IT systems and data against cyberattacks. Used by thousands of businesses, they offer prescriptive guidance for establishing a secure baseline configuration. CIS is the most recognized industry standard for hardening OS images; however, it has not yet published a CIS standard for container-optimized OSes. The traditional CIS benchmarks target full-blown operating systems with a different set of concerns that do not apply to a container-optimized OS. What CIS has defined are [the best practices for hardening EKS as a platform](https://aws.amazon.com/de/blogs/containers/introducing-cis-amazon-eks-benchmark/), and that standard is covered by `kube-bench`. So by running `kube-bench` on a cluster, we would be able to validate whether Bottlerocket meets the CIS standard for nodes managed by Kubernetes. While this is not the same as "certification", it might be good enough for benchmark compliance.
### Use Existing Hardened Image
- AWS does not provide turnkey CIS-compliant base AMIs (third-party vendors only).
- Bottlerocket is more secure but is still not _technically_ CIS-compliant out of the box
[https://github.com/bottlerocket-os/bottlerocket/issues/1297](https://github.com/bottlerocket-os/bottlerocket/issues/1297)
- [https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami-bottlerocket.html](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami-bottlerocket.html)
- [https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-eks-nodes-groups-bottlerocket/](https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-eks-nodes-groups-bottlerocket/)
- [CIS provides marketplace images](https://aws.amazon.com/marketplace/seller-profile?id=dfa1e6a8-0b7b-4d35-a59c-ce272caee4fc), but these add $0.02/hour.
### DIY Hardened Images
- Build our own AMIs based on something like Bottlerocket or Amazon Linux and do our own hardening.
- Any hardening we do would require implementing Packer configurations and pipelines to manage it.
- Create a GitHub Actions pipeline to build Packer images and distribute them to the enabled regions
### Cloud-Init Patching
With `cloud-init` we can patch the system at runtime. This has the benefit of not requiring us to manage a complicated factory for building AMIs across multiple regions, but it violates the principle of immutable infrastructure.
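For illustration, a minimal `cloud-config` that applies package updates at first boot might look like this (a sketch; whether a reboot is acceptable in your environment is an assumption to verify):
```yaml
#cloud-config
# Refresh the package index and upgrade all installed packages on first boot
package_update: true
package_upgrade: true
# Reboot if the upgrade requires it (e.g., a new kernel)
package_reboot_if_required: true
```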
### AWS Systems Manager Patch Manager
AWS Systems Manager Patch Manager can apply patch baselines to running systems based on policies, but this also violates the principle of immutable infrastructure.
---
## Decide on a Technical Benchmark Framework for Compliance
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Benchmark Considerations
- SOC2 Type II
- HIPAA
- HITRUST
- PCI/DSS
- CIS
- NIST
- ISO27001
- AWS Well-Architected
## SOC2 Considerations
SOC2 defines a set of high-level expectations, but it’s up to the responsible party (e.g. Customer) to assert what controls are in place for each pillar.
1. **Logical and physical access controls**
2. **System operations**
3. **Change management**
4. **Risk mitigation**
Using a combination of one or more compliance standards such as CIS, HITRUST, NIST, or ISO27001 is the typical approach. Organizationally, this is a decision that has both technical and procedural impacts.
The Technical Benchmark Framework should satisfy the vast majority of requirements for both HIPAA and SOC2, which means most likely selecting more than one.
### Questions
- Has the team already started mapping out any of the SOC2 controls that would influence technical controls or configurations?
---
## Decide on WAF Requirements/Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
ALBs have very limited capabilities to fend off attacks by themselves. Using Security Groups is not a scalable solution. [The number of inbound/outbound rules is limited to a max of 120 (60 ea)](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-security-groups). To thwart any sort of Denial of Service (DoS) attack, more tools are required. Moreover, not all attacks are as easily identified as DoS attacks. Other threat vectors include SQL injection, XSS, etc. The older your applications, the more external dependencies you have, the greater the attack surface area.
## Solution
Deploy a Web Application Firewall (WAF) capable of performing Layer-7 inspection and mitigation.
:::info
Our recommendation is to deploy the AWS WAF with the AWS Managed Rules for the [OWASP Top 10](https://owasp.org/www-project-top-ten/).
[https://aws.amazon.com/marketplace/solutions/security/waf-managed-rules](https://aws.amazon.com/marketplace/solutions/security/waf-managed-rules)
:::
## Considerations
- ALB/NLB won’t provide TLS in-transit with Nitro instances
- AWS WAF only attaches to ALBs (among the ELB types, not NLBs)
- We need to terminate TLS at the Kubernetes ingress layer (e.g. with cert-manager and ZeroSSL) in order to deliver end-to-end encryption
- Cloudflare Argo Tunnel will work without exposing the cluster endpoints directly to the internet
Our recommendation is to use AWS WAF with ALB load balancers, use AWS Nitro instances for end-to-end encryption inside EKS, and use self-signed certs between the ALB and the pods.
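As a rough sketch, wiring this recommendation up with the Cloud Posse WAF component could look like the following Atmos stack excerpt. The variable names and rule structure follow cloudposse/terraform-aws-waf, but treat the exact names and priority values as assumptions to verify against the component's current inputs:

```yaml
# Hypothetical stack configuration for the waf component;
# verify variable names against cloudposse/terraform-aws-waf.
components:
  terraform:
    waf:
      vars:
        enabled: true
        name: waf
        default_action: allow
        managed_rule_group_statement_rules:
          # AWS-managed rule set covering common OWASP Top 10 categories
          - name: AWSManagedRulesCommonRuleSet
            priority: 1
            statement:
              name: AWSManagedRulesCommonRuleSet
              vendor_name: AWS
            visibility_config:
              cloudwatch_metrics_enabled: true
              metric_name: AWSManagedRulesCommonRuleSet
              sampled_requests_enabled: true
```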
## References
- [https://github.com/cloudposse/terraform-aws-waf](https://github.com/cloudposse/terraform-aws-waf)
---
## Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions for how you'll monitor for security and
compliance by leveraging the full suite of AWS security-oriented services.
---
## FAQ (Security and Compliance)
import Steps from '@site/src/components/Steps';
import TaskList from '@site/src/components/TaskList';
## Deployment Order Issues
### Q: What is the correct deployment order for security services?
**A:** Security services with delegated administrator patterns must be deployed in a specific order. See the
[Setup Guide](/layers/security-and-compliance/setup) for the complete deployment sequence.
**General pattern for 3-step services (GuardDuty, Security Hub, Macie):**
1. Deploy to security account first (`admin_delegated: false`)
1. Deploy to root account (`privileged: true`)
1. Deploy org settings to security account (`admin_delegated: true`)
**General pattern for 2-step services (Inspector, Access Analyzer):**
1. Deploy to root account first
1. Deploy org settings to security account
### Q: Why must I deploy to the security account before the root account?
**A:** For 3-step services (GuardDuty, Security Hub, Macie), the security account must have the service enabled
before it can be designated as a delegated administrator. Deploying to root first will fail because there's
nothing to delegate to.
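Expressed as stack configuration, the three steps map onto three component instances with different flags. This is an illustrative sketch — the `admin_delegated` and `privileged` variables come from the pattern above, but the account name is a placeholder:

```yaml
# Hypothetical stack excerpt illustrating the 3-step pattern for GuardDuty.
components:
  terraform:
    aws-guardduty/delegated-administrator: # Step 1: deploy to security account
      vars:
        admin_delegated: false
    aws-guardduty/root:                    # Step 2: deploy to root account
      vars:
        privileged: true
        delegated_administrator_account_name: core-security # placeholder
    aws-guardduty/org-settings:            # Step 3: deploy to security account
      vars:
        admin_delegated: true
```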
---
## Security Hub Errors
### Q: "Account is not subscribed to AWS Security Hub"
```text
Error: error disabling security hub control: InvalidAccessException: Account 123456789012
is not subscribed to AWS Security Hub
```
**A:** This error occurs when trying to configure Security Hub before it's fully enabled. Ensure you've completed
all three deployment steps:
1. `aws-security-hub/delegated-administrator` to security account
1. `aws-security-hub/root` to root account
1. `aws-security-hub/org-settings` to security account
If you skipped Step 3, the organization settings (including control configurations) won't work.
### Q: "Account is not an administrator for this organization"
```text
Error: error updating security hub administrator account settings: InvalidAccessException:
Account 123456789012 is not an administrator for this organization
```
**A:** This means the delegation from root to security hasn't been completed. Check:
1. Verify you deployed `aws-security-hub/root` to the root account with `privileged: true`
1. Ensure the `delegated_administrator_account_name` matches your security account name
1. Re-deploy the root component if needed
### Q: "Organization master must first enable SecurityHub"
```text
Error: error designating security hub administrator account members:
"Operation failed because your organization master must first enable SecurityHub to be added as a member"
```
**A:** Security Hub service access must be enabled in AWS Organizations. Add `securityhub.amazonaws.com` to
`aws_service_access_principals` in your `account` component:
```yaml
# In your account component configuration
components:
terraform:
account:
vars:
aws_service_access_principals:
- securityhub.amazonaws.com
- guardduty.amazonaws.com
- macie.amazonaws.com
# ... other principals
```
---
## GuardDuty Errors
### Q: "The input detectorId is not owned by the current account"
```text
Error: error designating guardduty administrator account members: BadRequestException:
The request is rejected because the input detectorId is not owned by the current account.
```
**A:** This typically indicates a provider configuration issue. The `awsutils` provider must be configured to
assume the correct role. Check your provider configuration:
```hcl
provider "awsutils" {
region = var.region
assume_role {
role_arn = module.iam_roles.terraform_role_arn
}
}
```
Also verify you're deploying to the correct account (security, not root) for Step 3.
---
## AWS Config Errors
### Q: "Blank spaces are not acceptable for input parameter: policyARN"
```text
Error: Error creating AWSConfig rule: InvalidParameterValueException:
Blank spaces are not acceptable for input parameter: policyARN.
```
**A:** This error typically occurs when the `support` IAM role doesn't exist in the account. The CIS conformance
pack requires this role. Deploy `aws-team-roles` with the `support` role enabled before deploying AWS Config
conformance packs.
### Q: Organization conformance packs fail to deploy
**A:** Organization conformance packs require:
1. All member accounts must have configuration recorders already set up
1. Deploy `aws-config` to **all member accounts first**, then deploy to root account last
```bash
# Deploy to members first
atmos terraform apply aws-config -s core-ue1-audit
atmos terraform apply aws-config -s core-ue1-security
atmos terraform apply aws-config -s plat-ue1-dev
# Deploy to root last (with organization conformance packs)
atmos terraform apply aws-config -s core-ue1-root
```
If the conformance packs still fail with the `policyARN` error described above, the `support` role may not be deployed into the given account. Check your IAM role configuration for the `support` role.
---
## Service Principal Issues
### Q: Which service principals are required for security services?
**A:** Add these to `aws_service_access_principals` in your `account` component:
| Service | Service Principal |
|---------|-------------------|
| AWS Config | `config.amazonaws.com`, `config-multiaccountsetup.amazonaws.com` |
| CloudTrail | `cloudtrail.amazonaws.com` |
| GuardDuty | `guardduty.amazonaws.com` |
| Security Hub | `securityhub.amazonaws.com` |
| Inspector | `inspector2.amazonaws.com` |
| Macie | `macie.amazonaws.com` |
| Access Analyzer | `access-analyzer.amazonaws.com` |
---
## Multi-Region Deployment
### Q: Do I need to deploy security services to every region?
**A:** It depends on the service:
| Service | Regional? | Recommendation |
|---------|-----------|----------------|
| AWS Config | Yes | Deploy to all regions with resources |
| CloudTrail | No | Single organization trail covers all regions |
| GuardDuty | Yes | Deploy to all enabled regions |
| Security Hub | Yes | Deploy to primary region, enable aggregation |
| Inspector | Yes | Deploy to regions with EC2/Lambda/ECR |
| Macie | Yes | Deploy to regions with S3 buckets |
| Access Analyzer | Yes | Deploy to all regions |
### Q: How do I aggregate findings from multiple regions?
**A:** Security Hub supports cross-region finding aggregation. Enable it in your primary region:
```yaml
components:
terraform:
aws-security-hub/delegated-administrator/ue1:
vars:
finding_aggregator_enabled: true
finding_aggregator_linking_mode: ALL_REGIONS
```
---
## Shield Advanced
### Q: Shield Advanced deployment fails
**A:** Shield Advanced requires a manual subscription before deploying the component:
```bash
# Subscribe via AWS CLI (or use AWS Console)
aws shield create-subscription
```
The subscription is $3,000/month per organization and covers all accounts.
---
## IAM Inline Policy False Positives
### Q: Why are Service-Linked Roles flagged for inline policies?
**A:** The `IAM_NO_INLINE_POLICY_CHECK` AWS Config rule flags AWS Service-Linked Roles (SLRs) as NON_COMPLIANT.
This is a **known false positive** because SLRs must have inline policies by design.
Common SLRs that trigger this:
- `AWSServiceRoleForAmazonGuardDuty`
- `AWSServiceRoleForConfig`
- `AWSServiceRoleForSecurityHub`
- `AWSServiceRoleForAccessAnalyzer`
**Recommendation:** Document these as accepted false positives in your compliance documentation.
---
## Security and Compliance
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
Learn Cloud Posse's approach to deploying AWS security services and compliance monitoring across your AWS Organization.
We cover essential topics such as threat detection, vulnerability scanning, configuration compliance, sensitive data
discovery, and centralized security findings aggregation.
## The Problem
Ensuring your AWS Organization meets compliance benchmarks (e.g., SOC2, HIPAA, PCI/DSS, CIS) requires comprehensive
security monitoring across all accounts and regions. Your AWS accounts contain thousands of resources, making manual
auditing impossible. Each security service has its own deployment model, regional requirements, and configuration
complexity. Setting up this automation by hand is tedious, error-prone, and difficult to maintain at scale.
## Our Solution
Deploy a comprehensive set of Cloud Posse components to enable security monitoring and compliance across your entire
AWS Organization. Our approach uses the **delegated administrator** pattern, where a central `security` account manages
organization-wide security services, while logs are stored in an isolated `audit` account.
```mermaid
flowchart LR
subgraph root["Root Account"]
root_tasks["Delegates admin"]
end
subgraph security["Security Account"]
sec_services["GuardDuty, Security Hub<br/>Inspector, Macie, Access Analyzer"]
end
subgraph audit["Audit Account"]
audit_storage["CloudTrail logs<br/>Config snapshots"]
end
subgraph members["Member Accounts"]
member_info["Auto-enrolled"]
end
root -->|"Delegates"| security
members -->|"Findings"| security
members -->|"Logs"| audit
```
## Security Components Overview
Cloud Posse provides 9 Terraform components for comprehensive AWS security and compliance:
| Component | Purpose | Deployment Model |
|-----------|---------|------------------|
| [AWS Config](/layers/security-and-compliance/aws-config/) | Configuration compliance and resource inventory | Per-account + Org conformance packs |
| [AWS CloudTrail](/layers/security-and-compliance/aws-cloudtrail/) | API activity logging and audit trail | Organization trail |
| [AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) | Intelligent threat detection | 3-step delegated admin |
| [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) | Centralized security findings aggregation | 3-step delegated admin |
| [AWS Inspector 2](/layers/security-and-compliance/aws-inspector2/) | Automated vulnerability scanning | 2-step delegated admin |
| [Amazon Macie](/layers/security-and-compliance/aws-macie/) | Sensitive data discovery in S3 | 3-step delegated admin |
| [IAM Access Analyzer](/layers/security-and-compliance/aws-access-analyzer/) | External and unused access detection | 2-step delegated admin |
| [AWS Shield](/layers/security-and-compliance/aws-shield/) | DDoS protection | Per-resource |
| [AWS Audit Manager](/layers/security-and-compliance/aws-audit-manager/) | Compliance evidence collection | Single-step (root) |
### AWS Config
[**AWS Config**](/layers/security-and-compliance/aws-config/) provides configuration compliance monitoring and resource
inventory across your AWS Organization. It continuously evaluates resources against compliance rules (conformance packs)
and maintains a configuration history for auditing.
**Key Features:**
- Continuous configuration recording and compliance evaluation
- CMMC Level 2, CIS, and custom conformance packs
- Central aggregation in the security account
- Configuration snapshots stored in the audit account
### AWS CloudTrail
[**AWS CloudTrail**](/layers/security-and-compliance/aws-cloudtrail/) records API activity across your AWS Organization,
providing an audit trail for security analysis, compliance auditing, and operational troubleshooting.
**Key Features:**
- Organization-wide trail covering all accounts automatically
- Log file validation with cryptographic signatures
- CloudWatch Logs integration for real-time analysis
- Centralized storage in the audit account with lifecycle policies
### AWS GuardDuty
[**AWS GuardDuty**](/layers/security-and-compliance/aws-guardduty/) is an intelligent threat detection service that
continuously monitors for malicious activity and unauthorized behavior across your AWS accounts.
**Key Features:**
- ML-based threat detection for account compromise, instance compromise, and reconnaissance
- S3 data event protection and EKS audit log monitoring
- EBS malware scanning and Lambda network activity analysis
- Runtime monitoring with agent management for EKS, ECS, and EC2
### AWS Security Hub
[**AWS Security Hub**](/layers/security-and-compliance/aws-security-hub/) provides a centralized dashboard for
aggregating, organizing, and prioritizing security findings from AWS services and third-party tools.
**Key Features:**
- Aggregates findings from GuardDuty, Inspector, Macie, Config, and Access Analyzer
- Compliance checks against CIS, PCI DSS, AWS Foundational Security Best Practices
- Cross-region finding aggregation
- Custom insights and automated remediation via EventBridge
### AWS Inspector 2
[**AWS Inspector 2**](/layers/security-and-compliance/aws-inspector2/) provides automated vulnerability scanning for
EC2 instances, container images in ECR, and Lambda functions.
**Key Features:**
- Continuous CVE detection with risk-based prioritization
- EC2 scanning (requires SSM Agent), ECR image scanning, Lambda function scanning
- Network reachability analysis
- Automatic enablement for new organization members
### Amazon Macie
[**Amazon Macie**](/layers/security-and-compliance/aws-macie/) is a data security service that uses machine learning
to discover, classify, and protect sensitive data stored in Amazon S3.
**Key Features:**
- ML-based detection of PII, financial data, credentials, and other sensitive information
- Automated S3 bucket inventory with security posture assessment
- Policy findings for publicly accessible buckets and encryption issues
- Custom data identifiers with regex patterns
### IAM Access Analyzer
[**IAM Access Analyzer**](/layers/security-and-compliance/aws-access-analyzer/) helps identify resources shared with
external entities and detects unused access permissions.
**Key Features:**
- External access detection for S3, IAM, KMS, Lambda, SQS, Secrets Manager
- Unused access analysis for identifying overly permissive policies
- Policy validation before deployment
- Archive rules for known external access patterns
### AWS Shield
[**AWS Shield**](/layers/security-and-compliance/aws-shield/) provides DDoS protection for your applications.
Shield Advanced offers enhanced protection with 24/7 DDoS Response Team access.
**Key Features:**
- Layer 3/4/7 DDoS protection for ALBs, CloudFront, Elastic IPs, Route53
- DDoS cost protection (credits for scaling during attacks)
- 24/7 DDoS Response Team (DRT) access
- Advanced CloudWatch metrics and WAF integration
### AWS Audit Manager
[**AWS Audit Manager**](/layers/security-and-compliance/aws-audit-manager/) automates evidence collection for
compliance audits using prebuilt frameworks.
**Key Features:**
- Prebuilt frameworks for SOC 2, HIPAA, PCI DSS, GDPR, FedRAMP
- Automated evidence collection from AWS services
- Assessment reports for auditors
- Custom framework support
AWS Audit Manager has limited framework availability in GovCloud. Many prebuilt frameworks (HIPAA, PCI DSS, GDPR, SOC 2)
are not available in the GovCloud partition. Consider using AWS Config conformance packs as an alternative.
## Deployment Models
The components use different deployment patterns based on AWS service requirements:
### 3-Step Delegated Administrator
Used by: **GuardDuty**, **Security Hub**, **Macie**
```mermaid
flowchart LR
step1["Step 1: Security Account<br/>Create service account<br/>(admin_delegated: false)"]
step2["Step 2: Root Account<br/>Delegate administration<br/>(privileged: true)"]
step3["Step 3: Security Account<br/>Configure org settings<br/>(admin_delegated: true)"]
step1 --> step2 --> step3
```
### 2-Step Delegated Administrator
Used by: **Inspector**, **Access Analyzer**
```mermaid
flowchart LR
step1["Step 1: Root Account<br/>Delegate administration<br/>(privileged: true)"]
step2["Step 2: Security Account<br/>Configure org settings<br/>(admin_delegated: true)"]
step1 --> step2
```
### Per-Account Deployment
Used by: **Config**, **CloudTrail**
Deploy to each account with central aggregation in security/audit accounts.
### Per-Resource Deployment
Used by: **Shield**
Deploy to each account/resource that needs protection (not organization-wide).
## Integration Architecture
All security services integrate through AWS Security Hub for centralized visibility:
```mermaid
flowchart LR
cloudtrail["CloudTrail"]
config["Config"]
guardduty["GuardDuty"]
inspector["Inspector"]
macie["Macie"]
analyzer["Access Analyzer"]
subgraph audit["Audit Account"]
s3["S3 Storage"]
aggregator["Config Aggregator"]
end
subgraph security["Security Account"]
hub["Security Hub"]
end
cloudtrail --> s3
config -->|"Snapshots"| s3
config --> aggregator
guardduty -->|"Findings"| hub
inspector -->|"Findings"| hub
macie -->|"Findings"| hub
analyzer -->|"Findings"| hub
```
## Regional Deployment
Most AWS security services are regional, so you must deploy them to:
1. **All regions enabled by default** (cannot be disabled):
| | | | |
| -------------- | -------------- | --------- | --------- |
| ap-northeast-1 | ap-southeast-2 | eu-west-2 | us-west-1 |
| ap-northeast-2 | ca-central-1 | eu-west-3 | us-west-2 |
| ap-northeast-3 | eu-central-1 | sa-east-1 | |
| ap-south-1 | eu-north-1 | us-east-1 | |
| ap-southeast-1 | eu-west-1 | us-east-2 | |
1. **Any additional regions** you have opted into
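One way to organize this in Atmos is one region-suffixed component instance per enabled region, mirroring the `aws-security-hub/delegated-administrator/ue1` naming used elsewhere in this guide. This is a sketch under that assumption — adapt it to however your stacks scope regions:

```yaml
# Sketch: one component instance per enabled region, following the
# region-suffixed naming convention (ue1 = us-east-1, uw2 = us-west-2).
components:
  terraform:
    aws-config/ue1:
      vars:
        region: us-east-1
    aws-config/uw2:
      vars:
        region: us-west-2
    # ... repeat for every enabled region
```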
## Getting Started
1. Review the [Setup Guide](/layers/security-and-compliance/setup/) for step-by-step deployment instructions
1. Explore individual component documentation for detailed configuration options
1. Check the [Design Decisions](/layers/security-and-compliance/design-decisions/) for architectural considerations
1. Review the [FAQ](/layers/security-and-compliance/faq/) for common issues and solutions
:::caution Important
Follow the deployment steps carefully and in order. Incorrect deployment order may result in a condition that requires
manual cleanup across multiple regions in each of your accounts.
:::
---
## Setup Security and Compliance
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Admonition from '@theme/Admonition';
import Note from '@site/src/components/Note';
import TaskList from '@site/src/components/TaskList';
This guide walks you through deploying AWS security and compliance services across your AWS Organization.
Follow the steps in order to ensure proper configuration of all services.
- AWS Organizations is configured with your account structure
- Account baseline has been deployed (see [Deploy Accounts](/layers/accounts/deploy-accounts/))
- Root account access is available for root account deployments (such as with the `managers` profile)
## Deployment Order
The security components have dependencies and must be deployed in a specific order:
```mermaid
flowchart LR
subgraph phase1["Phase 1: Organization-Level Configuration"]
p1["Service Principals<br/>Config Bucket<br/>CloudTrail Bucket"]
end
subgraph phase2["Phase 2: Foundational Services"]
p2["CloudTrail<br/>Config"]
end
subgraph phase3["Phase 3: Threat Detection & Vulnerability Scanning"]
p3["GuardDuty<br/>Inspector<br/>Macie<br/>Access Analyzer"]
end
subgraph phase4["Phase 4: Aggregation & Protection"]
p4["Security Hub<br/>Shield<br/>Audit Manager"]
end
phase1 --> phase2 --> phase3 --> phase4
```
---
## Phase 1: Organization-Level Configuration
These steps are required once for the entire organization.
### Vendor Components
Vendor all security and compliance components:
```bash
atmos vendor pull --component aws-config
atmos vendor pull --component aws-cloudtrail
atmos vendor pull --component aws-guardduty
atmos vendor pull --component aws-security-hub
atmos vendor pull --component aws-inspector2
atmos vendor pull --component aws-macie
atmos vendor pull --component aws-access-analyzer
atmos vendor pull --component aws-shield
atmos vendor pull --component aws-audit-manager
```
### Add Service Principals
Add the following service principals to the `aws_service_access_principals` variable of the `account` component
in `stacks/catalog/account.yaml`:
```yaml
# stacks/catalog/account.yaml
components:
terraform:
account:
vars:
aws_service_access_principals:
# Existing principals...
- access-analyzer.amazonaws.com
- cloudtrail.amazonaws.com
- config.amazonaws.com
- config-multiaccountsetup.amazonaws.com
- guardduty.amazonaws.com
- inspector2.amazonaws.com
- macie.amazonaws.com
- securityhub.amazonaws.com
```
This requires root account access (such as with the `managers` profile). Ensure the `plan` output
only modifies service principals.
```bash
atmos terraform plan account -s core-gbl-root
atmos terraform apply account -s core-gbl-root
```
### Deploy Config Bucket
Deploy the S3 bucket for AWS Config data storage. This bucket stores configuration snapshots and history
for compliance auditing.
Deploy only one config-bucket per organization. It stores data from all accounts and regions.
```bash
atmos terraform apply config-bucket -s core-ue1-audit
```
### Deploy CloudTrail Bucket
Deploy the S3 bucket for CloudTrail logs. This bucket stores API activity logs from all accounts
in the organization.
Deploy only one cloudtrail-bucket per organization. It may already exist from the
[Deploy Accounts](/layers/accounts/deploy-accounts/).
```bash
# Verify bucket exists or create it
atmos terraform plan cloudtrail-bucket -s core-ue1-audit
atmos terraform apply cloudtrail-bucket -s core-ue1-audit
```
### Deploy CIS Benchmark IAM Role
CIS AWS Foundations Benchmark requires a support role for managing incidents with AWS Support.
See [CIS Benchmark 1.20](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html#securityhub-cis-controls-1.20).
This role is managed through the [Identity Layer](/layers/identity/deploy/) using `aws-teams` and `aws-team-roles`.
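For reference, enabling the role looks roughly like the following `aws-team-roles` catalog excerpt. The `role_policy_arns` and `trusted_teams` attribute names are assumptions to verify against your component version, and the `devops` team is a placeholder:

```yaml
# Hypothetical catalog excerpt; verify the role schema against your
# aws-team-roles component before applying.
components:
  terraform:
    aws-team-roles:
      vars:
        roles:
          support:
            enabled: true
            # Grants access to AWS Support, as required by CIS 1.20
            role_policy_arns:
              - "arn:aws:iam::aws:policy/AWSSupportAccess"
            trusted_teams: ["devops"] # placeholder team name
```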
---
## Phase 2: Foundational Services
### AWS CloudTrail
[AWS CloudTrail](/layers/security-and-compliance/aws-cloudtrail/) records API activity across your organization.
Deploy the organization trail to capture events from all accounts.
### Deploy Organization Trail
Deploy CloudTrail to create an organization-wide trail that automatically logs API activity from all accounts.
```bash
atmos terraform apply aws-cloudtrail -s core-gbl-audit
```
### AWS Config
[AWS Config](/layers/security-and-compliance/aws-config/) provides configuration compliance monitoring and resource
inventory. It must be deployed to every account and region.
### Deploy AWS Config Globally
Deploy AWS Config to each region to collect data for global resources (IAM, etc.) and regional resources.
```bash
atmos terraform apply aws-config -s core-ue1-security
atmos terraform apply aws-config -s core-ue2-security
atmos terraform apply aws-config -s core-uw2-security
# ... repeat for each region
```
### Deploy AWS Config for Root Accounts
Deploy AWS Config to accounts that require root access to apply (root, security).
This requires root account access (such as with the `managers` profile).
```bash
atmos terraform apply aws-config -s core-ue1-root
atmos terraform apply aws-config -s core-ue2-root
atmos terraform apply aws-config -s core-uw2-root
# ... repeat for each region
```
---
## Phase 3: Threat Detection & Vulnerability Scanning
### AWS GuardDuty
[AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) provides intelligent threat detection using ML-based
analysis. It uses a 3-step delegated administrator deployment model.
### Deploy to Delegated Administrator (Step 1)
First, deploy to the security account to create the GuardDuty detector.
```bash
atmos terraform apply aws-guardduty/delegated-administrator -s core-ue1-security
```
### Delegate from Organization Management (Step 2)
Deploy to the root account to designate the security account as the delegated administrator.
This requires root account access (such as with the `managers` profile).
```bash
atmos terraform apply aws-guardduty/root -s core-ue1-root
```
### Configure Organization Settings (Step 3)
Deploy to the security account again to enable GuardDuty organization-wide with all protection features.
```bash
atmos terraform apply aws-guardduty/org-settings -s core-ue1-security
```
### AWS Inspector 2
[AWS Inspector 2](/layers/security-and-compliance/aws-inspector2/) provides automated vulnerability scanning for EC2,
ECR, and Lambda. It uses a 2-step delegated administrator deployment model.
### Delegate from Organization Management (Step 1)
Deploy to the root account to designate the security account as the delegated administrator.
This requires root account access (such as with the `managers` profile).
```bash
atmos terraform apply aws-inspector2/root -s core-ue1-root
```
### Configure Organization Settings (Step 2)
Deploy to the security account to enable Inspector organization-wide.
```bash
atmos terraform apply aws-inspector2/org-settings -s core-ue1-security
```
### Amazon Macie
[Amazon Macie](/layers/security-and-compliance/aws-macie/) discovers sensitive data in S3 using ML-based classification.
It uses a 3-step delegated administrator deployment model.
### Deploy to Delegated Administrator (Step 1)
First, deploy to the security account to create the Macie account.
```bash
atmos terraform apply aws-macie/delegated-administrator -s core-ue1-security
```
### Delegate from Organization Management (Step 2)
Deploy to the root account to designate the security account as the delegated administrator.
This requires root account access (such as with the `managers` profile).
```bash
atmos terraform apply aws-macie/root -s core-ue1-root
```
### Configure Organization Settings (Step 3)
Deploy to the security account again to enable Macie organization-wide.
```bash
atmos terraform apply aws-macie/org-settings -s core-ue1-security
```
### IAM Access Analyzer
[IAM Access Analyzer](/layers/security-and-compliance/aws-access-analyzer/) identifies resources shared with external
entities and unused access. It uses a 2-step delegated administrator deployment model.
### Delegate from Organization Management (Step 1)
Deploy to the root account to designate the security account as the delegated administrator.
This requires root account access (such as with the `managers` profile).
```bash
atmos terraform apply aws-access-analyzer/root -s core-gbl-root
```
### Configure Organization Settings (Step 2)
Deploy to the security account to create organization and account analyzers.
```bash
atmos terraform apply aws-access-analyzer/org-settings -s core-ue1-security
```
---
## Phase 4: Aggregation & Protection
### AWS Security Hub
[AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) aggregates findings from all security services
into a centralized dashboard. It uses a 3-step delegated administrator deployment model.
### Deploy to Delegated Administrator (Step 1)
First, deploy to the security account to enable Security Hub and configure product subscriptions.
```bash
atmos terraform apply aws-security-hub/delegated-administrator -s core-ue1-security
```
### Delegate from Organization Management (Step 2)
Deploy to the root account to designate the security account as the delegated administrator.
This requires root account access (such as with the `managers` profile).
```bash
atmos terraform apply aws-security-hub/root -s core-ue1-root
```
### Assume Identity Role
Switch back to your default identity role:
```bash
assume-role acme-identity
```
### Configure Organization Settings (Step 3)
Deploy to the security account again to enable Security Hub organization-wide with compliance standards.
```bash
atmos terraform apply aws-security-hub/org-settings -s core-ue1-security
```
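If you need to control which compliance standards are enabled at this step, the org-settings component typically exposes them as a variable. The `enabled_standards` variable name and the version strings below are assumptions — check the component's inputs before relying on them:

```yaml
# Hypothetical stack excerpt; verify the variable name and standard
# identifiers against the aws-security-hub component's inputs.
components:
  terraform:
    aws-security-hub/org-settings:
      vars:
        admin_delegated: true
        enabled_standards:
          - standards/aws-foundational-security-best-practices/v/1.0.0
          - standards/cis-aws-foundations-benchmark/v/1.4.0
          - standards/pci-dss/v/3.2.1
```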
### AWS Shield
[AWS Shield](/layers/security-and-compliance/aws-shield/) provides DDoS protection for critical resources.
Unlike other services, Shield is deployed per-resource rather than organization-wide.
AWS Shield Advanced requires a subscription before deploying the component. The subscription must be activated in each account, and costs $3,000/month per organization (with consolidated billing).
See [Subscribing to Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/enable-ddos-prem.html).
### Deploy AWS Shield Advanced
Deploy Shield protection to accounts and resources that need DDoS protection.
```bash
# Global resources (Route53, CloudFront)
atmos terraform apply aws-shield -s plat-gbl-prod
# Regional resources (ALBs, Elastic IPs)
atmos terraform apply aws-shield -s plat-ue1-prod
```
### AWS Audit Manager (Optional)
[AWS Audit Manager](/layers/security-and-compliance/aws-audit-manager/) automates compliance evidence collection.
It is deployed only to the root account.
AWS Audit Manager has limited framework availability in GovCloud. Consider using AWS Config conformance packs
as an alternative for compliance monitoring.
### Deploy AWS Audit Manager
Deploy Audit Manager to the root account to enable compliance evidence collection.
```bash
atmos terraform apply aws-audit-manager/root -s core-ue1-root
```
---
## Optional: DNS Firewall
Route53 DNS Resolver Firewall provides DNS-level security to block malicious domains.
### Deploy DNS Firewall Buckets
Deploy S3 buckets for DNS Firewall logging.
```bash
atmos terraform apply route53-resolver-dns-firewall-logs -s plat-ue1-dev
atmos terraform apply route53-resolver-dns-firewall-logs -s plat-ue1-prod
atmos terraform apply route53-resolver-dns-firewall-logs -s plat-ue1-sandbox
atmos terraform apply route53-resolver-dns-firewall-logs -s plat-ue1-staging
```
### Configure DNS Firewall
Deploy and configure the Route53 DNS Resolver Firewall.
```bash
atmos terraform apply route53-resolver-dns-firewall/dev -s plat-ue1-dev
atmos terraform apply route53-resolver-dns-firewall/prod -s plat-ue1-prod
atmos terraform apply route53-resolver-dns-firewall/sandbox -s plat-ue1-sandbox
atmos terraform apply route53-resolver-dns-firewall/staging -s plat-ue1-staging
```
---
## Verification
After deployment, verify all services are properly configured:
### Check Security Hub Dashboard
1. Open the AWS Console in the security account
1. Navigate to Security Hub
1. Verify findings are being aggregated from all services
### Verify Service Status
```bash
# Check GuardDuty status
aws guardduty list-detectors --region us-east-1
# Check Security Hub status
aws securityhub describe-hub --region us-east-1
# Check Inspector status
aws inspector2 list-delegated-admin-accounts --region us-east-1
# Check Config status
aws configservice describe-configuration-recorders --region us-east-1
```
### Review Compliance
1. Open Security Hub in the security account
1. Navigate to "Security standards"
1. Review compliance scores for enabled standards (CIS, PCI DSS, AWS Foundational)
---
## Troubleshooting
See the [FAQ](/layers/security-and-compliance/faq/) for common issues and solutions, or consult the individual
component documentation for service-specific troubleshooting.
---
## Configure Custom AWS Config Rules
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
AWS Config rules evaluate resource configurations for compliance. While AWS provides many managed rules, you may need
custom rules for organization-specific requirements. This tutorial covers creating and deploying custom Config rules.
## Overview
AWS Config supports two types of rules:
| Type | Description | Use Case |
|------|-------------|----------|
| **Managed Rules** | AWS-provided, ready to use | Standard compliance checks |
| **Custom Rules** | User-defined using Lambda | Organization-specific requirements |
## Prerequisites
- AWS Config deployed — Follow the [AWS Config setup guide](/layers/security-and-compliance/aws-config/)
- Conformance packs configured — Organization-level Config enabled
- Lambda permissions — Ability to deploy Lambda functions (for custom rules)
## Using AWS Managed Rules
AWS provides 300+ managed rules for common compliance scenarios.
### Add Managed Rules to Conformance Pack
### Create Custom Conformance Pack File
Create a conformance pack YAML file in your component directory:
```yaml
# components/terraform/aws-config/conformance-packs/custom-security-pack.yaml
Resources:
  S3BucketPublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Description: Checks that S3 buckets do not allow public read access
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket
  S3BucketSSLRequestsOnly:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-ssl-requests-only
      Description: Checks that S3 bucket policies require SSL
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SSL_REQUESTS_ONLY
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket
  RDSEncryptionEnabled:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: rds-storage-encrypted
      Description: Checks that RDS instances have encryption enabled
      Source:
        Owner: AWS
        SourceIdentifier: RDS_STORAGE_ENCRYPTED
      Scope:
        ComplianceResourceTypes:
          - AWS::RDS::DBInstance
  IAMPasswordPolicy:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: iam-password-policy
      Description: Checks that IAM password policy meets requirements
      Source:
        Owner: AWS
        SourceIdentifier: IAM_PASSWORD_POLICY
      InputParameters:
        RequireUppercaseCharacters: "true"
        RequireLowercaseCharacters: "true"
        RequireSymbols: "true"
        RequireNumbers: "true"
        MinimumPasswordLength: "14"
        PasswordReusePrevention: "24"
        MaxPasswordAge: "90"
```
### Reference in Stack Configuration
Add the custom conformance pack to your AWS Config stack:
```yaml
# stacks/catalog/aws-config/organization.yaml
components:
  terraform:
    aws-config:
      vars:
        default_scope: organization
        conformance_packs:
          # AWS-managed CIS pack
          - name: CIS-AWS-v1.4-Level2
            conformance_pack: "https://raw.githubusercontent.com/awslabs/aws-config-rules/master/aws-config-conformance-packs/Operational-Best-Practices-for-CIS-AWS-v1.4-Level2.yaml"
            parameter_overrides: {}
          # Custom pack from local file
          - name: Custom-Security-Pack
            conformance_pack: "conformance-packs/custom-security-pack.yaml"
            scope: organization
            parameter_overrides: {}
```
### Deploy the Configuration
Apply the AWS Config changes:
```bash
atmos terraform apply aws-config -s core-ue1-root
```
## Creating Custom Lambda Rules
For requirements not covered by managed rules, create custom rules with Lambda.
### Example: Require Specific Tags
### Create Lambda Function
Create a Lambda function to check for required tags:
```python
# lambda/config-rules/required-tags/handler.py
import json
import boto3

def lambda_handler(event, context):
    """Check that resources have required tags."""
    config = boto3.client('config')

    # Parse the invoking event
    invoking_event = json.loads(event['invokingEvent'])
    configuration_item = invoking_event['configurationItem']

    # Get rule parameters
    rule_parameters = json.loads(event.get('ruleParameters', '{}'))
    required_tags = rule_parameters.get('requiredTags', 'Environment,Owner,CostCenter').split(',')

    # Get resource tags
    resource_tags = configuration_item.get('tags', {})
    tag_keys = list(resource_tags.keys())

    # Check for missing required tags
    missing_tags = [tag for tag in required_tags if tag not in tag_keys]
    if missing_tags:
        compliance_type = 'NON_COMPLIANT'
        annotation = f'Missing required tags: {", ".join(missing_tags)}'
    else:
        compliance_type = 'COMPLIANT'
        annotation = 'All required tags present'

    # Report evaluation result
    config.put_evaluations(
        Evaluations=[
            {
                'ComplianceResourceType': configuration_item['resourceType'],
                'ComplianceResourceId': configuration_item['resourceId'],
                'ComplianceType': compliance_type,
                'Annotation': annotation,
                'OrderingTimestamp': configuration_item['configurationItemCaptureTime']
            },
        ],
        ResultToken=event['resultToken']
    )

    return {
        'compliance_type': compliance_type,
        'annotation': annotation
    }
```
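Because the handler above only reports results through `put_evaluations`, the core tag check is easiest to verify in isolation before deploying. A minimal standalone sketch of the same logic (the function name here is ours, for illustration only):

```python
def evaluate_required_tags(resource_tags, required_tags):
    """Return (compliance_type, annotation) for a resource's tag set,
    mirroring the check performed by the Lambda handler."""
    missing = [t for t in required_tags if t not in resource_tags]
    if missing:
        return "NON_COMPLIANT", f'Missing required tags: {", ".join(missing)}'
    return "COMPLIANT", "All required tags present"

# An EC2 instance missing the CostCenter tag evaluates as non-compliant
status, note = evaluate_required_tags(
    {"Environment": "dev", "Owner": "platform"},
    ["Environment", "Owner", "CostCenter"],
)
```

Keeping the decision logic in a pure function like this makes it trivial to unit-test without stubbing the Config API.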
### Deploy Lambda Function
Deploy the Lambda function using Terraform:
```hcl
# components/terraform/config-rules-lambda/main.tf
resource "aws_lambda_function" "required_tags" {
  function_name    = "${var.namespace}-config-required-tags"
  role             = aws_iam_role.config_rule_lambda.arn
  handler          = "handler.lambda_handler"
  runtime          = "python3.11"
  timeout          = 60
  filename         = data.archive_file.required_tags.output_path
  source_code_hash = data.archive_file.required_tags.output_base64sha256
}

resource "aws_lambda_permission" "config" {
  statement_id  = "AllowConfigInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.required_tags.function_name
  principal     = "config.amazonaws.com"
}
```
### Create Custom Config Rule
Add the custom rule to your conformance pack:
```yaml
# conformance-packs/custom-tagging-pack.yaml
Resources:
  RequiredTagsRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: required-tags-check
      Description: Checks that resources have required tags
      Source:
        Owner: CUSTOM_LAMBDA
        SourceIdentifier: !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${Namespace}-config-required-tags"
        SourceDetails:
          - EventSource: aws.config
            MessageType: ConfigurationItemChangeNotification
      Scope:
        ComplianceResourceTypes:
          - AWS::EC2::Instance
          - AWS::RDS::DBInstance
          - AWS::S3::Bucket
      InputParameters:
        requiredTags: "Environment,Owner,CostCenter"
```
## Common Rule Patterns
### Check S3 Encryption
```yaml
S3BucketServerSideEncryptionEnabled:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: s3-bucket-server-side-encryption-enabled
    Source:
      Owner: AWS
      SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
```
### Check EBS Encryption
```yaml
EBSEncryptionByDefault:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: ec2-ebs-encryption-by-default
    Source:
      Owner: AWS
      SourceIdentifier: EC2_EBS_ENCRYPTION_BY_DEFAULT
```
### Check VPC Flow Logs
```yaml
VPCFlowLogsEnabled:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: vpc-flow-logs-enabled
    Source:
      Owner: AWS
      SourceIdentifier: VPC_FLOW_LOGS_ENABLED
    InputParameters:
      trafficType: ALL
```
### Check Root Account MFA
```yaml
RootAccountMFAEnabled:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: root-account-mfa-enabled
    Source:
      Owner: AWS
      SourceIdentifier: ROOT_ACCOUNT_MFA_ENABLED
```
## Remediating Non-Compliant Resources
### Automatic Remediation
Configure automatic remediation actions:
```yaml
# Add to conformance pack
S3BucketPublicReadProhibited:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: s3-bucket-public-read-prohibited
    Source:
      Owner: AWS
      SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
S3PublicReadRemediation:
  Type: AWS::Config::RemediationConfiguration
  Properties:
    ConfigRuleName: s3-bucket-public-read-prohibited
    TargetType: SSM_DOCUMENT
    TargetId: AWS-DisableS3BucketPublicReadWrite
    Parameters:
      S3BucketName:
        ResourceValue:
          Value: RESOURCE_ID
    Automatic: true
    MaximumAutomaticAttempts: 3
    RetryAttemptSeconds: 60
```
### Manual Remediation
Query non-compliant resources and remediate:
```bash
# List non-compliant resources
aws configservice get-compliance-details-by-config-rule \
--config-rule-name s3-bucket-public-read-prohibited \
--compliance-types NON_COMPLIANT \
--region us-east-1
# Get remediation status
aws configservice describe-remediation-execution-status \
--config-rule-name s3-bucket-public-read-prohibited \
--region us-east-1
```
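The JSON returned by the first command can also be post-processed in Python to extract a simple list of offending resources. A hedged sketch that flattens paginated `get-compliance-details-by-config-rule` responses (the nesting follows the AWS Config API; the bucket name is illustrative):

```python
def noncompliant_resource_ids(pages):
    """Flatten paginated get-compliance-details-by-config-rule responses
    into a list of (resource_type, resource_id) tuples."""
    ids = []
    for page in pages:
        for result in page.get("EvaluationResults", []):
            # Each result identifies the evaluated resource via its qualifier
            q = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
            ids.append((q["ResourceType"], q["ResourceId"]))
    return ids

# Shape of one response page, per the AWS Config API
sample_pages = [
    {
        "EvaluationResults": [
            {
                "EvaluationResultIdentifier": {
                    "EvaluationResultQualifier": {
                        "ConfigRuleName": "s3-bucket-public-read-prohibited",
                        "ResourceType": "AWS::S3::Bucket",
                        "ResourceId": "example-public-bucket",
                    }
                },
                "ComplianceType": "NON_COMPLIANT",
            }
        ]
    }
]
offenders = noncompliant_resource_ids(sample_pages)
```

In practice you would feed this function the pages from a boto3 paginator rather than a hard-coded sample.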
## Troubleshooting
### Rule Not Evaluating
If rules aren't evaluating resources:
1. Verify the resource type is in the rule's scope
2. Check that Config recorder is enabled for the resource type
3. Review Lambda function logs (for custom rules)
```bash
# Check recorder status
aws configservice describe-configuration-recorder-status --region us-east-1
# Check rule evaluation
aws configservice describe-config-rule-evaluation-status \
--config-rule-names s3-bucket-public-read-prohibited \
--region us-east-1
```
### Lambda Errors
For custom Lambda rules, check CloudWatch Logs:
```bash
aws logs filter-log-events \
  --log-group-name "/aws/lambda/<function-name>" \
  --filter-pattern "ERROR" \
  --region us-east-1
```
## See Also
- [AWS Config](/layers/security-and-compliance/aws-config/) - Complete Config documentation
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - View Config compliance findings
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html)
- [AWS Managed Rules List](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html)
- [Custom Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html)
- [Conformance Packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html)
---
## Enable GuardDuty for EKS Protection
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
GuardDuty for EKS Protection provides threat detection for your Amazon EKS clusters by analyzing Kubernetes audit logs
and monitoring runtime activity in your containerized workloads.
## Overview
GuardDuty EKS Protection includes two key features:
| Feature | Description |
|---------|-------------|
| **Kubernetes Audit Log Monitoring** | Analyzes EKS audit logs to detect suspicious API activity |
| **EKS Runtime Monitoring** | Monitors container runtime behavior to detect threats |
## Prerequisites
Before enabling EKS protection:
- GuardDuty enabled — Follow the [GuardDuty setup guide](/layers/security-and-compliance/aws-guardduty/)
- EKS clusters deployed — Have at least one EKS cluster running
- Delegated administrator configured — GuardDuty organization settings deployed
## Method 1: Enable via Terraform (Recommended)
The recommended approach is to enable EKS protection through the GuardDuty component configuration.
### Update GuardDuty Organization Settings
Add the EKS protection settings to your `guardduty/org-settings` component:
```yaml
# stacks/catalog/guardduty/org-settings.yaml
components:
  terraform:
    aws-guardduty/org-settings:
      metadata:
        component: guardduty
      vars:
        enabled: true
        admin_delegated: true
        # Enable EKS protection features
        kubernetes_audit_logs_enabled: true
        runtime_monitoring_enabled: true
        runtime_monitoring_additional_config:
          eks_addon_management_enabled: true
          ecs_fargate_agent_management_enabled: true
          ec2_agent_management_enabled: true
```
### Apply the Configuration
Deploy the updated GuardDuty settings:
```bash
atmos terraform apply aws-guardduty/org-settings -s core-ue1-security
```
This will:
- Enable Kubernetes Audit Log Monitoring for all member accounts
- Enable Runtime Monitoring with automatic agent management
- Deploy the GuardDuty agent to EKS clusters via the EKS add-on
### Verify Agent Deployment
Check that the GuardDuty agent is deployed to your EKS clusters:
```bash
# List EKS add-ons
aws eks list-addons --cluster-name <cluster-name> --region us-east-1
# Check agent status
kubectl get pods -n amazon-guardduty
```
## Method 2: Enable via AWS Console
If you need to enable EKS protection manually for testing or troubleshooting:
### Navigate to GuardDuty
Open the AWS Console and navigate to **GuardDuty** in the security account.
### Enable Kubernetes Protection
1. In the left navigation, select **Kubernetes Protection**
2. Click **Configure** or **Edit**
3. Enable **Kubernetes Audit Logs Monitoring**
4. Click **Save**
### Enable for Member Accounts
1. Select **Accounts** in the left navigation
2. Select all member accounts
3. Click **Actions** → **Enable Kubernetes Audit Logs Monitoring**
4. Confirm the action
### Verify Configuration
Check that the configuration status shows as enabled for all accounts.
## EKS Runtime Monitoring Agent
When Runtime Monitoring is enabled with `eks_addon_management_enabled: true`, GuardDuty automatically:
1. Creates the `amazon-guardduty` namespace in your EKS clusters
2. Deploys the GuardDuty agent as an EKS add-on
3. Manages agent updates automatically
### Manual Agent Installation
If automatic management is disabled, install the agent manually:
```bash
# Create the add-on
aws eks create-addon \
  --cluster-name <cluster-name> \
  --addon-name aws-guardduty-agent \
  --region us-east-1
```
## Finding Types
GuardDuty EKS Protection detects these threat categories:
| Category | Example Findings |
|----------|------------------|
| **Privilege Escalation** | Container escape attempts, privileged container launches |
| **Credential Access** | Kubernetes secrets access, service account token theft |
| **Execution** | Cryptomining, reverse shells, malicious binary execution |
| **Persistence** | Unauthorized service creation, DaemonSet deployment |
| **Discovery** | Kubernetes API enumeration, network scanning |
## Troubleshooting
### Agent Not Deploying
If the GuardDuty agent doesn't deploy automatically:
1. Verify Runtime Monitoring is enabled in GuardDuty settings
2. Check the EKS cluster has the required IAM permissions
3. Verify the cluster is in a supported region
```bash
# Check add-on status
aws eks describe-addon \
  --cluster-name <cluster-name> \
  --addon-name aws-guardduty-agent \
  --region us-east-1
```
### No Findings Appearing
If no findings appear after enabling:
1. Allow 15-30 minutes for initial data collection
2. Verify the EKS cluster is actively running workloads
3. Check CloudWatch Logs for agent errors
## See Also
- [AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) - Complete GuardDuty documentation
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - View aggregated GuardDuty findings
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [GuardDuty EKS Protection](https://docs.aws.amazon.com/guardduty/latest/ug/kubernetes-protection.html)
- [EKS Runtime Monitoring](https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring.html)
- [GuardDuty EKS Finding Types](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-kubernetes.html)
---
## Review and Manage Security Hub Findings
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import TaskList from '@site/src/components/TaskList';
AWS Security Hub aggregates security findings from multiple AWS services and third-party tools into a centralized
dashboard. This tutorial shows you how to effectively review, investigate, and manage these findings.
## Overview
Security Hub receives findings from:
| Source | Finding Types |
|--------|---------------|
| **GuardDuty** | Threat detection, malicious activity |
| **Inspector** | Vulnerabilities in EC2, ECR, Lambda |
| **Macie** | Sensitive data exposure in S3 |
| **Config** | Configuration compliance violations |
| **Access Analyzer** | External access, unused permissions |
| **Firewall Manager** | Firewall policy compliance |
## Prerequisites
- Security Hub deployed — Follow the [Security Hub setup guide](/layers/security-and-compliance/aws-security-hub/)
- Product subscriptions enabled — All security services integrated
- Console access — Access to the security account
## Finding Workflow
```mermaid
flowchart LR
new["NEW"]
notified["NOTIFIED"]
suppressed["SUPPRESSED"]
resolved["RESOLVED"]
new -->|"Review"| notified
notified -->|"False positive"| suppressed
notified -->|"Remediated"| resolved
new -->|"Auto-archive"| suppressed
```
## Reviewing Findings
### Access Security Hub Dashboard
1. Log into the AWS Console in the **security account**
2. Navigate to **Security Hub**
3. Select **Findings** from the left navigation
The dashboard shows findings aggregated from all accounts and regions.
### Filter by Severity
Focus on high-priority findings first:
```
SeverityLabel = CRITICAL OR SeverityLabel = HIGH
```
Or use the severity filter dropdown to select CRITICAL and HIGH findings.
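The same severity filter can be built programmatically for the `get-findings` API. A minimal sketch (the `Filters` shape follows the Security Hub `AwsSecurityFindingFilters` structure; the helper name is ours):

```python
def severity_filter(labels):
    """Build the Filters argument for securityhub get-findings,
    matching findings whose severity label equals any of `labels`."""
    return {
        "SeverityLabel": [
            {"Value": label, "Comparison": "EQUALS"} for label in labels
        ]
    }

# Focus on high-priority findings first
filters = severity_filter(["CRITICAL", "HIGH"])
```

You would pass this dict as `Filters=filters` to `boto3.client("securityhub").get_findings(...)`.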
### Group by Finding Type
Organize findings by type to identify patterns:
1. Click **Group by** dropdown
2. Select **Type** or **Product name**
3. Review counts for each category
### Investigate Individual Findings
For each finding, review:
- **Title**: Brief description of the issue
- **Severity**: CRITICAL, HIGH, MEDIUM, LOW, INFORMATIONAL
- **Account**: Which AWS account has the issue
- **Resource**: Affected resource ARN
- **Remediation**: Suggested fix (when available)
Click on a finding to see full details including:
- Resource configuration
- Related findings
- Remediation steps
## Using Insights
Security Hub Insights provide pre-built and custom views of your security posture.
### Built-in Insights
Navigate to **Insights** to view:
| Insight | Description |
|---------|-------------|
| **Critical findings** | All CRITICAL severity findings |
| **Failed security checks** | Compliance standard failures |
| **Top accounts by findings** | Accounts with most issues |
| **Top resources by findings** | Resources needing attention |
### Create Custom Insights
### Define Filter Criteria
```
ProductName = "GuardDuty" AND SeverityLabel = "HIGH" AND RecordState = "ACTIVE"
```
### Create the Insight
1. Click **Create insight**
2. Enter a name (e.g., "High Severity GuardDuty Findings")
3. Configure the grouping (e.g., by Resource Type)
4. Save the insight
## Managing Finding Workflow
### Update Finding Status
Use the AWS CLI to update finding workflow status:
```bash
# Mark finding as resolved
aws securityhub batch-update-findings \
--finding-identifiers '[{"Id":"arn:aws:securityhub:...","ProductArn":"arn:aws:securityhub:..."}]' \
--workflow '{"Status":"RESOLVED"}' \
--region us-east-1
# Mark finding as suppressed (false positive)
aws securityhub batch-update-findings \
--finding-identifiers '[{"Id":"arn:aws:securityhub:...","ProductArn":"arn:aws:securityhub:..."}]' \
--workflow '{"Status":"SUPPRESSED"}' \
--note '{"Text":"False positive - approved exception","UpdatedBy":"security-team"}' \
--region us-east-1
```
### Workflow Status Values
| Status | Description |
|--------|-------------|
| `NEW` | Finding has not been reviewed |
| `NOTIFIED` | Finding has been reviewed and assigned |
| `SUPPRESSED` | Finding is a false positive or accepted risk |
| `RESOLVED` | Finding has been remediated |
## Automating Finding Response
### EventBridge Integration
Create automated responses to findings:
```yaml
# Example: Alert on critical GuardDuty findings
components:
  terraform:
    security-hub-automation:
      vars:
        event_pattern:
          source:
            - aws.securityhub
          detail-type:
            - Security Hub Findings - Imported
          detail:
            findings:
              ProductName:
                - GuardDuty
              Severity:
                Label:
                  - CRITICAL
```
### SNS Notifications
Enable SNS notifications in Security Hub:
```yaml
components:
  terraform:
    aws-security-hub/delegated-administrator:
      vars:
        create_sns_topic: true
        # SNS topic receives all new findings
```
## Compliance Standards
### Review Compliance Scores
1. Navigate to **Security standards**
2. Review compliance percentage for each standard:
- CIS AWS Foundations Benchmark
- AWS Foundational Security Best Practices
- PCI DSS (if enabled)
3. Click on a standard to see failed controls
### Export Compliance Report
```bash
# Get compliance summary
aws securityhub get-enabled-standards --region us-east-1
# Get control status
aws securityhub describe-standards-controls \
--standards-subscription-arn "arn:aws:securityhub:us-east-1::standards/cis-aws-foundations-benchmark/v/1.4.0" \
--region us-east-1
```
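Security Hub derives a standard's score from the fraction of evaluated controls that pass. A rough sketch of that arithmetic, assuming you have already reduced each control to a status string (`PASSED`/`FAILED`; other values such as `NO_DATA` are ignored — field names here are illustrative, not the raw API response):

```python
def compliance_score(control_statuses):
    """Percentage of evaluated controls that passed.

    `control_statuses` is a list of per-control status strings;
    controls without evaluation data are excluded from the denominator.
    """
    evaluated = [s for s in control_statuses if s in ("PASSED", "FAILED")]
    if not evaluated:
        return 0.0
    return 100.0 * evaluated.count("PASSED") / len(evaluated)

# Two passing controls, one failing, one with no data
score = compliance_score(["PASSED", "PASSED", "FAILED", "NO_DATA"])
```

This is useful for tracking compliance trends outside the console, e.g. exporting a weekly score per standard.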
## Best Practices
1. **Daily review**: Check CRITICAL and HIGH findings daily
2. **Weekly review**: Review MEDIUM findings weekly
3. **Document exceptions**: Use notes to document why findings are suppressed
4. **Automate responses**: Use EventBridge for automated alerting and remediation
5. **Track metrics**: Monitor finding counts over time to measure improvement
## Troubleshooting
### Findings Not Appearing
If findings aren't showing up:
1. Verify product subscriptions are enabled
2. Check cross-region aggregation settings
3. Allow 15-30 minutes for initial data sync
4. Verify IAM permissions for the security account
### Duplicate Findings
Duplicate findings may occur when:
- Multiple regions report the same global resource
- Finding aggregation is misconfigured
Enable finding aggregation to deduplicate:
```yaml
components:
  terraform:
    aws-security-hub/delegated-administrator:
      vars:
        finding_aggregator_enabled: true
        finding_aggregator_linking_mode: ALL_REGIONS
```
## See Also
- [AWS Security Hub](/layers/security-and-compliance/aws-security-hub/) - Complete Security Hub documentation
- [AWS GuardDuty](/layers/security-and-compliance/aws-guardduty/) - Threat detection service
- [AWS Config](/layers/security-and-compliance/aws-config/) - Configuration compliance
- [Setup Guide](/layers/security-and-compliance/setup/) - Complete deployment instructions
## References
- [Security Hub Findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings.html)
- [Security Hub Insights](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-insights.html)
- [Security Hub Automation](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cloudwatch-events.html)
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These additional tutorials will help you work with the associated Security & Compliance components.
---
## Decide How to distribute Docker Images
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Choosing the right method to distribute Docker images is crucial for efficient
deployment and management. There are various options, including AWS ECR, GitHub
Container Registry, DockerHub, Artifactory/Nexus, and self-hosted registries,
each with its own advantages and drawbacks.
#### Use AWS ECR
This is by far the most common approach we see taken. Our typical implementation uses a single ECR registry in the automation account, with read-only access granted to other accounts as necessary. Images are pushed with commit-SHA tags and a stage tag, and lifecycle rules keyed on the stage tag prevent eviction. The main downside of ECR is that each image repository in the registry must be explicitly provisioned. If we decide to go with ECR, we’ll also want to [Decide on ECR Strategy](/layers/project/design-decisions/decide-on-ecr-strategy).
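The lifecycle rule mentioned above can be expressed as an ECR lifecycle policy. A hedged sketch that expires only untagged images, so images carrying a stage tag are never evicted (the 14-day window is illustrative):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images after 14 days; tagged images are untouched",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }
  ]
}
```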
#### Use GitHub Container Registry
[https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-docker-registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-docker-registry)
#### Use Dockerhub
DockerHub is well suited for public images because it’s the default registry; however, anonymous pulls are aggressively rate limited, so we no longer recommend it. Additionally, as a private registry it’s a bit dated and requires static credentials, unlike ECR. One nice thing about DockerHub is that repositories do not need to be explicitly created.
#### Use Artifactory/Nexus/etc
This is more common for traditional artifact storage in Java shops. We don’t see this typically used with Docker, but it is supported.
#### Self-hosted Registries (e.g. Quay, Docker Registry, etc)
We don’t recommend this approach because, at the very least, we’ll need to use something else like ECR for bootstrapping.
---
## Decide on Argo CD Architecture
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Deciding on the architecture for Argo CD involves considering multiple
clusters, plugin management, and Kubernetes integration. This document
presents recommended strategies, potential risks, and common deployment
patterns for Argo CD.
## Context
Argo CD is a specialized tool for continuous delivery to Kubernetes, akin to how Terraform Cloud focuses on Terraform deployments. Argo CD does not support deployments outside of Kubernetes (e.g., uploading files to a bucket). While it supports plugins, these are not intended to extend its capabilities beyond Kubernetes.
## Considerations
- Deploy multiple Argo CD instances across clusters to facilitate systematic upgrades in different environments.
- Argo CD operates as a single pod, requiring disruptive restarts to add or upgrade plugins.
- Restarts of Argo CD are disruptive to deployments.
- Increasing the number of Argo CD servers complicates visualizing the delivery process.
- Each Argo CD server must integrate with every cluster it deploys to.
- Argo CD can deploy to the local cluster by using a service account.
### Pros
- Simplifies dependency management across components.
- Protects the KubeAPI by reducing public access requirements.
- Provides a powerful CD tool for Kubernetes with multiple pod rollout strategies.
- Offers a user-friendly UI and supports diverse deployment toolchains within the Argo CD Docker image.
- Enables faster deployments and "backup Kubernetes cluster" capabilities.
- Establishes a consistent framework for continuous deployment independent of the CI platform.
### Cons
- Asynchronous deployments can break the immediate feedback loop from commit to deployment.
- Application CRDs must reside in the namespace where Argo CD runs.
- Application names must be unique per Argo CD instance.
- Custom toolchains require custom Docker images, necessitating Argo CD redeployment.
- Redeploying Argo CD can disrupt active deployments.
- Plugin updates require redeployment since tools must be included in the Docker image.
- Access management involves multiple levels (e.g., GitHub repo access, Argo CD projects, RBAC), introducing complexity.
- Requires additional self-hosted solutions compared to simpler CI-based deployments with Helm 3.
- Repository management for private repos in Argo CD lacks a declarative approach, needing research for potential patterns.
- Argo CD's lifecycle becomes part of the critical path for deployments.
## Recommendations
- **Deploy one Argo CD instance per cluster** to simplify upgrades and manage disruptions effectively.
- **Use a single Argo CD instance for all namespaces within a cluster** to centralize deployment management and reduce complexity.
- **Adopt a dedicated repository strategy** managed by Terraform via the GitHub Terraform provider:
- One repository for production environments.
- One repository for non-production environments.
- One repository for preview environments.
- **Avoid using plugins**:
- Commit raw manifests (e.g., rendered from Helm templates or Kustomize) directly to the repository.
- Shift manifest rendering to CI to ensure predictable, verifiable deployments.
- This approach simplifies troubleshooting, avoids plugin upgrade issues, and ensures complete visibility into what is deployed.
- **Deploy operators that require IAM roles and backing services with Terraform**, not Argo CD, to ensure proper role management and infrastructure provisioning.
- **Use Argo CD for application deployments** in combination with GitHub Actions to streamline deployment pipelines and align with CI/CD best practices.
- **Use Helm to Provision Argo CD** with Terraform
---
## Decide on ArgoCD Deployment Repo Architecture
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
### Context
ArgoCD synchronizes the state of Kubernetes with a repo/branch/directory in your VCS system. There’s no canonical way to structure this, and many variations exist.
## Examples
- 1 repo per cluster
- 1 repo for multiple clusters, multiple repos for multiple groups of clusters
- 1 branch per cluster
- 1 directory per cluster
## Considerations
- The more repos, the harder it is to update multiple clusters at a time.
- The more clusters in one repo, the larger the git commit history.
- With one repo per cluster, every time a new cluster is created, a new repo needs to be created as well.
- With fewer repos, there is more contention working with Git, which sucks at high throughput.
## Recommendation
Our recommendation is ~3 repos, with multiple clusters in each:
- prod (all production clusters)
  - use the main branch to represent deployed state for all clusters
  - use one directory per cluster and namespace
  - use branch protections to restrict commits
- non-prod (all non-production clusters)
  - use the main branch to represent deployed state for all clusters
  - use one directory per cluster and namespace
  - use branch protections to restrict commits
- preview (all preview environments)
  - avoid polluting the git history
  - no branch protections
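Concretely, the "one directory per cluster and namespace" layout in, say, the non-prod repo might look like the following (repo, cluster, and namespace names are illustrative):

```text
nonprod-deploy/
├── plat-ue1-dev/          # one directory per cluster
│   ├── app-a/             # ...with one subdirectory per namespace
│   │   └── deployment.yaml
│   └── app-b/
│       └── deployment.yaml
└── plat-ue1-staging/
    └── app-a/
        └── deployment.yaml
```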
## Related
- [Decide on ArgoCD Architecture](/layers/software-delivery/design-decisions/decide-on-argocd-architecture)
---
## Decide on Branching Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
A branching strategy is crucial in software development for facilitating team collaboration,
maintaining code stability, enabling Continuous Integration/Continuous Deployment (CI/CD),
ensuring code quality through reviews, simplifying change tracking and rollbacks,
supporting parallel development, and managing different product releases.
It provides individual workspaces for developers, segregates code based on development stages,
automates testing and deployment processes, allows for scrutinized merges into the main codebase,
and supports simultaneous development and maintenance of multiple product versions.
Essentially, an effective branching strategy underpins a stable, manageable, and efficient development environment.
## Options
## Gitflow
[Gitflow](https://nvie.com/posts/a-successful-git-branching-model/) has gained popularity as it provides an efficient
framework for collaborative development and scaling development teams. The Gitflow model offers robust control
mechanisms and facilitates collaboration, at the price of slower development and a higher number
of merge conflicts.
Teams working on projects with a massive codebase, a low level of automation, and minimal test
coverage, as well as teams with a high percentage of junior developers, all benefit from the Gitflow strategy.
However, Gitflow may not be suitable for startups where development speed is the priority.
```mermaid
---
title: Gitflow
---
gitGraph
commit tag: "0.1.0"
branch hotfix
branch develop order: 2
commit
commit
checkout develop
branch featureA order: 2
branch featureB order: 2
checkout main
checkout featureA
commit
commit
checkout hotfix
commit
checkout main
merge hotfix tag: "0.1.1"
checkout featureB
commit
commit
checkout develop
merge featureA
branch release order: 1
commit
commit
checkout develop
merge release
checkout release
commit
checkout featureB
commit
checkout develop
merge release
checkout main
checkout release
checkout main
merge release tag: "0.2.0"
checkout develop
merge featureB
checkout main
merge develop tag: "0.3.0"
checkout develop
commit
```
## GitHub Flow (Trunk-based)
[GitHub Flow](https://docs.github.com/en/get-started/quickstart/github-flow) is a lightweight branching strategy where
developers introduce changes as pull requests from short-lived, ephemeral feature branches.
The PRs are then merged into a single "trunk" branch that is the source of truth.
GitHub Flow works particularly well with a team of experienced developers, facilitating the quick introduction
of improvements without unnecessary bureaucracy. It fits with high automation of release engineering processes,
microservices architecture, and mature engineering culture.
```mermaid
---
title: GitHub Flow
---
gitGraph
commit tag: "0.1.0"
branch release/0.1.0 order: 4
commit type: HIGHLIGHT id: "0.1.0"
checkout main
commit
checkout main
branch feature order: 1
checkout main
commit tag: "0.2.0"
branch release/0.2.0 order: 4
commit type: HIGHLIGHT id: "0.2.0"
branch hotfix order: 4
checkout hotfix
commit
checkout feature
commit
commit
checkout main
checkout feature
commit
checkout hotfix
commit
checkout release/0.2.0
merge hotfix tag: "0.2.1"
checkout main
merge feature
checkout main
merge release/0.2.0
checkout main
commit tag: "0.3.0"
branch release/0.3.0 order: 5
commit type: HIGHLIGHT id: "0.3.0"
checkout main
commit
```
## Recommendation
We recommend using GitHub Flow as the basic branching strategy, in conjunction with CI/CD workflows
and [branch rulesets](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-rulesets/about-rulesets)
for better quality and control over changes.
We highly recommend avoiding long-lived branches, especially for software that does not have to support multiple
versions running in the wild.
The branching strategy does not have to be the same across the whole organization - different teams can
use different flows. However, that often leads to complexity in the CI/CD pipelines and reduces reusability.
For these reasons, we recommend a consistent branching strategy, at a minimum on a team level.
## References
- [A successful Git branching model](https://nvie.com/posts/a-successful-git-branching-model/)
- [Trunk-Based Development vs Git Flow: When to Use Which Development Style](https://blog.mergify.com/trunk-based-development-vs-git-flow-when-to-use-which-development-style/)
- [Long-lived branches with Gitflow](https://www.thoughtworks.com/radar/techniques/long-lived-branches-with-gitflow)
---
## Decide on Customer Apps for Migration
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
We need to identify an application and its associated services that are suitable as the first candidate for migration to the new platform. This is an application that will be targeted by all the release engineering work.
We can pretty much handle anything, but ideal candidates have these characteristics: [https://cloudposse.com/12-factor-app/](https://cloudposse.com/12-factor-app/)
:::tip
Our recommendation is to start with a model application template repository so we do not impact any current CI/CD processes during the development of GitHub Action workflows.
:::
Apps that do not have these characteristics may require more engineering effort.
Using any existing repository poses a risk of triggering GitHub events (e.g. pull requests, releases, etc.) that other existing CI/CD systems (e.g. Jenkins, CircleCI, etc.) will respond to. Furthermore, several GitHub Actions events only work on the default branch (e.g. `main`), so we will need to merge PRs to test the end-to-end process. For this reason, we recommend starting with a model application template repository that your team can use to document and train others on your CI/CD process.
Completing the migration workbook will help identify suitable applications. Our workbook template is here [https://docs.google.com/spreadsheets/d/1CDcJosaqoby2Fq2AmZnf-xRizI4pcc-sqpi04ggHqSI/edit#gid=863544204](https://docs.google.com/spreadsheets/d/1CDcJosaqoby2Fq2AmZnf-xRizI4pcc-sqpi04ggHqSI/edit#gid=863544204) and can be copied and shared.
Our goal is to migrate a couple of apps within the allotted Sprint(s), however, we highly recommend leaving some for homework.
---
## Decide on Database Seeding Strategy for Ephemeral Preview Environments
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
We need to decide how to provision databases for use with preview environments. These databases must come up very quickly (e.g. in 10-20 seconds, rather than the 20-30 minutes it takes for RDS). Once these databases come online, we need to have data staged for them to be useful. Restoring very large database dumps can be very slow, and we need to update and scrub the database dumps. We typically cannot (and should not) use snapshots directly from production due to constraints around how we must handle PII, PHI, CDH, etc.
:::caution
As a general best practice, we should never use production data in non-production environments to avoid accidental leakage or usage of data.
:::
## Considerations
We prefer to include the DBA in these conversations.
Suggested requirements:
- They should come online very fast, so the process of bringing up new environments is not slowed down.
- They should be easily destroyed
- They should be inexpensive to operate because there will be many of them
- They should have realistic data, so the environments are testing something closer to staging/production
## Considered Options
**Option 1:** Seed data (fixtures) - **recommended**
- Most database migration tools support something like this (e.g. `rake db:fixtures:load`)
- This is the easiest to implement
**Option 2:** Docker Image with Preloaded Dataset
- Advisable if the dataset is large enough that loading a dump would take too long, but not so large that baking it into a Docker image becomes unreasonable
- Implementation will require additional scope for automating the creation of the docker image
**Option 3:** Shared RDS cluster, Preloaded Shared database
- A shared database preloaded with sanitized seeded data can be shared across preview environments
- No ability to test migrations using this approach
**Option 4:** Shared RDS cluster, one database per env, with seed data
- Greater economies of scale are achieved by sharing the database
- Custom process of hydrating the database for each preview environment will need to be implemented
**Option 5:** Dedicated cluster (not advised)
- Too slow to launch (e.g. +30-40 minutes), expensive, complicated to implement
---
## Decide on GitHub Actions Workflow Organization Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
GitHub Actions Workflow files scattered in repositories throughout the GitHub organization can quickly violate the DRY principle if they contain repeated code.
## Considerations
### Standardize Workflows Across the Organization
### Metrics and Observability
With appropriate metrics, you’ll be able to answer questions like:
- Are we deploying faster? ...or slowing down?
- What is the stability of our deployments?
See [https://www.datadoghq.com/blog/datadog-github-actions-ci-visibility/](https://www.datadoghq.com/blog/datadog-github-actions-ci-visibility/)
[leanix-poster_17_metrics_to_help_build_better_software-en.pdf](/assets/refarch/leanix-poster_17_metrics_to_help_build_better_software-en.pdf)
[https://www.leanix.net/en/wiki/vsm/dora-metrics](https://www.leanix.net/en/wiki/vsm/dora-metrics)
### Public Actions
- Trusted organizations for public actions
- Verified organizations for public actions
### Private Actions
Private actions technically require GitHub Enterprise. We can do a workaround for non-enterprise organizations: an explicit `git clone` of a private actions repo.
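For illustration, that workaround typically amounts to an explicit checkout of the private actions repository, followed by referencing the actions by local path. The repository, secret, and action names below are hypothetical:

```yaml
steps:
  # Check out the private actions repo into the workspace
  - name: Checkout private actions
    uses: actions/checkout@v4
    with:
      repository: acme/github-actions        # hypothetical private actions repo
      token: ${{ secrets.GH_ACTIONS_PAT }}   # PAT with read access to that repo
      path: ./.github/actions/acme

  # Reference the checked-out action by local path
  - name: Run a private action
    uses: ./.github/actions/acme/my-action   # hypothetical action name
```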
### Shared Workflows
### Code Reusability
[Composite Actions](https://docs.github.com/en/actions/creating-actions/creating-a-composite-action) can be leveraged to solve that problem. Composite Actions are very similar to GHA workflow files in that they contain multiple steps, some of which can reference open-source Actions. Still, they are not individual workflows but rather actions that another workflow can reference. These Composite Actions thus reduce code repetition within the organization.
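As a sketch, a minimal composite action might look like the following (the action name, inputs, and steps are illustrative, not prescribed):

```yaml
# action.yml in a composite action repository (names are hypothetical)
name: Build and Push
description: Builds a Docker image and pushes it to a registry
inputs:
  image-tag:
    description: Tag for the built image
    required: true
runs:
  using: composite
  steps:
    # Composite run steps must declare a shell explicitly
    - name: Build image
      shell: bash
      run: docker build -t "${{ inputs.image-tag }}" .
    - name: Push image
      shell: bash
      run: docker push "${{ inputs.image-tag }}"
```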
### GitHub Script
:::caution
Usage of inline GitHub Scripts is difficult to maintain. They are acceptable inside dedicated actions, but not recommended as part of workflows.
:::
Some composite actions may utilize the [github-script](https://github.com/actions/github-script) action, resulting in inline Node.js code that lacks any syntax highlighting and can make the composite action YAML file unnecessarily long. A solution for this is to create separate Node.js modules, invoke them within the inline code supplied to `github-script`, and supply the references created by `github-script` and contexts available in GHA workflows to those modules:
```js
const actionContext = require('./actions/lib/actioncontext.js')(this, context, core, github, ${{ toJSON(github) }}, ${{ toJSON(inputs) }}, ${{ toJSON(steps) }})
const deployment = require('./actions/lib/deployment.js')(actionContext)
deployment.newDeployment(JSON.parse(`${{ inputs.stages }}`))
```
## Recommendation
- Use a private repository for reusable GitHub workflows (e.g. `acme/github-workflows`)
- Use GitHub Enterprise to support approval steps
- Use Organization `acme/.github` repository with starter workflows
- Use Cloud Posse’s existing workflows to get started quickly
- Use Cloud Posse’s public github actions
- Use a private GitHub actions repository specific to your organization (e.g. `acme/github-actions`)
- Use a private template repository to make it easy for developers to initialize new projects
- Adjust `webhook_startup_timeout` in the chart. This setting is used for automatically scaling
back replicas. The recommended default is 30 minutes, but no one size fits all. Here's further
documentation for your consideration: [scaling runners](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md)
## Related
- [Decide on Strategy for Continuous Integration](/layers/software-delivery/design-decisions/decide-on-strategy-for-continuous-integration)
- [Decide on Self-Hosted GitHub Runner Strategy](/layers/software-delivery/design-decisions/decide-on-self-hosted-github-runner-strategy)
- [GitHub Actions](/learn/tips-and-tricks/github-actions)
---
## Decide on Hot-fix or Rollback Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
Releasing a change is when it’s made available to a user or subset of users. In an ideal world, deployments should not equal releases.
## Considerations
#### Feature Flagging
This strategy involves using a feature-flagging service such as LaunchDarkly to toggle features without redeploying code to an environment. If a new feature does not work, it can be toggled off using the LaunchDarkly API without having to author a new release and redeploy it through the CD pipeline. If the feature flags are configured properly, this can be an effective solution, since changes do not have to be authored and passed through the CD pipeline, shortening the mean time to restore.
#### Rolling Forward
This strategy involves authoring a patch release in order to disable a problematic feature or author a bug-fix, using the CD pipeline. This can lead to a longer mean time to restore when compared to feature flagging, since fixes or feature disablements need to be authored and passed through the CD pipeline.
#### Release branches
If release branches are utilized, then any bug-fix commits need to be pushed to the release branch, producing a new patch version under the minor version corresponding to that release branch (for example, if the release branch is 1.1.x and the 1.1.0 tag had a bug, then a bug-fix commit can be pushed to that release branch and tagged as 1.1.1). These changes then need to pass through the CD pipeline and be deployed to the live environment. This is very similar to the _Rolling Forward_ strategy, with the only difference being that releases are not cut directly from the trunk.
## Related
- [Decide on Release Promotion Strategy](/layers/software-delivery/design-decisions/decide-on-release-promotion-strategy)
---
## Decide on how ECS apps are deployed
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
We need to decide on the methodology to use when deploying applications to ECS.
Think of Helm charts in Kubernetes as analogous to using Terraform modules for ECS tasks.
#### Deploy using Spacelift
- A GitHub Action triggers a Spacelift run, which runs atmos and terraform
- Spacelift makes use of Rego policies
- Optional manual confirmation
#### Deploy using GitHub Actions without Spacelift
- A GitHub Action runs atmos and terraform directly
- Auto-deploy on merges
- Auto-deploy on manually cut releases
---
## Decide on Kubernetes Application Artifacts
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
We prefer strategies that ship the application code (e.g. Docker images) together with the application configuration (e.g. everything needed to run the application on the platform, such as manifests, IAM roles, etc.)
**Application-specific Infrastructure Considerations**
- IAM roles, SNS topics, etc.
- Does the Terraform code live alongside the apps or in the infrastructure monorepo? (Our preference is alongside the apps.)
- When should these infrastructure changes roll out? E.g. before or after application changes.
- If the resources will be shared amongst services, then we should probably not take this approach for those dependencies and instead move them to shared infrastructure, since their lifecycle is not coupled to a single application.
**Application Configuration Considerations**
- Raw manifests
- Helm charts
- Kustomize
---
## Decide on Maintenance Page Solution
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
When your sites go down, we need to be able to communicate issues to customers more gracefully than with a generic “502 Bad Gateway” message.
:::info
Specifically, this decision relates to services behind an ALB. CloudFront and S3 are out of scope.
:::
## Solution
We recommend deploying a static maintenance page. The industry best practice is to host the downtime page on a cloud provider that does not share infrastructure with your primary cloud provider. E.g. S3 is not recommended, as even S3 has gone down. That said, using a separate cloud provider is a micro-optimization for a very narrow set of failure scenarios.
A related consideration is how the maintenance page will be activated.
### Considered Options
There are a few options:
### Option 1
Use Route 53 health checks. Cloud Posse does not recommend this because poorly implemented DNS clients may cache the downtime host.
### Option 2
Use CloudFront to dynamically redirect to the downtime page using an Origin Group with failover.
Here’s a simple example using `terraform` to provision a maintenance page on Cloudflare.
- [https://github.com/adinhodovic/terraform-cloudflare-maintenance](https://github.com/adinhodovic/terraform-cloudflare-maintenance)
### Option 3 (Recommended)
Use ALB with [fixed response](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#fixed-response-actions)
- Use `fixed-response` with iframe (not ideal) to S3 static site
- Use `fixed-response` populated with HTML from `file` with inline CSS, SVGs, etc. and no external dependencies (if possible)
- Add GA code for analytics
---
## Decide on Pipeline Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem Statement
Teams need a release engineering process to help QA and developer teams operate efficiently. Namely, QA needs a way to validate changes in QA environments before releasing them to staging or production. Changes to production require approval gates, so only authorized persons can release to production. And if changes need to be made to the running production release, those need to be performed via hotfixes that require a special CI/CD and release workflow. The more services you operate, the more important it is that workflows are very DRY and not copied between repositories, since copying makes maintenance difficult.
## Prerequisites
Before implementing the pipeline strategy, the following should be in place:
- An inventory of the applications for migration to the new pipelines
- Cloud Posse access to the repositories
- All the GitHub Action runners deployed
## High-level Approach
:::info
The following is our Kubernetes-centric approach with GitHub Actions. Similar strategies can be implemented for other platforms, but would require different techniques for integration testing and deployment.
:::
Cloud Posse’s turn-key implementation is an approach that provides QA environments, approval gates, release deployments, and hotfixes in a way that applications can utilize with minimal effort and minimal duplication.
### Predefined Workflows

**Feature branch workflow**
Triggered on changes in a pull request that targets the `main` branch. It performs CI (build and test) and CD (deploy into _Preview_ and/or _QA_ environments) jobs.

**Main branch workflow**
Triggered on each commit to the `main` branch to integrate the latest changes and create/update the next draft release. It performs CI (build and test) and CD (deploy into the `Dev` environment) jobs.

**Release workflow**
Triggered when a new release is published. The workflow promotes the artifacts (Docker images) that were built by the “_Feature branch workflow_“ to the release version and deploys them to the `Staging` and `Production` environments with approval gates. In addition, the workflow creates a special `release/{version}` branch that is required for the hotfix workflows.

**Hotfix branch workflow**
Triggered on changes in a pull request that targets any `release/{version}` branch. It performs CI (build and test) and CD (deploy into the `Hotfix` environment) jobs.

**Hotfix release workflow**
Triggered on each commit to a `release/{version}` branch to integrate new hotfix changes. It performs CI (build and test) and CD (deploy into the `Production` environment with approval gates) jobs. In addition, it creates a new release with an incremented patch version and opens a regular PR targeting the `main` branch to reintegrate the hotfix with the latest code.
The implementation should use custom GitHub Actions and reusable workflows to keep the code DRY and provide a clear definition for each `workflow/job/step/action`.
Integration with the GitHub UI should visualize the release workflow’s progress and state.
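As a sketch of how an application repository might consume such predefined workflows, a thin caller workflow can reference a reusable workflow from a shared repository. The repository, file, and input names below are illustrative assumptions:

```yaml
# .github/workflows/feature-branch.yml in an application repo
# (repository, workflow file, and input names are hypothetical)
name: Feature Branch
on:
  pull_request:
    branches: [main]

jobs:
  ci:
    # Reference a reusable workflow from a shared workflows repo
    uses: acme/github-workflows/.github/workflows/ci.yml@main
    with:
      environment: preview
    secrets: inherit
```

This keeps the per-repo file to a few lines while the real logic lives in one place.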
## **Goals**
The top 3 goals of our approach are to...
1. Make it very easy for developers to onboard new services
2. Ensure it’s easy for developers to understand the workflow and build failures
3. Leverage GitHub UI, so it’s easy to understand what software is released by an environment
## **Key Features & Use Cases**
What we implement as part of our approach and the specific use cases we address are explained below.
### CI testing based on the Feature branch workflow
- A developer creates a PR targeting the `main` branch. GHA will build and run tests on each commit. The developer should have the ability to deploy/undeploy the changes to the `Preview` and/or `QA` environment by adding/removing specific labels in the PR GitHub UI. When the PR is merged or closed, GHA should undeploy the code from any `Preview`/`QA` environments where it is deployed.
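The label-driven deploy/undeploy behavior described above can be wired up with `pull_request` label events. A minimal sketch, with an assumed `deploy` label and placeholder steps:

```yaml
on:
  pull_request:
    branches: [main]
    types: [labeled, unlabeled, synchronize, closed]

jobs:
  deploy-preview:
    # Run only while the PR carries the "deploy" label
    if: contains(github.event.pull_request.labels.*.name, 'deploy')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "build, test, and deploy to the preview environment"
```

A companion job keyed on the `closed` event (or the `unlabeled` event) would perform the undeploy.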
### CI Preview Environments
- Preview environments are unlimited, ephemeral environments running on Kubernetes. When a PR targeting the `main` branch is labeled with the `deploy` label, it will be deployed into a new preview environment. If a developer needs to test the integration between several services, they can deploy those apps into the same preview environment by creating PRs from identically named branches (e.g. `feature/add-widgets`).
- Preview environments by convention expect that all third-party services (databases, message buses, caches, etc.) are deployed from scratch in Kubernetes as part of the environment and removed on PR close.
- The developer is responsible for defining third-party services and orchestrating them in Kubernetes (e.g. with [Operators](https://operatorhub.io/)).
### CI QA Environments
- QA environments are a discrete set of static environments running on Kubernetes with preprovisioned third-party services. They are similar to preview environments, except that the environments are shared by QA engineers to verify PR changes in a “close to real life” environment. A QA engineer can deploy/undeploy PR changes to one of the QA environments by adding or removing the `deploy/qa{number}` label.
- If several PRs in one repo have the same `deploy/qa{number}` label, the latest deployment (commit & push) will override the others.
- It is the responsibility of QA engineers to avoid this conflict. The GitHub environments UI is useful for seeing what is deployed.
### Test commits into the main branch
- On each commit to the `main` branch, the “_Main branch workflow_” triggers. It will build and test the latest code from the `main` branch, create or update the latest draft release, and deploy the code to the `Dev` environment.
- If the commit was made by merging a PR, then the PR title/description is added to the release changelog.
### Bleeding Edge Environment on Dev
- The “dev” environment is a single environment with provisioned third-party services. The environment should be approximately equivalent to `Staging` and `Production` environments. Developers and QA engineers need it to perform integration testing and validate the interaction between the latest version of applications and services before cutting a release. This is why it’s called the “bleeding edge.”
### Automatic Draft Releases Following Semver Convention
- On each commit to the `main` branch, GHA should create a new draft release or update the existing one. The release should have an auto-generated changelog based on commit messages and PR titles/descriptions.
- Developers can manage sections of the changelog by adding specific labels to the PR.
- Labels are also used to define the major/minor semver increment for the release (minor increments by default).
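One common way to implement label-driven draft releases (not prescribed by this document) is the release-drafter GitHub Action. A minimal config sketch, with illustrative labels and templates:

```yaml
# .github/release-drafter.yml (labels and templates are illustrative)
name-template: 'v$RESOLVED_VERSION'
tag-template: 'v$RESOLVED_VERSION'
categories:
  - title: 'Features'
    labels: ['feature', 'enhancement']
  - title: 'Bug Fixes'
    labels: ['fix', 'bugfix']
version-resolver:
  major:
    labels: ['major']
  minor:
    labels: ['minor']
  default: minor   # minor increments by default, as described above
template: |
  ## What's Changed
  $CHANGES
```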
### Automated Releases with Approval Gates
- When a developer (or release manager) decides to issue a new release, they just need to publish the _Draft Release_, which triggers the “_Release workflow_“. The workflow should create a new “release branch” `release/{version}`, promote the Docker image to the release version, and then deploy it to the `Staging` and `Production` environments with approval gates. A developer needs to approve the deployment to the `Staging` environment, wait for the deployment to complete successfully, and then repeat the same for the `Production` environment.
### Staging Environment
- `Staging` is a single environment with provisioned third-party services. The environment should be approximately equivalent to the `Production` environment. Developers and QA engineers use it to perform integration testing, run migrations, and test deploy procedures and the interactions of the latest released versions. So while the `Dev` environment operates on the latest commit to `main`, the `Staging` environment operates on the latest release.
### Production Environment
- `Production` is a single environment with provisioned third party services used by real users. It operates on releases that have been promoted from `Staging` after approval.
### Hotfix Pull Request workflow
- In the case where there is a bug in the application running in the `Production` environment, the developer needs to create a hotfix PR.
- A hotfix PR should target the “_release branch_” `release/{version}`. GHA should build and run tests on each commit. The developer should have the ability to deploy/undeploy the changes to the `Hotfix` environment by adding/removing specific labels in the PR GitHub UI. When the PR is merged or closed, GHA should undeploy the code from the `Hotfix` environment.
### Hotfix Environment
- `Hotfix` is a single environment with provisioned third-party services. The environment should be approximately equivalent to `Production` environment. Developers and QA engineers need it to perform integration testing, migrations, deploy procedures and interactions of the hotfix with other services.
- If there are several hotfix PRs in one repo, deployments to the `Hotfix` environment will conflict; the latest deploy will be the one running in the `Hotfix` environment.
- It is the responsibility of developers and QA engineers to avoid these conflicts.
### Hotfix Release workflow
- On each commit to a “_release branch_” `release/{version}`, the “_Hotfix release workflow_” triggers. It will build and test the latest code from the branch, create a new release with an incremented patch version, and deploy it, with an approval gate, to the `Production` environment.
- Developers should also take care of reintegrating the hotfix into the `main` branch, for which a reintegration PR will be created automatically.
### Deployments
- All deployments are by default performed with `helmfile` on Kubernetes clusters.
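A minimal sketch of what such a helmfile release might look like. The chart, app, and environment-variable names are hypothetical:

```yaml
# helmfile.yaml for a single application (names are illustrative)
releases:
  - name: my-app
    namespace: {{ requiredEnv "NAMESPACE" }}   # e.g. a per-PR preview namespace
    chart: ./charts/my-app
    values:
      - image:
          repository: {{ requiredEnv "IMAGE_REPO" }}
          tag: {{ requiredEnv "IMAGE_TAG" }}    # set by the CI workflow
```

The CI workflow exports the image and namespace variables and runs `helmfile apply` against the target cluster.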
### Reusable workflows and GHA
- All workflows and custom GitHub Actions should be reusable and contain no repository-specific references.
- “Reusable workflows in a private repo” pattern
- Reusable workflows should be stored in a separate repo and copied on change across all repositories by a special workflow, following the “Reusable workflows in private organization repositories” pattern.
## Considerations
The following considerations are required before we can begin implementing the turnkey GitHub Action workflows.
### Supported Environments
The following key decisions need to be made as part of this design decision:
- Which environments are relevant to your organization? (e.g. do you need the Preview/QA environments or is Dev/Staging/Prod sufficient?)
- Preview environments (not all applications are suitable for this)
- QA environments
- Dev/Staging/Production environments
- Hotfix environment
### Approval Gate Strategy
GitHub Enterprise is required to support native approval gates on deployments to environments. Approval gates support a permissions model to restrict who is allowed to approve a deployment.
Without GitHub Enterprise, we’ll need to use an alternative strategy using [workflow_dispatch](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) to manually trigger deployments using the GitHub UI.
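To illustrate both approaches: with native gates, a job references an environment whose required reviewers must approve before the job runs; without them, a `workflow_dispatch` trigger lets an authorized person start the deployment manually from the GitHub UI. Environment and input names are illustrative:

```yaml
# With native gates: reviewers configured on the "production"
# environment must approve before this job runs
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: echo "deploy the release"

# Alternative without native gates: a manually triggered workflow
# on:
#   workflow_dispatch:
#     inputs:
#       version:
#         description: Release version to deploy
#         required: true
```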
### GitHub Repo Strategy for Applications
We’ll need to know what strategy you use for your applications: e.g. monorepo, polyrepo, or a hybrid approach involving multiple monorepos.
### GitHub Repo for Shared Workflows
What repo do you want to use to store the shared GitHub action workflows? e.g. we recommend calling it `github-action-workflows`
GitHub Enterprise users will have a native ability to use private-shared workflows.
Non-GitHub Enterprise users will need to use a workaround, which involves cloning the shared workflows repo before using them.
### GitHub Repo for Private GitHub Actions
What repo do you want to use for your private GitHub actions?
For GitHub Enterprise users we recommend using one repo per private GitHub Action so that they can be individually versioned. We’ll need to know what convention to use. Cloud Posse uses `github-action-$name` while we’ve seen some organizations use patterns like `$name.action` and `action-$name`. We like the `github-action-$name` convention because it follows the Terraform convention for modules and providers (e.g. `terraform-provider-aws`)
We recommend a monorepo for non-GitHub enterprise users. If we take this approach, we’ll need to clone the private GitHub Actions repo as part of each workflow. We’ll need to know what this repo is called. We recommend calling it `github-actions`. Alternatively, if your company uses a monorepo strategy for
## **Out of Scope**
**Automated Rollbacks**
Automated triggering of rollbacks is not supported. Manually initiated, automatic rollbacks are supported, but should be triggered by reverting the pull request and using the aforementioned release process.

**Provisioning Environments**
Provisioning Kubernetes clusters and third-party services for any environment should be performed as a separate milestone. We expect Kubernetes credentials for deployments to already be available.

**Defining Docker-based Third-party Services**
Third-party services running in Docker should be declared individually per application. This is the developers’ responsibility.

**Key Metrics & Observability**
Monitoring CI pipelines and tests for visibility (e.g. with Datadog CI) is not factored in but can be added at a later time.
[https://www.datadoghq.com/blog/datadog-ci-visibility/](https://www.datadoghq.com/blog/datadog-ci-visibility/)
## **Open Issues & Key Decisions**
[Decide on Database Seeding Strategy for Ephemeral Preview Environments](/layers/software-delivery/design-decisions/decide-on-database-seeding-strategy-for-ephemeral-preview-enviro)
[Decide on Customer Apps for Migration](/layers/software-delivery/design-decisions/decide-on-customer-apps-for-migration)
[Decide on Seeding Strategy for Staging Environments](/layers/software-delivery/design-decisions/decide-on-seeding-strategy-for-staging-environments)
## **Design and Explorations Research**
Links to any supporting documentation or pages, if any
- [Continuous Delivery: Understand your Value Stream - Step 1](https://medium.com/@yaravind/continuous-delivery-understand-your-value-stream-step-1-e2955eaeba95)
- [Value Stream Management: Treat Your Pipeline as Your Most Important Product](https://devops.com/value-stream-management-treat-your-pipeline-as-your-most-important-product/)
- [Deployments approval gates with Github](https://docs.github.com/en/actions/managing-workflow-runs/reviewing-deployments)
- [Release workflow POC (Cloud Posse version)](https://github.com/cloudposse/example-github-action-release-workflow/pull/45)
- [Using environments for deployment on Github](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment)
- [https://github.community/t/reusable-workflows-in-private-organization-repositories/215009/43](https://github.community/t/reusable-workflows-in-private-organization-repositories/215009/43)
- [https://medium.com/@er.singh.nitin/how-to-share-the-github-actions-workflow-in-an-organization-privately-c3bb3e0deb3](https://medium.com/@er.singh.nitin/how-to-share-the-github-actions-workflow-in-an-organization-privately-c3bb3e0deb3)
- [GitHub Actions](/learn/tips-and-tricks/github-actions) is Cloud Posse’s own reference documentation which includes a lot of our learnings
## **Security Risk Assessment**
The release engineering system consists of two main components: _GitHub Actions Cloud_ (a.k.a. _GHA_) and _GitHub Actions Runners_ (a.k.a. _GHA-Runners_).
The _GHA-Runners_ can be cloud-provided or self-hosted.
_Self-hosted GHA-Runners_ are executed on EC2 instances under the control of an autoscaling group in the dedicated _Automation_ AWS account.
On an EC2 instance, the bootstrapping GHA-Runner registers itself on GitHub with a **Registration token (1)**. From that moment, _GHA_ can run workflows on it.
When a new _Workflow Run_ is initialized, GHA issues a new unique **Default token (2)**. That token is used to authenticate against the GitHub API and interact with it. For example, a _Workflow Run_ uses it to pull source code from a GitHub repository **(3)**.
The **Default token** is scoped to the repository (or another GitHub resource) that was the source of the triggering event. On the provided diagram, it is the _Application Repository_.
If a workflow needs to pull source code from another repository, we have to use a _Personal Access Token (PAT)_, which must be issued beforehand. On the diagram, this is **PAT PRIVATE GHA (4)**, which we use to pull the organization's private actions used as steps in GHA workflows.
Once the GHA-Runner has pulled the _Application_ source code and _Private Actions_, it is ready to perform real work: build Docker images, run tests, deploy to specific environments, and interact with GitHub for a better developer experience.
To interact with AWS services, a _Workflow Run_ assumes the **CICD (5) IAM role**, which grants permissions to work with ECR and to assume **Helm (5) IAM roles** in other accounts. The **Helm IAM role** is used to **authenticate (6)** to a specific EKS cluster and deploy there. Assuming the **CICD IAM role** is possible only on _Self-hosted GHA-Runners_, as EC2 instance credentials are used for the initial interaction with AWS.
The **Default token** fits all needs except one: creating a _Hotfix Reintegration Pull Request_. For that functionality we need to implement a workaround. The diagram shows one possible workaround: using a **PAT to Create PRs (7)** with wider permissions.
### Registration token
The Registration token is required only to register/deregister a _Self-hosted GHA-Runner_ on GitHub. The token allows attaching the _Self-hosted GHA-Runner_ at the organization or single-repository scope. If a _Self-hosted GHA-Runner_ is scoped to the organization level, any repository in the org can run its workflows on it.
- [github-runners](/components/library/aws/github-runners/)
- [https://docs.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners](https://docs.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners)
### Default Github Token
The token is generated on _Workflow Run_ initialization, so it is unique per _Workflow Run_. The token is scoped to the repository that triggered the _Workflow Run_.
By default, the token can be granted `permissive` or `restricted` scopes. The differences are shown in the table below. You can select which of the default scopes to use: for per-repository settings, follow [this documentation](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository); for all repositories in the organization, follow [this documentation](https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#setting-the-permissions-of-the-github_token-for-your-organization).
| Scope               | Default access (permissive) | Default access (restricted) |
| ------------------- | --------------------------- | --------------------------- |
| actions | read/write | none |
| checks | read/write | none |
| contents | read/write | read |
| deployments | read/write | none |
| id-token | none | none |
| issues | read/write | none |
| metadata | read | read |
| packages | read/write | none |
| pages | read/write | none |
| pull-requests | read/write | none |
| repository-projects | read/write | none |
| security-events | read/write | none |
| statuses | read/write | none |
We recommend using the `restricted` scope by default. GHA workflows can [explicitly escalate permissions](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#how-the-permissions-are-calculated-for-a-workflow-job) if that’s required for the process.
All workflows implemented in the POC explicitly request escalation of permissions from the `restricted` scope, as shown in the following table.
| Scope | Default access (restricted) | Pull Request Workflow | Bleeding edge Workflow | Release Workflow | Hotfix Pull Request Workflow | Hotfix workflow |
| ------------- | ------------------------------- | --------------------- | ---------------------- | ---------------- | ---------------------------- | --------------- |
| contents | read | read | read/write | read/write | read | read/write |
| deployments | none | read/write | none | none | read/write | none |
| metadata | read | read | read | read | read | read |
| pull-requests | none | read/write | none | none | read/write | read/write |
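The escalation itself happens in the workflow YAML. As a minimal sketch (the workflow, job, and step names are illustrative, not taken from the POC), a pull-request workflow that needs to create deployments and comment on PRs would declare:

```yaml
# Illustrative pull-request workflow escalating from the `restricted` defaults
name: pull-request
on:
  pull_request:

# Explicitly request only the scopes this workflow needs;
# every scope not listed here falls back to `none`
permissions:
  contents: read
  deployments: write
  pull-requests: write

jobs:
  preview:
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v4
```

Note that declaring a `permissions` block resets all unlisted scopes to `none`, so it also works as a way to drop permissions even when the organization default is `permissive`.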
- [https://docs.github.com/en/actions/security-guides/automatic-token-authentication](https://docs.github.com/en/actions/security-guides/automatic-token-authentication)
- [github-runners](/components/library/aws/github-runners/)
- [https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#setting-the-permissions-of-the-github_token-for-your-organization](https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#setting-the-permissions-of-the-github_token-for-your-organization)
- [https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository)
- [https://docs.github.com/en/rest/overview/permissions-required-for-github-apps#permission-on-repository-hooks](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps#permission-on-repository-hooks)
### Private Github Actions PAT
Having an additional PAT is a necessary evil for sharing the private GitHub Actions library.
The only way to use a private GitHub Action is to pull it from a private repository and reference it with a local path.
It is impossible to use the _Default GitHub token_, as it is scoped to a single repo - [read more](https://github.com/actions/checkout#checkout-multiple-repos-private)
To get this PAT with the minimal required permissions, follow these steps:
1. Create a technical user on GitHub (e.g. `bot+private-gha@example.com`)
2. Add the user to the `Private Actions` repository with 'read-only' permissions (`https://github.com/{organization}/{repository}/settings/access`)
3. Generate a PAT for the technical user with that level of permissions: [https://github.com/settings/tokens/new](https://github.com/settings/tokens/new)
4. Save the PAT as an organization secret named `GITHUB_PRIVATE_ACTIONS_PAT` (`https://github.com/organizations/{organization}/settings/secrets/actions`)
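In a workflow, the PAT is then used with `actions/checkout` to clone the private actions repository to a local path, which subsequent steps reference. A minimal sketch (the repository and action names are hypothetical):

```yaml
# Check out the private actions repo with the technical user's PAT,
# then reference an action from it by local path
- uses: actions/checkout@v4
  with:
    repository: acme/private-actions          # hypothetical private repo
    token: ${{ secrets.GITHUB_PRIVATE_ACTIONS_PAT }}
    path: ./.github/actions/private-actions

- uses: ./.github/actions/private-actions/build-image   # hypothetical action
```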
- [https://github.com/actions/checkout#checkout-multiple-repos-private](https://github.com/actions/checkout#checkout-multiple-repos-private)
- [https://github.blog/changelog/2022-01-21-share-github-actions-within-your-enterprise/](https://github.blog/changelog/2022-01-21-share-github-actions-within-your-enterprise/)
- [https://github.com/marketplace/actions/private-actions-checkout#github-app](https://github.com/marketplace/actions/private-actions-checkout#github-app)
### AWS Assume Role Sessions
A detailed description of the interaction with the AWS API is out of scope for this POC. We just want to mention that, by default, a _Self-hosted GHA-Runner_ has the same access to AWS resources as the instance profile role attached to the GHA-Runner EC2 instances. The minimal requirement is permission to assume the 'CICD' role and, through it, any 'Helm' roles to get access to EKS clusters for deployment.
### Authentication on EKS with IAM
A detailed description of authentication on EKS with IAM is out of scope for this POC. The only thing we’d like to mention is that the workflow will have the same level of permissions on EKS as the 'Helm' role does.
### `Create PR` Problem
The final step in the `Hotfix` workflow is to create a PR into the `main` branch to reintegrate the hotfix changes with the latest code in `main`.
The problem is that `Creating and approving PRs` is a separate permission that is disabled by default, and it seems to be a best practice to leave it that way.
That permission can be granted on the same pages as the default scopes for the _Default token_ ([repo](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#preventing-github-actions-from-creating-or-approving-pull-requests) or [org](https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#preventing-github-actions-from-creating-or-approving-pull-requests) level).
#### Workarounds:
1. Enable `Creating and approving PRs` at the repo (or even org) level and use the _Default GitHub Token_ to create the PR
2. Create a new technical GitHub user, permit it to create PRs, issue a PAT under that user, and use it for PR creation. This is close to what we did for the _Private Actions_, but with much wider access.
3. Skip the automatic PR creation feature and rely on developers to create PRs from the GitHub UI
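As a sketch of workaround 2 (the secret name and branch names are assumptions, and the third-party action shown is one common choice, not necessarily what the POC uses), the final step of the `Hotfix` workflow could create the reintegration PR with the technical user's PAT:

```yaml
# Create the hotfix reintegration PR using a technical user's PAT
- uses: actions/checkout@v4
- name: Create reintegration PR
  uses: peter-evans/create-pull-request@v6
  with:
    token: ${{ secrets.PR_CREATION_PAT }}   # assumed secret: the technical user's PAT
    branch: hotfix/reintegrate              # head branch for the PR
    base: main
    title: "Reintegrate hotfix into main"
```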
### Learn more:
- https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#stealing-the-jobs-github_token
---
## Decide on Release Engineering Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
A consistent branching strategy is needed so that the same workflow is shared across all projects.
## Solution
### GitFlow Strategy
Gitflow is a branching strategy that allows for parallel development by creating separate branches for features and releases. This strategy is considered a bit complicated and advanced but can be beneficial for larger, more complex projects.
The Gitflow branching model consists of the following branches:
- **Master Branch:** Represents the production-ready code and is typically only updated when a new release is made.
- **Develop Branch:** Represents the latest development code and serves as a parent branch for feature branches.
- **Feature Branches:** Created from the develop branch, feature branches are used to develop new features or functionality. Once the feature is complete, it is merged back into the develop branch.
- **Release Branches:** Created from the develop branch, release branches are used to prepare for a new production release. Any bug fixes and final testing are done on this branch before being merged back into both the develop and master branches.
- **Hotfix Branches:** Similar to release branches, hotfix branches are created from the master branch to address any critical bugs or issues discovered in the production code. Once the hotfix is complete, it is merged back into both the master and develop branches.
The benefit of Gitflow is that it provides a clear path for changes to be made to the codebase, ensuring that production-ready code is only released from the master branch. It also allows for multiple developers to work on features and bug fixes in parallel without disrupting the development workflow.
### Trunk-Based Strategy
Trunk-based development is a branching strategy that allows for continuous integration and deployment. This strategy is considered simpler and more lightweight than Gitflow, but may not be suitable for larger, more complex projects.
The trunk-based branching model consists of the following branches:
- **Main (_Trunk_) Branch:** Represents the latest development code and serves as a parent branch for feature branches.
- **Feature Branches:** Created from the _Trunk_ branch, feature branches are used to develop new features or functionality. Once the feature is complete, it is merged back into the _Trunk_ branch.
---
## Decide on Release Promotion Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
### Problem and Context
We need to control when a release is deployed to a given stage (E.g. dev, staging, production).
We must decide how releases will be promoted from staging to production.
How that will be accomplished depends on whether GitHub Enterprise features are available and whether it’s possible to use semantic versioning.
### Assumptions
- Auto deployment to the `dev` stage will be triggered upon every commit to the default branch (e.g. `main`)
### Options
#### Option A: Automatically Deploy to Staging on Every Release, Use GitHub Approval Steps for Production
##### Pros
- Natively supported by GitHub
- Environment protection rules ensure RBAC restricts who can approve deployments
##### Cons
- Requires GitHub Enterprise, as GitHub Approvals, GitHub Environment protection rules (and Environment Secrets) are only available in GitHub Enterprise.
#### Option B: Automatically Deploy to Staging on Every Release, Use Manual GitHub Action Workflow to Production Deployments
##### Pros
- Does not require GitHub Enterprise
- Staging always represents the latest release
##### Cons
- No environment protection rules; anyone who can run the workflow can deploy. Mitigated by customizing the workflow with business logic to restrict it, but not supported by Cloud Posse today.
#### Option C: Use Manual GitHub Action Workflow for Staging and Production Deployments
##### Pros
- Does not require GitHub Enterprise
- Full control over when every stage is updated
##### Cons
- More manual operations to promote a release
- No environment protection rules; anyone who can run the workflow can deploy. Mitigated by customizing the workflow with business logic to restrict it, but not supported by Cloud Posse today.
### Out of Scope
- Tightly coupled multi-service application deployments
### Related Design Decisions
- [Decide on Database Schema Migration Strategy](/layers/data/design-decisions/decide-on-database-schema-migration-strategy)
---
## Decide on Repositories Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
Deciding on a repository strategy for your codebase is a crucial choice because it can significantly
impact your development processes, collaboration effectiveness, tooling, and architectural decisions.
There are two main strategies for organizing source code repositories:
[monorepo](https://en.wikipedia.org/wiki/Monorepo) and `polyrepo`.
In between, there are hybrid strategies:
- `multi monorepos`
- `monorepo as read-only proxy`
- `polyrepos & monorepos`
The hybrid strategies inherit the gains and losses of the two main ones.
That's why we focus on the pros and cons of the main repository structures.
## Polyrepo
In a polyrepo structure, each project, service, or library has its own repository.
### Benefits
#### Isolation
Each project is isolated from others, reducing the risk of a change in one project inadvertently affecting others.
#### Scalability
Polyrepos can scale more effectively as each repository can be managed separately.
#### Simple and fast CI/CD Pipelines
CI/CD pipelines contain less logic and run faster because they only have to process the relevant parts of the codebase.
### Challenges
#### Code Duplication
Code that's shared between projects might have to be duplicated in each repository.
#### Increased Management Overhead
Managing multiple repositories can be more complex and time-consuming.
#### Complex Dependency Management
If libraries have interdependencies, it can be harder to manage versioning across multiple repositories.
## Monorepo
Monorepos hold all of an organization's code in a single repository. All projects, libraries, and services
live together.
### Benefits
#### Code Sharing and Reuse
With all the code in one place, it's easy to share and reuse code across multiple projects.
This can lead to more consistent code, reduce duplication, and enhance productivity.
#### Unified Versioning
A monorepo has a single source of truth for the current state of the system.
#### Collaboration and Code Review
Developers can work together on code, have visibility of changes across the entire project, and
perform code reviews more effectively.
#### Simplified Dependency Management
All projects use the same version of third-party dependencies, which can make managing those dependencies easier.
### Challenges
#### Scalability
As a codebase grows, it can become more challenging to manage and navigate a monorepo.
#### Complex and slower CI/CD Pipelines
Continuous integration and deployment can become slower as your codebase grows because the pipeline
may need to compile and test the entire codebase for every change.
CI/CD pipelines for monorepos are complex and require special tooling such as
[Bazel](https://bazel.build/), [Pants](https://www.pantsbuild.org/), [Please](https://please.build/) or [Buck](https://buck2.build/).
#### Risk of Breaking Changes
A small change in one part of the codebase might break something else unexpectedly since everything is interconnected.
#### Dummy Versioning
Whenever the entire monorepo is tagged, it automatically assigns this new tag to all code inside, including
all hosted libraries. This could lead to the release of all these libraries under the new version number,
even if many of these libraries have not been updated or modified in any way.
## Recommendation
We recommend using `Polyrepo` as the basic repository organization strategy because it leads to faster development cycles,
simplifies CI/CD pipelines, and does not require additional tooling.
Active usage of preview (ephemeral) and QA environments, test automation,
deployment to multiple stages (dev -> staging -> prod) during the release workflow, and implementation of particular deployment
patterns (Blue/Green, Canary, Rolling) allow catching integration issues before the code reaches production.
The repository strategy does not have to be the same across the whole organization - different teams can
use different patterns - but that adds complexity to the CI/CD pipelines and reduces reusability.
That's why we recommend having a consistent repository strategy, at least at the team level.
## References
- [Monorepo vs. polyrepo](https://github.com/joelparkerhenderson/monorepo-vs-polyrepo)
- [From a Single Repo, to Multi-Repos, to Monorepo, to Multi-Monorepo](https://css-tricks.com/from-a-single-repo-to-multi-repos-to-monorepo-to-multi-monorepo/)
- [Monorepo vs Polyrepo](https://earthly.dev/blog/monorepo-vs-polyrepo/)
- [Polyrepo vs. Monorepo - How does it impact dependency management?](https://www.endorlabs.com/blog/polyrepo-vs-monorepo-how-does-it-impact-dependency-management)
- [Monorepo Vs Polyrepo Architecture: A Comparison For Effective Software Development](https://webo.digital/blog/monorepo-vs-polyrepo-architecture/)
---
## Decide on Seeding Strategy for Staging Environments
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
Longer-lived staging environments need a dataset that closely resembles production. If this dataset becomes stale, we’ll not be effectively testing releases before they hit production. Restoring snapshots from production is not recommended.
## Considerations
- Should contain anonymized users, invalid email addresses
- No CHD, PHI, PII must be contained in the database
- The scale of data should be close to the production database
- Snapshots from production are dangerous if not anonymized/scrubbed (imagine the risk of sending emails to everyone from your staging env)
- Fixtures are not recommended (scale of data for fixtures usually does not represent production)
- We recommend including the DBA in these conversations.
- QA teams want stable data so that they can run through their test scenarios
## Recommendations
:::caution
Cloud Posse does not have a turnkey solution for seeding staging environments
:::
- ETL pipeline scrubs the data and refreshes the database weekly or monthly. (e.g. AWS Glue, GitHub Action Schedule Job)
---
## Decide on Self-Hosted GitHub Runner Strategy
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Problem
GitHub Actions enables organizations to run their own runners (aka workers) free of charge. The only problem is that we have to host them, and there are a couple of ways of doing it, each with its own pros and cons. For the most part, we don’t consider it optional to deploy runners if using GitHub Actions.
## Considerations
We have prior art for the following strategies. The strategies are not mutually exclusive, but most often we see companies only implement one solution.
GitHub Actions is the CI/CD platform offered by GitHub. Its main advantages are that, firstly, it comes at no additional costs on top of the cost of seats for users within the GitHub organization, when using self-hosted Runners. Secondly, there is a thriving ecosystem of open source Actions that can be used within GitHub Action workflows.
### Self-hosted Runners on EC2
GitHub Actions Runners can be self-hosted on EC2 or on Kubernetes clusters. EC2 is probably the easiest approach to deploy and understand, but it’s the least optimal way of managing runners.
On EC2, the [Runner executable](https://github.com/actions/runner) can be installed using a user-data script, or baked into the AMI of the instance to be deployed to EC2. The runner can be deployed to an Auto Scaling Group (ASG), which usually presents a de-registration problem when the ASG scales in, however EventBridge and AWS Systems Manager can be utilized in tandem to have the runner de-register prior to being terminated (see: [Cloud Posse's github-runners component](https://github.com/cloudposse/terraform-aws-components/tree/95ade5b36b61d2432179399bd0e9fa8639eeb899/modules/github-runners) which has this implementation).
It’s also possible to use Spot Instances, and AWS SSM makes it easy to connect to the runners. A typical policy scales up whenever sustained CPU utilization is > 5% for ~5 minutes (i.e. the runner is doing anything) and scales down when CPU utilization is < 2% for 45 minutes (i.e. doing nothing), with a minimum of 1 node online. This is a good route if builds need to run on multiple kinds of architectures (e.g. ARM64 for M1), for which operating the Kubernetes node pools would be cumbersome.
If we go this route, we’ll also want to determine if we should deploy the Datadog Monitoring Agent on the nodes.
[https://github.com/cloudposse/terraform-aws-ec2-autoscale-group](https://github.com/cloudposse/terraform-aws-ec2-autoscale-group)
### Self-hosted Runners on Kubernetes
:::tip
This is our recommended approach
:::
Deploying these Runners on Kubernetes is possible using [actions-runner-controller](https://github.com/actions-runner-controller/actions-runner-controller). With this controller, a small-to-medium-sized cluster can house a large number of Runners (depending on their requested Memory and CPU resources), and these Runners can scale automatically using the controller’s `HorizontalRunnerAutoscaler` CRD. This has the benefit that it can scale to zero and leverages all the monitoring we have on the platform. This solution also allows for using a custom runner image without having to rebuild an AMI or modify a user-data script and re-launch instances, which would be necessary when deploying the Runners to EC2.
`actions-runner-controller` also supports several various mechanisms for scaling the number of Runners: `PercentageRunnersBusy` simply scales the Runners up or down based on how many of them are currently busy, without having to maintain a list of repositories used by the Runners, which would be the case in `TotalNumberOfQueuedAndInProgressWorkflowRuns`. The most efficient and recommended option for horizontal auto-scaling using the `actions-runner-controller`, however, is to [enable the controller’s webhook server](https://github.com/actions-runner-controller/actions-runner-controller#webhook-driven-scaling) and configure the `HorizontalRunnerAutoscaler` to scale on GitHub webhook events (for event name: `check_run`, type: `created`, status: `queued`). Note that the `actions-runner-controller` does not have any logic to automatically create the webhook configuration in GitHub, and hence, the webhook server needs to be exposed and configured manually in GitHub or using the GitHub API. If using `aws-load-balancer-controller`, ensure that within `actions-runner-controller` Helm chart, `githubWebhookServer.ingress.enabled` is set to `true`, and if using `external-dns`, set `githubWebhookServer.ingress.annotations` to include an `external-dns.alpha.kubernetes.io/alias`. Finally, configure the webhook in GitHub to match the hostname and port of the endpoint corresponding to the newly-created Ingress object.
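For reference, webhook-driven scaling with the controller looks roughly like the following (the resource names and replica counts are illustrative):

```yaml
# Scale the `org-runners` RunnerDeployment up by one runner
# for each queued check_run webhook event
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: org-runners-autoscaler
spec:
  scaleTargetRef:
    name: org-runners        # the RunnerDeployment to scale
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
    - githubEvent:
        checkRun:
          types: ["created"]
          status: "queued"
      amount: 1              # add one runner per matching event
      duration: "5m"         # scale back down after 5 minutes
```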
In general, Cloud Posse recommends using `actions-runner-controller` over EC2-based Runners due to the flexibility in runner sizing, choice of container image, and advanced horizontal scaling options. If however, EC2 Runners need to be utilized due to specific requirements such as a build environment on ARM-based instances, then that option is recommended as well.
[https://github.com/summerwind/actions-runner-controller](https://github.com/summerwind/actions-runner-controller)
### Repository-wide or Organization-wide Runners
Self-hosted GitHub Actions Runners can be registered either repository-wide or organization-wide. Runners registered for a specific repository can only run workflows for that repository, while Runners registered for an organization can run any workflow for any repository in the organization, provided that the labels selected by the `runs-on` attribute in the workflow definition match the labels of the runner. Repo-level runners have the benefit of a reduced scope for the PAT; however, pools are not shared across repos, so there are wasted resources.
In general, Cloud Posse recommends choosing Organization-wide Runners and ensuring horizontal scaling is configured to adequately respond to an influx of queued runs (see the previous section).
[https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups)
### Labeling Runners
Whenever a GitHub Actions Runner is registered, it provides a list of labels to GitHub. Then, workflow definitions can specify which Runners to run on.
For example, if the workflow syntax specifies `runs-on: [self-hosted, linux]`, then the runner must be registered with the label `linux`.
In another example, if two Runners are registered, one with the labels `linux`, `ubuntu`, and `medium`, and one with the labels `linux`, `ubuntu`, and `large`, and workflow A specifies `runs-on: [self-hosted, ubuntu]` and workflow B specifies `runs-on: [self-hosted, ubuntu, large]`, then:
- Workflow A can run on both the first runner and the second runner. It’ll run on whichever is available.
- Workflow B can only run on the second runner.
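With `actions-runner-controller`, the labels come from the `RunnerDeployment` spec. A sketch of the second runner from the example above (the organization name is a placeholder):

```yaml
# Runners registered by this deployment carry the labels
# `self-hosted` (implicit), `linux`, `ubuntu`, and `large`
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: large-runners
spec:
  replicas: 2
  template:
    spec:
      organization: acme     # placeholder org
      labels:
        - linux
        - ubuntu
        - large
```

A workflow then selects these runners with `runs-on: [self-hosted, ubuntu, large]`.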
Some advanced configurations may involve creating multiple `RunnerDeployment` CRDs (`actions-runner-controller`) which use different container images with different linux distributions or with different packages installed, then naming the labels accordingly. In general, Cloud Posse’s recommendation is to create meaningful runner labels that can be later referenced by developers writing GHA Workflow YAML files.
### Integration with AWS
The GitHub Actions Runners often need to perform continuous integration tasks such as write to S3 or push container images to ECR. With GitHub-hosted Runners this has historically been difficult but is now made possible [using GitHub’s support for OIDC](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect), which allows for creating a trust relationship between GitHub-hosted Runners and an IAM role in one of the organization’s AWS account (preferably a dedicated `automation` account). For `actions-runner-controller`, this has been historically possible for a longer time now on self-hosted GitHub Actions Runners running on `actions-runner-controller` using EKS cluster’s OIDC provider (see: [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html)). If using GHA Runners on EC2, an EC2 instance profile can be created, allowing the instances to assume an IAM role.
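A minimal sketch of the OIDC flow in a workflow (the role ARN and region are placeholders):

```yaml
# Exchange GitHub's OIDC token for temporary AWS credentials
permissions:
  id-token: write   # required to request the OIDC token
  contents: read

jobs:
  build:
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111111111111:role/cicd   # placeholder
          aws-region: us-east-1
      - run: aws sts get-caller-identity   # verify the assumed role
```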
## Related Decisions
- [Decide on IAM Roles for GitHub Action Runners](/resources/legacy/design-decisions/decide-on-iam-roles-for-github-action-runners)
- [Decide on Self-Hosted GitHub Runner Strategy](/layers/software-delivery/design-decisions/decide-on-self-hosted-github-runner-strategy)
- [Decide on Strategy for Continuous Integration](/layers/software-delivery/design-decisions/decide-on-strategy-for-continuous-integration)
- [Decide on GitHub Actions Workflow Organization Strategy](/layers/software-delivery/design-decisions/decide-on-github-actions-workflow-organization-strategy)
---
## Decide on Strategy for Continuous Integration
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
### Considerations
A strategy for Continuous Integration (e.g. container image builds and single-page application builds) needs to be adopted.
There are different levels of testing.
- Unit Tests
- Integration Tests
- Linting/Static Analysis Tests
- Security Tests
Centralized storage for test reports
## Options for Unit Tests
## Options for Integration Tests
The options available for integration testing will depend to some degree on the technology. For example, single-page applications that are typically deployed to S3/CloudFront might still be tested as dockerized apps for integration testing purposes.
### Option 1: Docker Composition with Test Script
### Option 2: Deployment to Cluster with Test Script
Deploy a preview environment and then test it. Note: not all services are suitable for previews.
### Option 3: Test script
## Options for Linting/Static Analysis Tests
- Superlinter
## Options for Security Tests
---
## Decide on Strategy for Developer Environments
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Versioning Considerations
- Use latest in the default branch?
- Use the latest release?
- Use any developer-specified release?
- How do we override it?
## Strategy Considerations
- Local
- Do your developer workstations/laptops have sufficient resources to build and launch all dependent services?
- Remote
- Hybrid
## Tool Considerations
1. Garden
2. Skaffold
3. Docker Compose
---
## Decide on Strategy for Managing and Orchestrating Secrets
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
We’ll inevitably need some way to store confidential key material and then access that from applications. Many options exist, and the best option depends on the circumstance. An often overlooked piece of the puzzle is how the key material will be created and updated, and how that ties into the broader release engineering workflow.
## Considerations
### Exfiltration
From a security perspective, we ideally want to avoid direct access to application-level secrets in the CI/CD pipeline.
This is achieved using one of two techniques: encrypted files, or Kubernetes operators with direct platform integration
with the secrets store. When not using Kubernetes, the operator approach is not suitable.
### Secrets for Applications
- Secrets storage (e.g. SSM, ASM, Vault, Encrypt Files)
- Secrets retrieval
- Operators (e.g. external-secrets-operator)
- Application Code Changes
- Environment variables
- Secrets orchestration (CRUD)
## Considered Options for GitHub Actions
For all options, we assume secrets will be manually created and updated.
### Option A: GitHub Secrets
During the CI or CD pipeline execution, it may be necessary to have access to secrets. For example, integration tokens
with third-party vendors, or tokens to retrieve files (e.g. from VCS or S3).
- GitHub Secrets exposed as environment variables
### Option B: AWS Secrets (SSM, ASM)
- GitHub Action reads from (SSM, ASM) storage leveraging GitHub OIDC to access secrets in AWS
- IAM Roles & Permissions for applications
- IAM Roles & permissions for teams
## Considered Options for ECS
### Option A: Use SSM
Natively supported by ECS. The `chamber` tool is convenient for updating values.
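For illustration, a typical `chamber` workflow for a service's SSM secrets might look like the following (the service name `example-app` and values are hypothetical):

```console
# Write a secret to SSM Parameter Store under the service's namespace
chamber write example-app DATABASE_URL "postgres://user:pass@host:5432/db"

# List the secrets stored for the service
chamber list example-app

# Launch the app with the secrets exported as environment variables
chamber exec example-app -- ./start-server
```

ECS itself reads SSM parameters natively via the `secrets` block of a container definition; `chamber` is simply a convenient CLI for maintaining the values.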
### Option B: Use ASM
Natively supported by ECS. ASM supports lambda hooks that can rotate secrets. Not as easy to manage on the command line
as SSM with `chamber`.
### Option C: Use S3
Store secrets in a KMS-encrypted file in a private S3 bucket, and fetch the file as part of the container entrypoint script.
This is only recommended when the number of secrets is so large that it exceeds the maximum document size of a container definition.
We have run into this when migrating highly parameterized applications.
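A minimal entrypoint sketch for this pattern (the bucket and file names are hypothetical):

```sh
#!/bin/sh
# Fetch the KMS-encrypted secrets file from the private bucket
# (S3 decrypts SSE-KMS objects transparently, given IAM access to the key)
aws s3 cp s3://acme-example-app-secrets/app.env /tmp/app.env

# Export every variable defined in the file, then hand off to the app
set -a
. /tmp/app.env
set +a
exec "$@"
```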
## Considered Options for EKS
### Option 1: Use SSM + External Secrets Operator (Recommended for EKS)
:::tip
We recommend this option because SSM is a centralized source of truth and well understood.
:::
[https://external-secrets.io/](https://external-secrets.io/)
#### Pros
- Applications automatically updated when SSM values change
- Easier to rotate secrets without CI/CD application deployments (a new `ReplicaSet` is created)
#### Cons
- If many secrets change at around the same time, it can be disruptive to the application, since each change causes
Kubernetes to create a new `ReplicaSet` as part of the Kubernetes `Deployment`
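For reference, a minimal `ExternalSecret` manifest wiring an SSM parameter into a Kubernetes `Secret` might look like this (the store and application names are hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example-app
spec:
  refreshInterval: 1h            # how often to re-read SSM
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-parameter-store    # a store configured for SSM Parameter Store
  target:
    name: example-app            # the Kubernetes Secret to create and maintain
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: /example-app/DATABASE_URL   # SSM parameter path
```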
### Option 2: Use Sops Secrets Operator + KMS
[https://github.com/isindir/sops-secrets-operator](https://github.com/isindir/sops-secrets-operator)
#### Pros
- Easily roll out secrets alongside application deployments
- Secrets are protected by KMS using IAM
#### Cons
- Secrets rotation requires application deployment
- The Mozilla SOPS project (despite being used by thousands of projects) lacks maintainers.
[https://github.com/mozilla/sops/discussions/927](https://github.com/mozilla/sops/discussions/927)
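For context, SOPS encrypts values in place so the file can be committed to source control; a hedged example (the KMS key ARN is hypothetical):

```console
# Encrypt the file with a KMS key; the result is safe to commit
sops --encrypt --kms arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab secrets.yaml > secrets.enc.yaml

# Decrypt locally (requires IAM access to the KMS key)
sops --decrypt secrets.enc.yaml
```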
## References
- [Decide on Secrets Management Strategy for Terraform](/layers/project/design-decisions/decide-on-secrets-management-strategy-for-terraform)
---
## Decide on Strategy for Preview Environments (e.g. Review Apps)
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Considerations
- Use as few AWS-proprietary services as possible (e.g. DynamoDB, SNS, SQS), because provisioning Terraform for ephemeral environments is slow and complicated, and therefore not recommended
- Instead, prefer tools that have managed-service equivalents in AWS and can run as containers (DocumentDB ~ MongoDB container, MSK ~ Kafka container, MySQL, Postgres)
- Usage of API Gateway will require running terraform and will complicate preview environments
- Preview environments are poor substitutions for remote development environments due to the slow feedback loop (e.g. commit, push, build, deploy)
- Preview environments are not a replacement for staging and QA environments, which should have configurations that more likely resemble production
- Multiple microservices are much harder to bring up if any sort of version pinning is required
### Use helmfile with raw chart
:::caution
We do not recommend this approach.
:::
#### Pros
- _Very rapid to prototype;_ no need to commit to any chart or convention
- Repos/apps share nothing, so they won’t be affected by breaking changes in a shared chart (mitigated by versioning charts)
#### Cons
- This is the least DRY approach and the manifests for services are not reusable across microservices
- Lots of manifests everywhere leads to inconsistency between services; adding some new convention/annotation requires updating every repo
- No standardization of how apps are described for kubernetes (e.g. what we get with a custom helm chart)
### Use helmfile with custom chart
Slight improvement over using `helmfile` with the `raw` chart.
#### Pros
- Reusable chart between services; very DRY
- Chart can be tested and standardized to reduce the variations of applications deployed (e.g. NodeJS chart, Rails chart)
- More conventional approach used by community at large (not cloudposse, but everyone using helm and helmfile)
#### Cons
- Using `helmfile` is one more tool; newcomers often don’t appreciate the value it brings
- CIOps is slowly falling out of favor relative to GitOps
### Use GitHub Actions directly with helm
#### Pros
- Very easy to understand what is happening with “CIOps”
- Very easy to implement
#### Cons
- No record of the deployed state for preview environments in source control
- Requires granting GitHub Action runners direct Kubernetes administrative access in order to deploy helm charts
- GitHub Action runners will need direct access to read any secrets needed to deploy the helm releases.
(mitigation is to use something like `sops-operator` or `external-secrets` operator)
### Use GitHub actions with ArgoCD and helm
For some additional context on ArgoCD [Decide on ArgoCD Architecture](/layers/software-delivery/design-decisions/decide-on-argocd-architecture)
## Requirements
- How quickly should environments come online? e.g. 5 minutes or less
- How many backing services are required to validate your one service?
- Do you need to pin dependent services at specific versions for previews? - if so, rearchitect how we do this
- How should we name DNS for previews?
- The biggest limitation is ACM and wildcard certs, so we’ll need a flat namespace
- `https://pr-123-new-service.dev.acme.org`. (using the `*.dev.acme.org` ACM certificate)
- URLs will be posted to the GitHub Status API so that environments are directly reachable from PRs
- Do we need to secure these environments? We recommend just locking down the ALB to internal traffic and using VPN
- How will we handle databases for previews? How will we seed data. [Decide on Database Seeding Strategy for Ephemeral Preview Environments](/layers/software-delivery/design-decisions/decide-on-database-seeding-strategy-for-ephemeral-preview-enviro)
- What is the effort to implement ArgoCD?
- Very little, we have all the terraform code to deploy ArgoCD
- We need to change the GitHub Actions to:
- build the Docker image (as we already do)
- render the helm chart to the raw k8s manifests
- commit the manifests to deployment repo when ready to deploy
- What is the simplest path we could take to implement and that developers will have the easiest time understanding?
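The render-and-commit flow described above can be sketched as follows (the chart path, values file, and repo layout are hypothetical):

```console
# Render the helm chart to raw Kubernetes manifests
helm template my-service ./charts/my-service -f preview-values.yaml > manifests/my-service.yaml

# Commit the rendered manifests to the deployment repo for ArgoCD to sync
git add manifests/my-service.yaml
git commit -m "deploy: my-service pr-123"
git push
```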
## Patterns of Microservices
This is more of a side note: not all microservices organizations are the same. If you’re using microservices, please self-identify with some of these patterns, as it will be helpful in understanding the drivers behind them and how they are implemented.
1. As a result of acquisitions
2. As a result of architecture design from the beginning (this is usually premature)
3. As a result of needing to use different languages for specific purposes
4. As a result of seeing performance needs to scale
   - If this is the case, we _technically_ don’t need to do microservices; we just need to be able to control the entry point & routing (e.g. a “Microservices Monolith”)
   - For this to work, the monolith needs to be able to communicate with itself as a service (e.g. gRPC) for local development. We see this with Go microservices; this can be done when it’s necessary as a pattern to scale endpoints
   - Preview environments can still use gRPC, but over localhost
5. As a result of wanting to experiment with multiple versions of the same service (e.g. using a service mesh)
## Related Design Decisions
[Decide on Strategy for Preview Environments (e.g. Review Apps)](/layers/software-delivery/design-decisions/decide-on-strategy-for-preview-environments-e-g-review-apps)
:::caution
Internal preview environments cannot accept webhook callbacks from external services like Twilio
:::
---
## Decide on Terraform Configuration Pattern for Application Repositories
import Intro from "@site/src/components/Intro";
import KeyPoints from "@site/src/components/KeyPoints";
## Context and Problem Statement
The infrastructure monorepo that exists within an organization is responsible for configuring the core infrastructure of the organization: AWS accounts, VPCs, Kubernetes clusters, Route53, etc. However, AWS resources and/or other dependencies specific to a single application (such as a single S3 bucket) are not in the scope of the infrastructure monorepo and should be managed externally, so that the developers responsible for the application in question can manage its dependencies via infrastructure as code.
## Considered Options
### In-repo Terraform
A Terraform configuration can be placed within the application repository and automated using `atmos`. This Terraform configuration requires the `infra-state.mixin.tf` mixin in order to be able to read the state of components in the infrastructure monorepo, for example from the `eks` component.
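For illustration only, reading the `eks` component's outputs via Cloud Posse's remote-state module might look like the sketch below; the module source, version, and output names are assumptions, so defer to the `infra-state.mixin.tf` mixin for the exact pattern:

```hcl
# Sketch: look up the infrastructure monorepo's `eks` component state
module "eks" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0" # assumption; pin to whatever the mixin uses

  component = "eks"

  context = module.this.context
}

# Outputs are then available as e.g. module.eks.outputs.eks_cluster_id
```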
#### Implementation
This implementation is described in detail in the following guide: [How to Manage Terraform Dependencies in Micro-service Repositories](/learn/maintenance/tutorials/how-to-manage-terraform-dependencies-in-micro-service-repositori) .
#### Scope
The Terraform configuration within the application repository should have resources pertaining specifically to that application, specifically for the regional stack configured by `atmos` (see previous section). This includes:
- An IAM Role for a ServiceAccount for that application (see: [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html))
- An S3 bucket for the application
- An SNS topic for the application
- etc.
These Terraform resources are not limited to the AWS provider. Other valid types of resources include:
- LaunchDarkly Feature Flags
- Datadog Monitors
## References
- [How to Manage Terraform Dependencies in Micro-service Repositories](/learn/maintenance/tutorials/how-to-manage-terraform-dependencies-in-micro-service-repositori)
---
## Design Decisions (7)
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions for how you'll implement CI/CD for your
applications.
---
## ECS with ecspresso
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CollapsibleText from '@site/src/components/CollapsibleText';
We use the [`ecspresso`](https://github.com/kayac/ecspresso) deployment tool for Amazon ECS to manage ECS services using a code-driven approach, alongside reusable GitHub Action workflows. This setup allows tasks to be defined with Terraform within the infrastructure repository, and task definitions to reside alongside the application code. Ecspresso provides extensive configuration options via YAML, JSON, and Jsonnet, and includes plugins for enhanced functionality such as Terraform state lookups.
```mermaid
---
title: Ecspresso Deployment Lifecycle
---
sequenceDiagram
actor dev as Developer
participant commit as Application
box GitHub Action Workflow
participant ci as CI
participant deploy as CD
end
box ECS
participant service as ECS Service
participant deployment as ECS Deployment
participant cluster as ECS Cluster
end
activate cluster
dev ->>+ commit: Create Commit
commit ->>+ ci: Trigger
ci ->>+ deploy: Trigger
deactivate ci
deactivate commit
deploy ->>+ service: Render Task Definition
loop
deploy --> commit: Wait Service Status
end
service ->>+ deployment: Update Task Definition
deactivate service
loop
deployment ->> cluster: Remove old tasks
deployment ->> cluster: Add new tasks
end
deactivate deployment
```
### GitHub Action Workflows
The basic deployment flow is for feature branches. You can use the following sample workflow to add pull request deploys to your application repository:
:::tip Latest Examples
Check out our [example app-on-ecs](https://github.com/cloudposse-examples/app-on-ecs) for the latest example of how to use `ecspresso` with GitHub Actions.
:::
```yaml title=".github/workflows/feature-branch.yaml"
name: 1 - Feature Branch
on:
pull_request:
branches: [ main ]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
permissions:
pull-requests: write
deployments: write
id-token: write
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: false
jobs:
monorepo:
uses: cloudposse/github-actions-workflows/.github/workflows/controller-monorepo.yml@main
with:
file: ./deploy/config.yaml
ci:
uses: cloudposse/github-actions-workflows/.github/workflows/ci-dockerized-app-build.yml@main
needs: [ monorepo ]
with:
organization: "cloudposse"
repository: ${{ github.event.repository.name }}
secrets:
ecr-region: ${{ secrets.ECR_REGION }}
ecr-iam-role: ${{ secrets.ECR_IAM_ROLE }}
registry: ${{ secrets.ECR_REGISTRY }}
secret-outputs-passphrase: ${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}
cd:
uses: cloudposse/github-actions-workflows/.github/workflows/cd-preview-ecspresso.yml@main
needs: [ ci, monorepo ]
if: ${{ always() && needs.monorepo.outputs.apps != '[]' }}
strategy:
matrix:
app: ${{ fromJson(needs.monorepo.outputs.apps) }}
with:
image: ${{ needs.ci.outputs.image }}
tag: ${{ needs.ci.outputs.tag }}
repository: ${{ github.event.repository.name }}
app: ${{ matrix.app }}
open: ${{ github.event.pull_request.state == 'open' }}
labels: ${{ toJSON(github.event.pull_request.labels.*.name) }}
ref: ${{ github.event.pull_request.head.ref }}
exclusive: true
enable-migration: ${{ contains(fromJSON(needs.monorepo.outputs.migrations), matrix.app) }}
settings: ${{ needs.monorepo.outputs.settings }}
env-label: |
qa1: deploy/qa1
secrets:
secret-outputs-passphrase: ${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}
```
```yaml title=".github/workflows/main-branch.yaml"
name: 2 - Main Branch
on:
push:
branches: [ main ]
permissions:
contents: write
id-token: write
pull-requests: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: false
jobs:
monorepo:
uses: cloudposse/github-actions-workflows/.github/workflows/controller-monorepo.yml@main
with:
file: ./deploy/config.yaml
ci:
uses: cloudposse/github-actions-workflows/.github/workflows/ci-dockerized-app-build.yml@main
needs: [ monorepo ]
with:
organization: "cloudposse"
repository: ${{ github.event.repository.name }}
secrets:
ecr-region: ${{ secrets.ECR_REGION }}
ecr-iam-role: ${{ secrets.ECR_IAM_ROLE }}
registry: ${{ secrets.ECR_REGISTRY }}
secret-outputs-passphrase: ${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}
cd:
uses: cloudposse/github-actions-workflows/.github/workflows/cd-ecspresso.yml@main
needs: [ ci, monorepo ]
strategy:
matrix:
app: ${{ fromJson(needs.monorepo.outputs.apps) }}
with:
image: ${{ needs.ci.outputs.image }}
tag: ${{ needs.ci.outputs.tag }}
repository: ${{ github.event.repository.name }}
app: ${{ matrix.app }}
environment: dev
enable-migration: ${{ contains(fromJSON(needs.monorepo.outputs.migrations), matrix.app) }}
settings: ${{ needs.monorepo.outputs.settings }}
secrets:
secret-outputs-passphrase: ${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}
release:
uses: cloudposse/github-actions-workflows/.github/workflows/controller-draft-release.yml@main
needs: [ cd ]
```
```yaml title=".github/workflows/release.yaml"
name: 3 - Release
on:
release:
types: [published]
permissions:
id-token: write
contents: write
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: false
jobs:
monorepo:
uses: cloudposse/github-actions-workflows/.github/workflows/controller-monorepo.yml@main
with:
file: ./deploy/config.yaml
ci:
uses: cloudposse/github-actions-workflows/.github/workflows/ci-dockerized-app-promote.yml@main
needs: [ monorepo ]
with:
organization: "cloudposse"
repository: ${{ github.event.repository.name }}
version: ${{ github.event.release.tag_name }}
secrets:
ecr-region: ${{ secrets.ECR_REGION }}
ecr-iam-role: ${{ secrets.ECR_IAM_ROLE }}
registry: ${{ secrets.ECR_REGISTRY }}
secret-outputs-passphrase: ${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}
cd:
uses: cloudposse/github-actions-workflows/.github/workflows/cd-ecspresso.yml@main
needs: [ ci, monorepo ]
strategy:
matrix:
app: ${{ fromJson(needs.monorepo.outputs.apps) }}
with:
image: ${{ needs.ci.outputs.image }}
tag: ${{ needs.ci.outputs.tag }}
repository: ${{ github.event.repository.name }}
app: ${{ matrix.app }}
environment: "staging"
enable-migration: ${{ contains(fromJSON(needs.monorepo.outputs.migrations), matrix.app) }}
settings: ${{ needs.monorepo.outputs.settings }}
secrets:
secret-outputs-passphrase: ${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}
```
## References
- [Ecspresso](https://github.com/kayac/ecspresso) : Tool repo
- [example-app-on-ecs](https://github.com/cloudposse/example-app-on-ecs): Example app
- [github-action-deploy-ecspresso](https://github.com/cloudposse/github-action-deploy-ecspresso): Base action
- [`cd-ecspresso`](https://github.com/cloudposse/github-actions-workflows/blob/main/.github/workflows/cd-ecspresso.yml): Primary workflow
- [`cd-preview-ecspresso`](https://github.com/cloudposse/github-actions-workflows/blob/main/.github/workflows/cd-preview-ecspresso.yml): feature branch workflow
---
## ECS Partial Task Definitions
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
This document describes what partial task definitions are and how we can use them to set up ECS services using Terraform and GitHub Actions.
## The Problem
Managing ECS Services is challenging. Ideally, we want our services to be managed by Terraform so everything lives
in code. However, we also want to update the task definition via GitOps through the GitHub release lifecycle. This is
challenging because Terraform can create the task definition, but if it is updated by the application repository, the
task definition will be out of sync with the Terraform state.
Managing it entirely through Terraform means we cannot easily update the newly built image by the application repository
unless we directly commit to the infrastructure repository, which is not ideal.
Managing it entirely through the application repository means we cannot codify the infrastructure and have to hardcode
ARNs, secrets, and other infrastructure-specific configurations.
## Introduction
The idea behind ECS partial task definitions is to break the task definition into smaller parts, making the task
definition easier to manage and update.
We do this by setting up Terraform to manage a portion of the task definition, and the application repository to manage
another portion.
The Terraform (infrastructure) portion is created first. It creates an ECS Service in ECS, then uploads the task
definition JSON to S3 as `task-template.json`. The application repository keeps a `task-definition.json` under version
control. During the development lifecycle, the application repository downloads the task template from S3, merges the
two task definitions, then updates the ECS Service with the result. Finally, GitHub Actions updates the S3 bucket with
the deployed task definition under `task-definition.json`. If Terraform is planned again, it will use the new task
definition as the base for the next deployment, thus not resetting the image or application configuration.
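Conceptually, the merge overlays the application-managed portion onto the Terraform-managed template, matching container definitions by name so infrastructure fields survive. The following Python sketch is purely illustrative (it is not the actual ecspresso implementation, and all field values are hypothetical):

```python
import json

def merge_task_definitions(template: dict, app: dict) -> dict:
    """Merge the app-managed portion onto the Terraform-managed template.

    Top-level fields from the app portion win; container definitions are
    merged by container name so infrastructure fields (roles, secrets)
    survive while the app controls the image and environment variables.
    """
    merged = {**template, **app}
    containers = {c["name"]: dict(c) for c in template.get("containerDefinitions", [])}
    for c in app.get("containerDefinitions", []):
        containers[c["name"]] = {**containers.get(c["name"], {}), **c}
    merged["containerDefinitions"] = list(containers.values())
    return merged

# Terraform-managed template (infrastructure concerns; ARN is hypothetical)
template = {
    "family": "example-app",
    "executionRoleArn": "arn:aws:iam::111111111111:role/example-app",
    "containerDefinitions": [
        {"name": "app", "image": "placeholder",
         "secrets": [{"name": "DB_URL", "valueFrom": "/example-app/DB_URL"}]},
    ],
}

# Application-managed portion (image tag, environment variables)
app = {
    "containerDefinitions": [
        {"name": "app", "image": "example-app:v1.2.3",
         "environment": [{"name": "LOG_LEVEL", "value": "debug"}]},
    ],
}

merged = merge_task_definitions(template, app)
print(json.dumps(merged, indent=2))
```

Here the application repository controls `image` and `environment`, while `executionRoleArn` and the `secrets` block from Terraform carry through unchanged.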
### Pros
The **benefit** to using this approach is that we can manage the task definition portion in Terraform with the
infrastructure, meaning secrets, volumes, and other ARNs can be managed in Terraform. If a filesystem ID changes, we can
re-apply Terraform to update the task definition with the new ID. The application repository can manage the
container definitions, environment variables, and other application-specific configurations. This allows developers who
are closer to the application to quickly update the environment variables or other configuration.
### Cons
The drawback to this approach is that it is more complex than managing the task definition entirely in Terraform or the
application repository. It requires more setup and more moving parts. It can be confusing for a developer who is not
familiar with the setup to understand how the task definition is being managed and deployed.
This also means that when something goes wrong, it becomes harder to troubleshoot as there are more moving parts.
### Getting Set Up
#### Pre-requisites
- Application Repository - [Cloud Posse Example ECS Application](https://github.com/cloudposse-examples/app-on-ecs)
- Infrastructure Repository
- ECS Cluster - [Cloud Posse Docs](https://docs.cloudposse.com/components/library/aws/ecs/) -
[Component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/ecs).
- `ecs-service` - [Cloud Posse Docs](https://docs.cloudposse.com/components/library/aws/ecs-service/) -
[Component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/ecs-service).
- **Must** use the Cloud Posse Component.
- [`v1.416.0`](https://github.com/cloudposse/terraform-aws-components/releases/tag/1.416.0) or later.
- S3 Bucket - [Cloud Posse Docs](https://docs.cloudposse.com/components/library/aws/s3-bucket/) -
[Component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/s3-bucket).
#### Steps
1. Set up the S3 Bucket that will store the task definition.
This bucket should be in the same account as the ECS Cluster.
S3 Bucket Default Definition
```yaml
components:
terraform:
s3-bucket/defaults:
metadata:
type: abstract
vars:
enabled: true
account_map_tenant_name: core
# Suggested configuration for all buckets
user_enabled: false
acl: "private"
grants: null
force_destroy: false
versioning_enabled: false
allow_encrypted_uploads_only: true
block_public_acls: true
block_public_policy: true
ignore_public_acls: true
restrict_public_buckets: true
allow_ssl_requests_only: true
lifecycle_configuration_rules:
- id: default
enabled: true
abort_incomplete_multipart_upload_days: 90
filter_and:
prefix: ""
tags: {}
# Move to Glacier after 2 years
transition:
- storage_class: GLACIER
days: 730
# Never expire
expiration: {}
# Versioning isn't enabled, but these default values are still required
noncurrent_version_transition:
- storage_class: GLACIER
days: 90
noncurrent_version_expiration: {}
```
```yaml
import:
- catalog/s3-bucket/defaults
components:
terraform:
s3-bucket/ecs-tasks-mirror: #NOTE this is the component instance name.
metadata:
component: s3-bucket
inherits:
- s3-bucket/defaults
vars:
enabled: true
name: ecs-tasks-mirror
```
2. Create an ECS Service in Terraform
Set up the ECS Service in Terraform using the
[`ecs-service` component](https://github.com/cloudposse/terraform-aws-components/tree/main/modules/ecs-service). This
will create the ECS Service and upload the task definition to the S3 bucket.
To enable Partial Task Definitions, set the variable `s3_mirror_name` to be the component instance name of the
bucket to mirror to. For example `s3-bucket/ecs-tasks-mirror`
```yaml
components:
terraform:
ecs-services/defaults:
metadata:
component: ecs-service
type: abstract
vars:
enabled: true
ecs_cluster_name: "ecs/cluster"
s3_mirror_name: s3-bucket/ecs-tasks-mirror
```
3. Set up an Application repository with GitHub workflows.
An example application repository can be found [here](https://github.com/cloudposse-examples/app-on-ecs).
Two things need to be pulled from this repository:
- The `task-definition.json` file under `deploy/task-definition.json`
- The GitHub Workflows.
An important note about the GitHub Workflows: in the example repository they all live under `.github/workflows`. This
is done so development of the workflows can be fast; however, we recommend moving the shared workflows to a separate
repository and calling them from the application repository. The application repository should only contain the
workflows `main-branch.yaml`, `release.yaml`, and `feature-branch.yml`.
To enable Partial Task Definitions in the workflows, the call to
[`cloudposse/github-action-run-ecspresso` (link)](https://github.com/cloudposse-examples/app-on-ecs/blob/main/.github/workflows/workflow-cd-ecspresso.yml#L133-L147)
should have the input `mirror_to_s3_bucket` set to the S3 bucket name, and the input `use_partial_taskdefinition`
set to `'true'`.
Example GitHub Action Step
```yaml
- name: Deploy
uses: cloudposse/github-action-deploy-ecspresso@0.6.0
continue-on-error: true
if: ${{ steps.db_migrate.outcome != 'failure' }}
id: deploy
with:
image: ${{ steps.image.outputs.out }}
image-tag: ${{ inputs.tag }}
region: ${{ steps.environment.outputs.region }}
operation: deploy
debug: false
cluster: ${{ steps.environment.outputs.cluster }}
application: ${{ steps.environment.outputs.name }}
taskdef-path: ${{ inputs.path }}
mirror_to_s3_bucket: ${{ steps.environment.outputs.s3-bucket }}
use_partial_taskdefinition: "true"
timeout: 10m
```
## Operation
Changes through Terraform will not immediately be reflected in the ECS Service. This is because the task template has
been updated, but whatever was in the `task-definition.json` file in the S3 bucket will be used for deployment.
To update the ECS Service after updating its Terraform configuration, you must deploy through GitHub Actions. This will
download the new template and create an updated `task-definition.json` to store in S3.
---
## Setting up ecspresso
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import Note from '@site/src/components/Note'
import Admonition from '@theme/Admonition'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
This setup guide will help you get started with [ecspresso](https://github.com/kayac/ecspresso). It features an example app, which demonstrates how your GitHub Actions work with your infrastructure repository.
| Steps | Actions |
| ---------------------------------------------------- | --------------------------------------------------------------------------------- |
| 1. Create a repository from the Example App template | [cloudposse-examples/app-on-ecs](https://github.com/cloudposse-examples/app-on-ecs) |
| 2. Update and reapply `ecr` | `atmos terraform apply ecr -s core-use1-artifacts` |
| 3. Validate the environment configuration | Click Ops |
| 4. Create a GitHub PAT | Click Ops |
| 5. Set all Example App repository secrets | Click Ops |
| 6. Deploy the shared ECS Task Definition S3 Bucket | `atmos terraform apply s3-bucket/ecs-tasks-mirror -s < YOUR STACK >` |
| 7. Deploy the example ECS services | `atmos workflow deploy/app-on-ecs -f quickstart/app/app-on-ecs` |
We recommend moving all workflows with the `ecspresso` and `workflow` **prefixes** to a shared workflow repository that can be used by the rest of your organization.
We do not recommend keeping all shared workflows in the same repository as in this example, because it defeats their reusability. We've included all workflows in the example app repository to make it easier to follow along and document.
### Create the Example App repository
This step requires access to the GitHub Organization. Customers will need to create this GitHub repository in Jumpstart engagements.
Cloud Posse deploys an example application with this new repository. This is separate from the infrastructure repository
and can be used as reference for future applications. Cloud Posse also maintains a public example of this app repository
with [cloudposse/example-app-on-ecs](https://github.com/cloudposse/example-app-on-ecs/). This is a GitHub repository
template, meaning it can be used to create new repositories with predefined content.
1. Create a new repository in your organization from the
[cloudposse/example-app-on-ecs](https://github.com/cloudposse/example-app-on-ecs/) template.
2. Choose any name for the repository. For example, we might call this repo
`acme/example-app-on-ecs`.
3. Grant Cloud Posse `admin` access to this repository.
4. If necessary, install the self-hosted runner GitHub App to this new repository.
### Create Image and GitHub OIDC Access Roles for ECR
The Example App will build and push an image to the ECR registry. Create that ECR repository with the `ecr` component
if not already created. The Example App GitHub Workflows will also need access to that registry, so we deploy GitHub
OIDC roles with the same `ecr` component.
Add the following snippet in addition to any other repositories or images already included in these lists:
```yaml
components:
terraform:
ecr:
vars:
github_actions_allowed_repos:
- acme/example-app-on-ecs
# ECR must be all lowercase
images:
- acme/example-app-on-ecs
```
Reapply the `ecr` component with the following:
```console
atmos terraform apply ecr -s core-use1-artifacts
```
### Configure the Environment
We use the [cloudposse/github-action-interface-environment](https://github.com/cloudposse/github-action-interface-environment)
GitHub Composite Action to read environment configuration from a private location. By default, we use the infrastructure
repository as that private location and save the configuration to `.github/environments/ecspresso.yaml`.
This action stores metadata about the environments we want to deploy to. It is the binding glue between our GitHub
Actions, GitHub environments, and our infrastructure. When this action is called, an `environment` input is passed in.
We then look up information about that environment in the map below; that information is stored as an output to be used
by the rest of the GitHub Actions.
For more on GitHub Composite Actions, please see the [official GitHub documentation](https://docs.github.com/en/actions/creating-actions/creating-a-composite-action).
Create or confirm the configuration in `.github/environments/ecspresso.yaml` in the `acme/infra-acme` repository now.
If the file doesn't exist, here's the template:
The `role` defined in this configuration may not exist yet. This role will be created by the given `ecs-service`
component with the GitHub OIDC mixin. Once completing the
[Deploy the Example App ECS Service](#deploy-the-example-app-ecs-service) step, please verify this role is correct.
Copy, paste, and edit this in `./.github/environments/ecspresso.yaml`:
```yaml
name: 'Environments'
description: 'Get information about cluster'
inputs:
  environment:
    description: "Environment name"
    required: true
  namespace:
    description: "Namespace name"
    required: true
  repository:
    description: "Repository name"
    required: false
  application:
    description: "Application name"
    required: false
  attributes:
    description: "Comma separated attributes"
    required: false
outputs:
  name:
    description: "Environment name"
    value: ${{ steps.result.outputs.name }}
  region:
    description: "AWS Region"
    value: ${{ steps.result.outputs.region }}
  role:
    description: "IAM Role"
    value: ${{ steps.result.outputs.role }}
  cluster:
    description: "Cluster"
    value: ${{ steps.result.outputs.cluster }}
  namespace:
    description: "Namespace"
    value: ${{ steps.result.outputs.namespace }}
  ssm-path:
    description: "SSM path"
    value: ${{ steps.result.outputs.ssm-path }}
  s3-bucket:
    description: "S3 Bucket name"
    value: ${{ steps.result.outputs.s3-bucket }}
  account-id:
    description: "AWS account id"
    value: ${{ steps.result.outputs.aws-account-id }}
  stage:
    description: "Stage name"
    value: ${{ steps.result.outputs.stage }}
runs:
  using: "composite"
  steps:
    - uses: cloudposse/github-action-yaml-config-query@0.1.0
      id: suffix
      with:
        query: .${{ inputs.application == '' }}
        config: |
          true:
            suffix: ${{ inputs.repository }}
          false:
            suffix: ${{ inputs.repository }}-${{ inputs.application }}
    - uses: cloudposse/github-action-yaml-config-query@0.1.0
      id: result
      with:
        query: .${{ inputs.environment }}
        config: |
          qa1:
            cluster: acme-plat-${{ steps.region.outputs.result }}-dev-ecs-platform
            name: acme-plat-${{ steps.region.outputs.result }}-dev-${{ steps.name.outputs.name }}-qa1
            role: arn:aws:iam::101010101010:role/acme-plat-${{ steps.region.outputs.result }}-dev-${{ steps.name.outputs.name }}-qa1
            ssm-path: /ecs-service/${{ steps.name.outputs.name }}/url/0
            region: us-east-1
          qa2:
            cluster: acme-plat-${{ steps.region.outputs.result }}-dev-ecs-platform
            name: acme-plat-${{ steps.region.outputs.result }}-dev-${{ steps.name.outputs.name }}-qa2
            role: arn:aws:iam::101010101010:role/acme-plat-${{ steps.region.outputs.result }}-dev-${{ steps.name.outputs.name }}-qa2
            ssm-path: /ecs-service/${{ steps.name.outputs.name }}/url/0
            region: us-east-1
          dev:
            cluster: acme-plat-use1-dev-ecs-platform
            name: acme-plat-use1-dev-${{ steps.suffix.outputs.suffix }}
            role: arn:aws:iam::101010101010:role/acme-plat-use1-dev-${{ steps.suffix.outputs.suffix }}
            ssm-path: /ecs-service/${{ steps.suffix.outputs.suffix }}/url/0
            region: us-east-1
            s3-bucket: acme-plat-use1-dev-ecs-tasks-mirror
            aws-account-id: 101010101010
            stage: dev
          prod:
            cluster: acme-plat-use1-prod-ecs-platform
            name: acme-plat-use1-prod-${{ steps.suffix.outputs.suffix }}
            role: arn:aws:iam::202020202020:role/acme-plat-use1-prod-${{ steps.suffix.outputs.suffix }}
            ssm-path: /ecs-service/${{ steps.suffix.outputs.suffix }}/url/0
            region: us-east-1
            s3-bucket: acme-plat-use1-prod-ecs-tasks-mirror
            aws-account-id: 202020202020
            stage: prod
          sandbox:
            cluster: acme-plat-use1-sandbox-ecs-platform
            name: acme-plat-use1-sandbox-${{ steps.suffix.outputs.suffix }}
            role: arn:aws:iam::303030303030:role/acme-plat-use1-sandbox-${{ steps.suffix.outputs.suffix }}
            ssm-path: /ecs-service/${{ steps.suffix.outputs.suffix }}/url/0
            region: us-east-1
            s3-bucket: acme-plat-use1-sandbox-ecs-tasks-mirror
            aws-account-id: 303030303030
            stage: sandbox
          staging:
            cluster: acme-plat-use1-staging-ecs-platform
            name: acme-plat-use1-staging-${{ steps.suffix.outputs.suffix }}
            role: arn:aws:iam::404040404040:role/acme-plat-use1-staging-${{ steps.suffix.outputs.suffix }}
            ssm-path: /ecs-service/${{ steps.suffix.outputs.suffix }}/url/0
            region: us-east-1
            s3-bucket: acme-plat-use1-staging-ecs-tasks-mirror
            aws-account-id: 404040404040
            stage: staging
```
Then, in the Example App, verify that the target environment is correct. This configuration should be in the `.github/configs/environment.yaml` file in the Example App repository.
```yaml
## file: .github/configs/environment.yaml
# assumes the same organization
environment-info-repo: infrastructure
implementation_path: .github/environments
implementation_file: ecspresso.yaml
implementation_ref: main
```
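To see how these fields fit together: the composite action effectively resolves a file inside the infrastructure repository from them. A rough sketch (the actual fetch is performed by the action itself; values mirror the config above):

```shell
# Hypothetical: how the config fields above combine into the path of the
# environment definition inside the infrastructure repository.
environment_info_repo="infrastructure"
implementation_path=".github/environments"
implementation_file="ecspresso.yaml"
resolved="${environment_info_repo}/${implementation_path}/${implementation_file}"
echo "$resolved"  # infrastructure/.github/environments/ecspresso.yaml
```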
### Create a GitHub PAT
This step requires access to the GitHub Organization. In Jumpstart engagements, customers will need to create this PAT themselves.
In order for the Example App workflows to read the private environment configuration, we need to pass a token to the
Composite Action.
1. Create a fine-grained PAT. Please see
[Creating a fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-fine-grained-personal-access-token).
2. Name this PAT whatever you would like. We recommend calling it `PRIVATE_CONFIG_READ_ACCESS`.
3. Grant this PAT `read` permission on the `acme/infra-acme`
repository:
```diff
Repository
+ Contents: Read-only
+ Metadata: Read-only
```
4. Upload this PAT to 1Password, and Cloud Posse will add it as a GitHub repository secret. Or you can create an
organization secret now that you can reuse in future application repositories.
### Add the Example App Secrets
The GitHub Action workflows expect a few GitHub Secrets to exist to build images in AWS ECR. Add each of the following
secrets to the Example App repository:
`PRIVATE_CONFIG_READ_ACCESS`
This is the PAT we created above in Create a GitHub PAT.
`ECR_REGISTRY`
This is your ECR Registry, such as `111111111111.dkr.ecr.us-east-1.amazonaws.com`.
`ECR_REGION`
This is the AWS region where the `ecr` component is deployed. For example, `us-east-1`
`ECR_IAM_ROLE`
This is the GitHub OIDC role created by the `ecr` component for accessing the registry. For this organization, this would be `arn:aws:iam::111111111111:role/acme-core-use1-artifacts-ecr-gha`.
Verify this value by checking the output of the `ecr` component.
`GHA_SECRET_OUTPUT_PASSPHRASE`
This is a random string used to encrypt and decrypt sensitive image names and tags. This can be anything.
For example, generate this with the following:
```console
openssl rand -base64 24
```
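Since the passphrase is only used for symmetric encryption of workflow outputs, any sufficiently random string works. As an illustration only (the actual encryption mechanism is internal to the Cloud Posse actions), a symmetric encrypt/decrypt round trip with `openssl` looks like this:

```shell
# Generate a random passphrase, then demonstrate a symmetric round trip.
# (Illustrative sketch; the real mechanism is internal to the actions.)
PASSPHRASE=$(openssl rand -base64 24)

# Encrypt a sample image reference to base64 ciphertext
CIPHERTEXT=$(printf '%s' "registry/image:tag" \
  | openssl enc -aes-256-cbc -pbkdf2 -a -salt -pass "pass:${PASSPHRASE}")

# Decrypt it back with the same passphrase
PLAINTEXT=$(echo "$CIPHERTEXT" \
  | openssl enc -d -aes-256-cbc -pbkdf2 -a -pass "pass:${PASSPHRASE}")

echo "$PLAINTEXT"  # registry/image:tag
```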
### Configure the S3 Mirror Bucket, if not already configured
If you haven't already configured the S3 mirror bucket, deploy and configure the shared S3 bucket for ECS task definitions now. Follow the [ECS Partial Task Definitions guide](/layers/software-delivery/ecs-ecspresso/ecs-partial-task-definitions/#steps).
### Deploy the Example App ECS Service
Ensure you have stacks configured for the Example App in every stage of your platform.
This task definition uses the `latest` ECR image for the Example App, which is built by the CI steps of the release
pipelines. However, that step hasn't run yet!
You will first need to trigger the `main-branch` CI steps for the Example App, ignore the failure in the deploy step,
and then deploy these components.
Catalog entry for the Example App:
```yaml
import:
  - catalog/ecs-services/defaults
components:
  terraform:
    ecs-services/example-app-on-ecs:
      metadata:
        component: ecs-service
        inherits:
          - ecs-services/defaults
      vars:
        name: example-app-on-ecs
        ssm_enabled: true
        github_actions_iam_role_enabled: true
        github_actions_iam_role_attributes: [ "gha" ]
        github_actions_ecspresso_enabled: true
        github_actions_allowed_repos:
          - acme/example-app-on-ecs
        cluster_attributes: [platform]
        alb_configuration: "private"
        use_lb: true
        unauthenticated_paths:
          - "/"
          - "/dashboard"
        containers:
          service:
            name: app
            image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/example-app-on-ecs:latest
            log_configuration:
              logDriver: awslogs
              options: {}
            port_mappings:
              - containerPort: 8080
                hostPort: 8080
                protocol: tcp
        task:
          desired_count: 1
          task_memory: 512
          task_cpu: 256
          ignore_changes_desired_count: true
          ignore_changes_task_definition: true
```
Finally, apply the `ecs-services/example-app-on-ecs` component to deploy the Example App ECS service.
## Triggering Workflows
Now that all requirements are in place, validate all workflows.
1. ### Clone the Example App locally
```bash
git clone git@github.com:acme/example-app-on-ecs.git
```
2. ### Change the demo color in `main.go`
```go
func main() {
	c := os.Getenv("COLOR")
	if len(c) == 0 {
		c = "red" // change this color to something else, such as "blue"
	}
```
3. ### Create a Pull Request
Creating a PR will trigger the CI build and test workflows and the QA cleanup workflows. Ensure these all pass successfully.
4. ### Add the `deploy/qa1` label
Adding this label will kick off a new workflow to build and test once again and then deploy to the `qa1` environment.
Ensure this workflow passes successfully and then validate the "Deployment URL" returned.
Private endpoints require the VPN. If you're deploying a private endpoint, connect to the VPN in order to access the
deployment URL.
5. ### Merge the Pull Request
Merging the PR will trigger two different workflows. The Feature Branch workflow will be triggered to clean up and release the QA environment, and the Main Branch workflow will be triggered to deploy to `dev` and draft a release. Once both workflows pass, check that the QA environment is no longer active and then validate the dev URL. Finally, make sure a draft release was successfully created.
6. ### Publish a Release
Using the draft release created by the Main Branch workflow, click Edit and then Publish. This will kick off the Release workflow and deploy to `staging` and then to `prod`. Once this workflow finishes, validate both endpoints.
## Next Steps
Workflows with `ecspresso` and `workflow` **prefixes** should be moved to a shared workflow repository that can be used
by the rest of your organization.
## FAQ
### Adding Additional Applications
This is a one-time setup. You can add as many applications and as many environments to your platform as you want.
To add additional applications:
1. Ensure the `ecspresso` and `workflow` **prefixes** are moved to a shared workflow repository that can be used by the
rest of your organization.
2. Create a new repository from one of the example app templates.
3. Create your Example app Configuration file in the new repository.
4. Ensure your infrastructure is deployed.
## References
- [Ecspresso](https://github.com/kayac/ecspresso): Tool repository
- [example-app-on-ecs](https://github.com/cloudposse/example-app-on-ecs): Example app
- [github-action-deploy-ecspresso](https://github.com/cloudposse/github-action-deploy-ecspresso): Base action
- [`cd-ecspresso`](https://github.com/cloudposse/github-actions-workflows/blob/main/.github/workflows/cd-ecspresso.yml): Primary workflow
- [`cd-preview-ecspresso`](https://github.com/cloudposse/github-actions-workflows/blob/main/.github/workflows/cd-preview-ecspresso.yml): Feature branch workflow
---
## EKS with ArgoCD
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Note from '@site/src/components/Note';
import CollapsibleText from '@site/src/components/CollapsibleText';
Argo CD is an open-source declarative, GitOps continuous delivery tool for Kubernetes applications. It enables developers to manage and deploy applications on Kubernetes clusters using Git repositories as the source of truth for configuration and definitions. Argo CD follows the GitOps methodology, which means that the entire application configuration, including manifests, parameters, and even application state, is stored in a Git repository.
#### SAML Security Considerations
SAML is an industry standard, but security concerns have been raised by Dex, Mastodon, and others due to the inherent
difficulty of validating XML documents and the inconsistent handling of SAML by libraries in various languages. By
default, our ArgoCD implementation uses SAML authentication via Dex.
For more information, please see:
- [SAML is insecure by design](https://joonas.fi/2021/08/saml-is-insecure-by-design/)
- [SAML Raider - SAML2 Burp Extension](https://github.com/CompassSecurity/SAMLRaider)
- [Proposal: deprecate the SAML connector](https://github.com/dexidp/dex/discussions/1884)
- [Mattermost blog post of July 28, 2021](https://mattermost.com/blog/securing-xml-implementations-across-the-web/), where `@jupenur` states:
> If you maintain an application in Ruby, JavaScript, .NET, or Java and rely on SAML or other security-critical XML
> use-cases, the question burning in the back of your mind should be: "How do I patch this?" The good news is that you
> should already be patched if you use Ruby or JavaScript and update your dependencies regularly. And if you use .NET
> or Java, there's probably nothing to worry about.
### Overview
Argo CD simplifies the deployment and management of applications on Kubernetes by leveraging GitOps principles, providing a clear separation between the desired state of applications and the operational state of the cluster. This approach enhances collaboration, repeatability, and traceability in the deployment process.
```mermaid
---
title: ArgoCD Deployment Lifecycle
---
sequenceDiagram
actor dev as Developer
participant commit as Application
box GitHub Action Workflow
participant ci as CI
participant deploy as CD
end
participant repo as ArgoCD Repo
box EKS
participant argocd as ArgoCD
participant k8s as K8S API
end
activate argocd
loop Lookup changes
argocd --> repo: Pull Desired State
end
dev ->>+ commit: Create Commit
commit ->>+ ci: Trigger
ci ->>+ deploy: Trigger
deactivate ci
deactivate commit
deploy ->>+ deploy: Render Manifest
deploy ->>+ repo: Commit Desired State
deactivate deploy
loop
deploy --> commit: Wait Commit Status
end
opt New Desired State Found
argocd ->> repo: Pull Desired State
deactivate repo
activate argocd
argocd ->> k8s: Reconcile State
argocd ->>+ commit: Set Commit Status
end
deactivate argocd
loop
deploy ->> commit: Wait Commit Status
commit ->>- deploy: Commit Status Success
end
deactivate deploy
deactivate argocd
```
### Deployment
The application repository creates a deployment when a workflow is triggered, calling the relevant shared workflow.
:::tip Latest Examples
Check out our [example app-on-eks-with-argocd](https://github.com/cloudposse-examples/app-on-eks-with-argocd) for the latest example of how to use ArgoCD with GitHub Actions.
:::
```yaml title=".github/workflows/feature-branch.yaml"
name: Feature Branch
on:
  pull_request:
    branches: [ 'main' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
permissions:
  pull-requests: write
  deployments: write
  id-token: write
  contents: read
jobs:
  do:
    uses: cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd/.github/workflows/feature-branch.yml@main
    with:
      organization: "${{ github.event.repository.owner.login }}"
      repository: "${{ github.event.repository.name }}"
      open: ${{ github.event.pull_request.state == 'open' }}
      labels: ${{ toJSON(github.event.pull_request.labels.*.name) }}
      ref: ${{ github.event.pull_request.head.ref }}
    secrets:
      github-private-actions-pat: "${{ secrets.PUBLIC_REPO_ACCESS_TOKEN }}"
      registry: "${{ secrets.ECR_REGISTRY }}"
      secret-outputs-passphrase: "${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}"
      ecr-region: "${{ secrets.ECR_REGION }}"
      ecr-iam-role: "${{ secrets.ECR_IAM_ROLE }}"
```
```yaml title=".github/workflows/main-branch.yaml"
name: Main Branch
on:
  push:
    branches: [ main ]
permissions:
  contents: write
  id-token: write
jobs:
  do:
    uses: cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd/.github/workflows/main-branch.yml@main
    with:
      organization: "${{ github.event.repository.owner.login }}"
      repository: "${{ github.event.repository.name }}"
    secrets:
      github-private-actions-pat: "${{ secrets.PUBLIC_REPO_ACCESS_TOKEN }}"
      registry: "${{ secrets.ECR_REGISTRY }}"
      secret-outputs-passphrase: "${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}"
      ecr-region: "${{ secrets.ECR_REGION }}"
      ecr-iam-role: "${{ secrets.ECR_IAM_ROLE }}"
```
```yaml title=".github/workflows/release.yaml"
name: Release
on:
  release:
    types: [published]
permissions:
  id-token: write
  contents: write
jobs:
  perform:
    uses: cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd/.github/workflows/release.yml@main
    with:
      organization: "${{ github.event.repository.owner.login }}"
      repository: "${{ github.event.repository.name }}"
      version: ${{ github.event.release.tag_name }}
    secrets:
      github-private-actions-pat: "${{ secrets.PUBLIC_REPO_ACCESS_TOKEN }}"
      registry: "${{ secrets.ECR_REGISTRY }}"
      secret-outputs-passphrase: "${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}"
      ecr-region: "${{ secrets.ECR_REGION }}"
      ecr-iam-role: "${{ secrets.ECR_IAM_ROLE }}"
```
That workflow calls a Reusable Workflow, `cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd`, designed
specifically to deploy a Dockerized application from ECR to EKS using ArgoCD.
### Hotfix Workflows
Hotfix workflows are designed to push changes directly to a released version in production. Ideally we want any change to move through the standard release lifecycle, but in reality there are times when we need the ability to push a hotfix directly to production.
In order to enable hotfix workflows, create two additional workflows and modify the existing release workflow. See each of the following workflows:
Before running any hotfix workflows, we must first create a release branch for each release. Modify the existing release workflow to include the `hotfix` job below.
```yaml title=".github/workflows/release.yaml"
name: Release
on:
  release:
    types: [published]
permissions:
  id-token: write
  contents: write
jobs:
  perform:
    ...
  hotfix:
    name: release / branch
    uses: cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd/.github/workflows/hotfix-mixin.yml@main
    with:
      version: ${{ github.event.release.tag_name }}
```
This `hotfix-branch.yaml` workflow deploys a duplicate app to a new namespace in the _production_ cluster. We deploy to production so that a hotfix can be validated directly against production.
Trigger this workflow by creating a Pull Request into a release branch and adding the `deploy` label.
```yaml title=".github/workflows/hotfix-branch.yaml"
name: Hotfix Branch
on:
  pull_request:
    branches: [ 'release/**' ]
    types: [opened, synchronize, reopened, closed, labeled, unlabeled]
permissions:
  pull-requests: write
  deployments: write
  id-token: write
  contents: read
jobs:
  do:
    uses: cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd/.github/workflows/hotfix-branch.yml@main
    with:
      organization: "${{ github.event.repository.owner.login }}"
      repository: "${{ github.event.repository.name }}"
      open: ${{ github.event.pull_request.state == 'open' }}
      labels: ${{ toJSON(github.event.pull_request.labels.*.name) }}
      ref: ${{ github.event.pull_request.head.ref }}
      path: deploy
    secrets:
      github-private-actions-pat: "${{ secrets.PRIVATE_REPO_ACCESS_TOKEN }}"
      registry: "${{ secrets.ECR_REGISTRY }}"
      secret-outputs-passphrase: "${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}"
      ecr-region: "${{ secrets.ECR_REGION }}"
      ecr-iam-role: "${{ secrets.ECR_IAM_ROLE }}"
```
Once we've validated a Pull Request for a given hotfix, we can merge that change into the release branch. When changes are pushed to a release branch, the "Hotfix Release" workflow is triggered. _This workflow will deploy the given change directly to production_.
Before deploying, the workflow will create a minor version release and test it.
After the deployment, it will create a reintegration pull request to bring the hotfix back into the main branch and lower environments.
In order to enable the "Hotfix Release" workflow, add the following:
```yaml title=".github/workflows/hotfix-release.yaml"
name: Hotfix Release
on:
  push:
    branches: [ 'release/**' ]
permissions:
  contents: write
  id-token: write
jobs:
  do:
    uses: cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd/.github/workflows/hotfix-release.yml@main
    with:
      organization: "${{ github.event.repository.owner.login }}"
      repository: "${{ github.event.repository.name }}"
      path: deploy
    secrets:
      github-private-actions-pat: "${{ secrets.PRIVATE_REPO_ACCESS_TOKEN }}"
      registry: "${{ secrets.ECR_REGISTRY }}"
      secret-outputs-passphrase: "${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}"
      ecr-region: "${{ secrets.ECR_REGION }}"
      ecr-iam-role: "${{ secrets.ECR_IAM_ROLE }}"
```
These workflows also call the same Reusable Workflow repository, `cloudposse/github-actions-workflows-docker-ecr-eks-helm-argocd`, as well as several of the same Reusable Workflows it calls in turn, for example `cloudposse/github-actions-workflows` and `cloudposse/actions-private`.
:::tip Verify environment configs carefully
Be sure the environment configuration mapping includes `hotfix`. This typically lives with your private configuration repository, for example `cloudposse/actions-private`, and is called by the `cloudposse/github-action-interface-environment` action.
For example, add the following:
```yaml
runs:
  using: "composite"
  steps:
    - name: Environment info
      uses: cloudposse/github-action-yaml-config-query@0.1.0
      id: result
      with:
        query: .${{ inputs.environment }}
        config: |
          ...
          hotfix:
            cluster: https://github.com/GH_ORG/argocd-deploy-prod/blob/main/plat/use2-prod/apps
            cluster-role: arn:aws:iam::PRODUCTION_ACCOUNT_ID:role/acme-plat-use2-prod-eks-cluster-gha
            namespace: ${{ inputs.namespace }}
            ssm-path: platform/acme-plat-use2-prod-eks-cluster
```
:::
### Implementation
- [`eks/argocd`](/components/library/aws/eks/argocd/): This component is responsible for provisioning [ArgoCD](https://argoproj.github.io/cd/).
- [`argocd-repo`](/components/library/aws/argocd-github-repo/): This component is responsible for creating and managing an ArgoCD desired state repository.
- [`sso-saml-provider`](/components/library/aws/sso-saml-provider/): This component reads sso credentials from SSM Parameter store and provides them as outputs
## References
- [ArgoCD Setup](/layers/software-delivery/eks-argocd/setup)
- [Decide on Pipeline Strategy](/layers/software-delivery/design-decisions/decide-on-pipeline-strategy)
---
## Setup Argo CD
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import Admonition from '@theme/Admonition'
import TaskList from '@site/src/components/TaskList'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
This setup guide will walk you through the process of setting up Argo CD in your environment.
## Requirements
In order to deploy Argo CD, EKS must be fully deployed and functional. In particular, the user deploying the cluster must
have a working VPN connection to the targeted account. See
[the EKS documentation](/layers/eks/deploy-clusters/) for details.
All deployment steps below assume that the environment has been successfully set up with the following steps.
### Authentication
1. Sign into AWS via [Atmos Auth](/layers/identity/how-to-log-into-aws/)
1. Connect to the VPN
1. Open Geodesic
## Setup Steps
## Vendor Argo CD components
First vendor all related components for the Argo CD layer:
## Create Argo CD GitHub Repositories
- Create the two required Argo CD GitHub repos:
- [acme/argocd-deploy-non-prod](https://github.com/acme/argocd-deploy-non-prod)
- [acme/argocd-deploy-prod](https://github.com/acme/argocd-deploy-prod)
## Prepare Authentication
Argo CD can be integrated with GitHub using either GitHub Apps (recommended) or Personal Access Tokens (PATs). GitHub Apps provide more granular permissions, better security, and improved audit capabilities.
Argo CD requires several different types of GitHub authentication for various components and workflows. While these could be combined, we follow the principle of least privilege by creating separate authentication credentials for each specific purpose. The following authentication methods are required:
1. #### Terraform `argocd-repo` Access
First we will need to apply the Argo CD desired state repositories configuration with Terraform. By default, we use local access to apply the component. This requires an engineer to locally authenticate with GitHub and apply this component locally. Since this component is rarely updated, this can be a reasonable trade-off.
2. #### Argo CD Instance
Next, we need a GitHub App for Terraform and the `eks/argocd` component. This app is used to register the webhook in GitHub for the Argo CD Application created with this given component.
After creating the GitHub App, add the app's private key to AWS SSM Parameter Store in each account with Argo CD, typically the `plat-dev`, `plat-staging`, and `plat-prod` accounts, and add the App ID to the stack catalog. Detailed instructions linked below.
3. #### Argo CD Desired State Repository Access (2)
We will need two more GitHub Apps for accessing the ArgoCD desired state repositories _from GitHub Actions_. GitHub Actions running for an application repository will build and update application manifests in the Argo CD desired state repositories, and therefore need write access to the respective non-prod or prod repository.
This GitHub App does not need to be added to the stack catalog or SSM since it will be used by GitHub Actions, not Terraform.
4. #### Argo CD GitHub Notification Access
The last GitHub App is used by the Argo CD notifications system to update the GitHub commit status on deployments. This is stored in SSM and pulled by the `eks/argocd` component. That component will pass the ID and private key to the Argo CD instance in the given EKS cluster. That Argo CD instance uses that app _only when synchronous mode is enabled_.
After creating the GitHub App, add the app's private key to AWS SSM Parameter Store in each account with Argo CD, typically the `plat-dev`, `plat-staging`, and `plat-prod` accounts, and add the App ID to the stack catalog. Detailed instructions linked below.
Follow the instructions in [Argo CD Integrations: How to set up Authorization for Argo CD with GitHub Apps](/layers/software-delivery/eks-argocd/tutorials/github-apps) to create and configure all GitHub Apps for Argo CD. Once completed, you should have 4 GitHub Apps:
- `Argo CD Instance`
- `Argo CD Deploy Non-Prod`
- `Argo CD Deploy Prod`
- `Argo CD Notifications`
## Deploy the Argo CD Desired State Repositories
Deploy the Argo CD configuration for the two Argo CD desired state GitHub repositories with the following workflow:
Once this finishes, review the two repos in your GitHub Organization. These should both be fully configured at this point.
- [acme/argocd-deploy-non-prod](https://github.com/acme/argocd-deploy-non-prod)
- [acme/argocd-deploy-prod](https://github.com/acme/argocd-deploy-prod)
## Create AWS Identity Center Applications
In order to authenticate with Argo CD, we recommend using an AWS IAM Identity Center SAML Application. These apps can use the existing Identity Center groups that we've already set up as part of the [Identity layer](/layers/identity/).
Please see [Argo CD Integrations: How to create an AWS Identity Center Application](/layers/software-delivery/eks-argocd/tutorials/identity-center-apps) and follow all steps.
## Deploy the Argo CD Instances to each EKS Cluster
Once the GitHub repositories are in place and the SAML applications have been created and configuration uploaded to SSM,
we're ready to deploy Argo CD to each cluster.
Deploy `eks/argocd` to each cluster with the following workflow:
## Validation
Once all deployment steps are completed, Argo CD should be accessible at the following URLs. Please note that you must be
able to authenticate with AWS Identity Center to access any given app.
- https://argocd.use1.dev.plat.acme-svc.com
- https://argocd.use1.staging.plat.acme-svc.com
- https://argocd.use1.prod.plat.acme-svc.com
## Next Steps
Assuming login goes well, here's a checklist of GitHub repos needed to connect Argo CD:
- [ ] `acme/infra-acme` repo (Should already exist!)
- [ ] `acme/infra-acme/.github/environments` directory for private environment configurations. Primarily, that is the
[`cloudposse/github-action-yaml-config-query`](https://github.com/cloudposse/github-action-yaml-config-query)
action used to get the role, namespace, and cluster mapping for each environment.
- [ ] Two Argo CD desired state repos (should already be created by the `argocd-repo` component in an earlier step):
- [ ] `argocd-deploy-non-prod`
- [ ] `argocd-deploy-prod`
- [ ] `acme/example-app` repo, which should be a private repo generated from the
[app-on-eks-with-argocd](https://github.com/cloudposse-examples/app-on-eks-with-argocd) template
:::info Sensitive Log Output
Note that all of these workflow runs run from within your private app repo, so any sensitive log output will not be
public.
:::
### Environment Configuration
Update the `cloudposse/github-action-interface-environment` action to point to your infrastructure repository.
1. Set `implementation_repository` to `acme/infra-acme`
2. Verify `implementation_path`, `implementation_file`, and `implementation_ref` match your local configuration.
[Example app reference](https://github.com/cloudposse-examples/app-on-eks-with-argocd/blob/1abe260c7f43dde1c6610845e5a64a9d08eb8856/.github/workflows/workflow-cd-preview-argocd.yml#L167-L178)
### Verify GitHub OIDC Access Roles
The IdP permissions in IAM are case-sensitive, and yet the Docker image name must _not_ contain uppercase letters!
Make sure that your repo is allowed to assume roles for all relevant clusters and ECR repos:
1. Update the `github_actions_allowed_repos` variable in `ecr`, `eks/cluster`, or any other relevant components with
GitHub OIDC access.
2. If your GitHub Organization name has mixed capitalization, make sure these entries match that capitalization exactly.
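Because IAM trust policies match the repository claim case-sensitively while image names must be lowercase, it helps to normalize names up front. A small sketch with hypothetical values:

```shell
# Normalize a mixed-case GitHub org/repo for use in a Docker image name.
# (Hypothetical values; IAM trust policies still need the original casing.)
ORG_REPO="Acme-Corp/Example-App"
IMAGE_REPO=$(printf '%s' "$ORG_REPO" | tr '[:upper:]' '[:lower:]')
echo "$IMAGE_REPO"  # acme-corp/example-app
```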
### GitHub Environment Secrets
Add each of the following secrets to the `acme/example-app` repo:
1. `github-private-actions-pat`: `${{ secrets.PUBLIC_REPO_ACCESS_TOKEN }}`
2. `registry`: `${{ secrets.ECR_REGISTRY }}`
3. `secret-outputs-passphrase`: `${{ secrets.GHA_SECRET_OUTPUT_PASSPHRASE }}`
4. `ecr-region`: `${{ secrets.ECR_REGION }}`
5. `ecr-iam-role`: `${{ secrets.ECR_IAM_ROLE }}`
### Specify Ingress Group
1. Update the `deploy/releases/app.yaml`
2. Make sure the ingress is not set to `default`. It should likely be `alb-controller-ingress-group`. You can read more
   about this
   [in our docs on the ALB controller component](/layers/eks/faq/#how-does-the-alb-controller-ingress-group-determine-the-name-of-the-alb)
3. Set the domain accordingly. Each environment needs the service domain plus `environment.stage.tenant` (e.g.
   `use2.staging.plat.acme-svc.com`)
4. If your organization name has mixed case, you'll need to lowercase the `organization` parameter in the GitHub
   workflows: `feature-branch.yml`, `main-branch.yaml`, and `release.yaml`
## FAQ
### GitHub Apps vs Personal Access Tokens
We recommend using GitHub Apps for Argo CD integration with GitHub. GitHub Apps offer several advantages over Personal Access Tokens:
1. **Granular Permissions**: GitHub Apps can be granted access to specific repositories rather than requiring organization-wide access.
2. **Better Security**: GitHub Apps use JWT authentication and short-lived tokens, reducing the risk of token exposure.
3. **Improved Audit Capabilities**: Actions performed by GitHub Apps are clearly identified in audit logs.
4. **Rate Limiting**: GitHub Apps have their own rate limits, separate from user-based limits.
5. **Webhook Support**: GitHub Apps can receive webhooks for events in repositories they have access to.
6. **Multiple Installations**: The same GitHub App can be installed on different repositories with different permissions.
For more information on setting up Argo CD with GitHub Apps, see [Argo CD Integrations: How to set up Authorization for Argo CD with GitHub Apps](/layers/software-delivery/eks-argocd/tutorials/github-apps).
---
## How to set up Authorization for Argo CD with GitHub Apps
import Admonition from '@theme/Admonition'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Note from '@site/src/components/Note'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import Steps from '@site/src/components/Steps'
import TaskList from '@site/src/components/TaskList'
GitHub Apps is now the preferred method for Argo CD integration with GitHub. GitHub Apps provide more granular permissions, better security, and improved audit capabilities compared to Personal Access Tokens (PATs). This guide will walk you through setting up Argo CD with GitHub Apps.
- GitHub Apps provide more granular permissions than PATs
- GitHub Apps can be installed on specific repositories
- GitHub Apps have built-in rate limiting and audit capabilities
- GitHub Apps need the ability to bypass branch protection rules
## Required GitHub Apps
Argo CD integration requires multiple GitHub Apps, each with specific permissions and repository scopes:
### 1. Argo CD Instance
- **Actor**: Argo CD Deployment in Kubernetes
- **Use cases and permissions**:
- Allow Argo CD to read from Desired State Repositories:
- `repository.contents`: Read-Only
- `repository.metadata`: Read-Only
- Webhooks for Desired State Repositories:
- `repository.webhooks`: Read and Write
- `repository.metadata`: Read-Only
- **Repository scope**:
- `acme/argocd-deploy-non-prod`
- `acme/argocd-deploy-prod`
### 2. Argo CD Deploy Non-prod
- **Actor**: GitHub App installation access token (IAT) supplied to GitHub Actions Workflows
- **Use cases and permissions**:
- Push commits to Desired State Repositories
- `repository.contents`: Read and Write
- `repository.metadata`: Read-Only
- **Repository scope**:
- `acme/argocd-deploy-non-prod`
### 3. Argo CD Deploy Prod
- **Actor**: GitHub App IAT supplied to GitHub Actions Workflows
- **Use cases and permissions**:
- Push commits to Desired State Repositories
- `repository.contents`: Read and Write
- `repository.metadata`: Read-Only
- **Repository scope**:
- `acme/argocd-deploy-prod`
### 4. Argo CD Notifications
- **Use cases and permissions**:
- Commit status API (relay commit statuses from Desired State Repositories to app repositories via notification templates in `eks/argocd`):
- `repository.commit_statuses`: Write
- **Repository scope**:
- All of the app repositories
## Deployment
### Create GitHub Apps
First, you need to create the required GitHub Apps in your organization:
1. Go to your GitHub organization settings
2. Navigate to "GitHub Apps" and click "New GitHub App"
3. Create the following GitHub Apps with their respective permissions:
#### Argo CD Instance
- Name: `Argo CD Instance`
- Homepage URL: Your organization's homepage
- Webhook: Disabled
- Permissions:
- Allow Argo CD to read from Desired State Repositories:
- `repository.contents`: Read-Only
- `repository.metadata`: Read-Only
- Webhooks for Desired State Repositories:
- `repository.webhooks`: Read and Write
- `repository.metadata`: Read-Only
#### Argo CD Deploy Non-prod
- Name: `Argo CD Deploy Non-prod`
- Homepage URL: Your organization's homepage
- Webhook: Disabled
- Permissions:
- Push commits to Desired State Repositories:
- `repository.contents`: Read and Write
- `repository.metadata`: Read-Only
#### Argo CD Deploy Prod
- Name: `Argo CD Deploy Prod`
- Homepage URL: Your organization's homepage
- Webhook: Disabled
- Permissions:
- Push commits to Desired State Repositories:
- `repository.contents`: Read and Write
- `repository.metadata`: Read-Only
#### Argo CD Notifications
- Name: `Argo CD Notifications`
- Homepage URL: Your organization's homepage
- Webhook: Disabled
- Permissions:
- Commit status API (relay commit statuses from Desired State Repositories to app repositories via notification templates in `eks/argocd`):
- `repository.commit_statuses`: Write
### Generate and Store GitHub App Credentials
After creating each GitHub App, you need to generate and store credentials:
1. For each GitHub App:
1. On the GitHub App page, scroll down to "Private keys" and click "Generate a private key"
1. Download the private key file
1. Store the App ID, Installation ID, and private key securely in 1Password
2. Upload these credentials to AWS SSM Parameter Store
- Upload the `Argo CD Instance` private key to `/argocd/argo_cd_instance/app_private_key` in `core-auto`:
```bash
# Replace acme with your namespace or assume the role separately.
# Your default region should be the same as your primary region.
assume-role acme-core-gbl-auto-admin
chamber write argocd argo_cd_instance/app_private_key \
"$(cat /path/to/argocd-deploy-non-prod.private-key.pem)"
```
- Upload the `Argo CD Notifications` private key to `/argocd/argo_cd_notifications/app_private_key` in `core-auto`:
```bash
assume-role acme-core-gbl-auto-admin
chamber write argocd argo_cd_notifications/app_private_key \
  "$(cat /path/to/argocd-notifications.private-key.pem)"
```
### Install the GitHub Apps
Install each GitHub App on its required repositories:
1. For `Argo CD Instance`:
- Go to the GitHub App settings page
- Click "Install App" in the sidebar
- Select the repositories:
- `acme/argocd-deploy-non-prod`
- `acme/argocd-deploy-prod`
- Complete the installation
2. For `Argo CD Deploy Non-prod`:
- Go to the GitHub App settings page
- Click "Install App" in the sidebar
- Select the repository:
- `acme/argocd-deploy-non-prod`
- Complete the installation
3. For `Argo CD Deploy Prod`:
- Go to the GitHub App settings page
- Click "Install App" in the sidebar
- Select the repository:
- `acme/argocd-deploy-prod`
- Complete the installation
4. For `Argo CD Notifications`:
- Go to the GitHub App settings page
- Click "Install App" in the sidebar
- Select all app repositories, such as `acme/example-app-on-eks`
- Complete the installation
### Configure Branch Protection Rules
If branch protection rules are enabled in your GitHub Organization, you'll need to configure exceptions for the ArgoCD GitHub Apps. This allows ArgoCD to update repositories while still maintaining security. The GitHub Apps must be able to bypass branch protection rules in order for ArgoCD's automated deployments to work correctly.
Branch rulesets may be configured at the organization level, the repository level, or both. Check the enabled rulesets in both ArgoCD desired state repositories under "Code, planning, and automation" > "Branch rules", and add the ArgoCD GitHub Apps to the bypass list of any ruleset that prevents changes to the main branch.
## Configure Argo CD Desired State Repositories to Use GitHub Apps
This step should be pre-configured for Reference Architecture users.
Update your Argo CD desired state repository configuration to use the GitHub App:
```yaml
components:
  terraform:
    argocd-repo:
      vars:
        # 1. Use local access to apply this component rather than a PAT
        # https://registry.terraform.io/providers/integrations/github/latest/docs#github-cli
        use_local_github_credentials: true
        # 2. If synchronous mode is enabled, set the notifications to send to "github" and not to the "webhook"
        github_notifications:
          - "notifications.argoproj.io/subscribe.on-deploy-started.github: \"\""
          - "notifications.argoproj.io/subscribe.on-deploy-succeeded.github: \"\""
          - "notifications.argoproj.io/subscribe.on-deploy-failed.github: \"\""
        # 3. Optional, disable the SSH deploy keys to use a GitHub App
        # for the Argo CD instance to authenticate with the desired state repository
        deploy_keys_enabled: false
```
### Configure Argo CD to Use GitHub Apps
Reference Architecture users will need to update both the GitHub App IDs and Installation IDs.
Update your Argo CD configuration to use the GitHub Apps by modifying the component configuration shown below.
You can find the Installation ID by going to your GitHub Organization settings, selecting "GitHub Apps", clicking on your app, then selecting "Install App". The Installation ID will be in the URL, e.g. https://github.com/organizations/acme/settings/installations/44444444.
```yaml
# stacks/catalog/eks/argocd/defaults.yaml
components:
  terraform:
    eks/argocd:
      vars:
        # GitHub App (Argo CD Instance)
        # This GitHub App is used for the Argo CD instance to manage webhooks and read from the desired state repository.
        # i.e. https://github.com/acme/argocd-deploy-non-prod
        github_app_enabled: true
        github_app_id: "1234567"
        github_app_installation_id: "44444444"
        # The SSM parameter must exist in the account and region where Argo CD is deployed.
        ssm_github_app_private_key: "/argocd/argo_cd_instance/app_private_key"
        # Optional, disable the SSH deploy keys to use this GitHub App
        # for the Argo CD instance to authenticate with the desired state repository
        github_deploy_keys_enabled: false

        # GitHub App (Argo CD Notifications)
        # This GitHub App is used for the Argo CD instance to send commit status updates back to each app repository.
        # This is only required if synchronous mode is enabled.
        # i.e. https://github.com/acme/example-app-on-eks
        github_notifications_app_enabled: true
        github_notifications_app_id: "8901235"
        github_notifications_app_installation_id: "55555555"
        # The SSM parameter must exist in the account and region where Argo CD is deployed.
        ssm_github_notifications_app_private_key: "/argocd/argo_cd_notifications/app_private_key"
```
### Deploy the updated configuration
Redeploy the `argocd-repo` component for both nonprod and prod.
Then redeploy all instances of `eks/argocd`.
### Configure GitHub Actions Workflows
Update your GitHub Actions workflows to use the appropriate GitHub App.
Set the following GitHub environment variables for the application repositories:
1. Set `ARGO_CD_DEPLOY_NONPROD_APP_ID` in both `preview` and `dev`
2. Set `ARGO_CD_DEPLOY_PROD_APP_ID` in `staging` and `prod`.
Then set the following secrets:
1. Add `ARGO_CD_DEPLOY_NONPROD_APP_PRIVATE_KEY` to `preview` and `dev`.
2. Add `ARGO_CD_DEPLOY_PROD_APP_PRIVATE_KEY` to `staging` and `prod`.
*Add QA environments if necessary.*
Please be sure to update your GitHub Workflows to support GitHub App authentication. If you are unsure, please reach out to Cloud Posse.
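As a sketch of what that workflow change can look like, a step using the `actions/create-github-app-token` action can mint an installation token from the variables and secrets above and use it in place of a PAT (the repository names follow the `acme` placeholders used throughout this guide):

```yaml
# Hypothetical workflow excerpt: authenticate to the nonprod desired state
# repository with the GitHub App instead of a PAT.
steps:
  - name: Mint Argo CD deploy token
    id: app-token
    uses: actions/create-github-app-token@v1
    with:
      app-id: ${{ vars.ARGO_CD_DEPLOY_NONPROD_APP_ID }}
      private-key: ${{ secrets.ARGO_CD_DEPLOY_NONPROD_APP_PRIVATE_KEY }}
      owner: acme
      repositories: argocd-deploy-non-prod

  - name: Check out the desired state repository
    uses: actions/checkout@v4
    with:
      repository: acme/argocd-deploy-non-prod
      token: ${{ steps.app-token.outputs.token }}
```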
## References
- [Setting up Argo CD](/layers/software-delivery/eks-argocd/setup/)
- [GitHub Apps Documentation](https://docs.github.com/en/developers/apps)
- [GitHub Apps Permissions](https://docs.github.com/en/developers/apps/building-github-apps/setting-permissions-for-github-apps)
---
## How to setup Synchronous Notifications for Argo CD with GitHub Commit Statuses
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
Synchronous notifications are used in Argo CD release engineering workflows to notify an application workflow of a successful deployment. The application repo deploys an updated app manifest to the Argo CD deployment repo. Then the Argo CD app in the EKS cluster pulls and deploys that updated application. Depending on the result of that deployment, Argo CD triggers a notifier.
Our implementation of Argo CD breaks up notifications into "notifiers", "templates", and "triggers". You will see these keywords referenced frequently in the `eks/argocd` component.
## Create Notifiers and Webhook Tokens
A notifier is the top level resource for any notification. If you wish to set up any notification, you must first create a notifier to react to a given event. Furthermore, this is where we set up authorization for webhooks.
By default, we create 2 notifier webhooks: `app-repo-github-commit-status` and `argocd-repo-github-commit-status`, both of which use the `common_github-token` secret as an authorization token. That authorization token is programmatically pulled from AWS SSM using the path defined by `var.notifications_notifiers.ssm_path_prefix`, which is typically `/argocd/notifications/notifiers`. Using this prefix, the `/argocd/notifications/notifiers/common/github-token` parameter value is given to the `common_github-token` secret.
You may add additional notifiers as follows. In this use case, `var.notifications_notifiers` is deep merged with the 2 default notifiers for `app-repo-github-commit-status` and `argocd-repo-github-commit-status`. This allows the new webhook to use a different authorization token than the default `$common_github-token`.
```yaml
components:
  terraform:
    eks/argocd:
      vars:
        notifications_notifiers:
          webhook:
            foo-repo-github-commit:
              url: "https://api.github.com"
              headers:
                - name: "Authorization"
                  value: "Bearer $webhook_foo-repo-github-commit_github-token"
```
Similarly, the authorization token is programmatically pulled from AWS SSM using the path defined by `var.notifications_notifiers.ssm_path_prefix` _for any `webhook` notifier given_. Therefore, if you add an SSM parameter such as `/argocd/notifications/notifiers/foo-repo-github-commit/github-token`, the component will create the `webhook_foo-repo-github-commit_github-token` secret.
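To make the path-to-secret mapping concrete, here is a hypothetical helper (not part of the component) that reproduces the convention described above, assuming the default `ssm_path_prefix` of `/argocd/notifications/notifiers`:

```shell
# Sketch of the SSM path -> notifier secret-name convention described above.
# Assumes the default prefix /argocd/notifications/notifiers.
notifier_secret_name() {
  local rel="${1#/argocd/notifications/notifiers/}"  # e.g. foo-repo-github-commit/github-token
  local notifier="${rel%%/*}" key="${rel#*/}"
  if [ "$notifier" = "common" ]; then
    printf 'common_%s\n' "$key"                  # shared across all notifiers
  else
    printf 'webhook_%s_%s\n' "$notifier" "$key"  # scoped to one webhook notifier
  fi
}
```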
## Define Notification Templates
A template defines the event structure for a notification: the message and the webhook request. Again, by default we set up `app-repo-github-commit-status` and `argocd-repo-github-commit-status` templates.
```yaml
templates:
  template.app-deploy-failed: |
    "alertmanager": null
    "message": "Application {{ .app.metadata.name }} failed deploying new version."
    "webhook":
      "app-repo-github-commit-status":
        "body": "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"error\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
        "method": "POST"
        "path": "/repos/{{call .repo.FullNameByRepoURL .app.metadata.annotations.app_repository}}/statuses/{{.app.metadata.annotations.app_commit}}"
      "argocd-repo-github-commit-status":
        "body": "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"error\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
        "method": "POST"
        "path": "/repos/{{call .repo.FullNameByRepoURL .app.spec.source.repoURL}}/statuses/{{.app.status.operationState.operation.sync.revision}}"
  template.app-deploy-started: |
    "alertmanager": null
    "message": "Application {{ .app.metadata.name }} is now running new version of deployments manifests."
    "webhook":
      "app-repo-github-commit-status":
        "body": "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"pending\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
        "method": "POST"
        "path": "/repos/{{call .repo.FullNameByRepoURL .app.metadata.annotations.app_repository}}/statuses/{{.app.metadata.annotations.app_commit}}"
      "argocd-repo-github-commit-status":
        "body": "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"pending\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
        "method": "POST"
        "path": "/repos/{{call .repo.FullNameByRepoURL .app.spec.source.repoURL}}/statuses/{{.app.status.operationState.operation.sync.revision}}"
  template.app-deploy-succeeded: |
    "alertmanager": null
    "message": "Application {{ .app.metadata.name }} is now running new version of deployments manifests."
    "webhook":
      "app-repo-github-commit-status":
        "body": "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"success\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
        "method": "POST"
        "path": "/repos/{{call .repo.FullNameByRepoURL .app.metadata.annotations.app_repository}}/statuses/{{.app.metadata.annotations.app_commit}}"
      "argocd-repo-github-commit-status":
        "body": "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"success\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
        "method": "POST"
        "path": "/repos/{{call .repo.FullNameByRepoURL .app.spec.source.repoURL}}/statuses/{{.app.status.operationState.operation.sync.revision}}"
```
In order to add additional templates, use `var.notifications_templates`. This value is again deep merged with `app-repo-github-commit-status` and `argocd-repo-github-commit-status`.
```yaml
components:
  terraform:
    eks/argocd:
      vars:
        notifications_templates:
          app-deploy-succeeded:
            message: "Application {{ .app.metadata.name }} is now running new version of deployments"
            webhook:
              foo-repo-github-commit:
                body: "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"success\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
                method: "POST"
                path: "/repos/{{call .repo.FullNameByRepoURL .app.metadata.annotations.app_repository}}/statuses/{{.app.metadata.annotations.app_commit}}"
          app-deploy-started:
            message: "Application {{ .app.metadata.name }} is now running new version of deployments"
            webhook:
              foo-repo-github-commit:
                body: "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"pending\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
                method: "POST"
                path: "/repos/{{call .repo.FullNameByRepoURL .app.metadata.annotations.app_repository}}/statuses/{{.app.metadata.annotations.app_commit}}"
          app-deploy-failed:
            message: "Application {{ .app.metadata.name }} failed deploying new version."
            webhook:
              foo-repo-github-commit:
                body: "{\"context\":\"continuous-delivery/{{.app.metadata.name}}\",\"description\":\"ArgoCD\",\"state\":\"error\",\"target_url\":\"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}\"}"
                method: "POST"
                path: "/repos/{{call .repo.FullNameByRepoURL .app.metadata.annotations.app_repository}}/statuses/{{.app.metadata.annotations.app_commit}}"
```
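The `body` fields above are template strings that Argo CD renders into a GitHub commit-status payload. As a small sketch of what the rendered JSON looks like for given values (the state, app name, and URL here are placeholders):

```shell
# Sketch: the rendered commit-status body produced by the templates above.
# Arguments: state (pending|success|error), app name, Argo CD URL -- all placeholders.
commit_status_body() {
  printf '{"context":"continuous-delivery/%s","description":"ArgoCD","state":"%s","target_url":"%s/applications/%s"}' \
    "$2" "$1" "$3" "$2"
}
```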
## Configure Triggers
Finally, a trigger determines when these notifications are sent. By default we set up `app-repo-github-commit-status` and `argocd-repo-github-commit-status` triggers.
```yaml
triggers:
  trigger.on-deploy-failed: |
    - "oncePer": "app.status.sync.revision"
      "send":
        - "app-deploy-failed"
      "when": "app.status.operationState.phase in ['Error', 'Failed'] or ( app.status.operationState.phase == 'Succeeded' and app.status.health.status == 'Degraded' )"
  trigger.on-deploy-started: |
    - "oncePer": "app.status.sync.revision"
      "send":
        - "app-deploy-started"
      "when": "app.status.operationState.phase in ['Running'] or ( app.status.operationState.phase == 'Succeeded' and app.status.health.status == 'Progressing' )"
  trigger.on-deploy-succeeded: |
    - "oncePer": "app.status.sync.revision"
      "send":
        - "app-deploy-succeeded"
      "when": "app.status.operationState.phase == 'Succeeded' and app.status.health.status == 'Healthy'"
```
A single trigger may fire _multiple templates_. For example, `trigger.on-deploy-succeeded` fires both `template.app-deploy-succeeded.webhook.app-repo-github-commit-status` and `template.app-deploy-succeeded.webhook.argocd-repo-github-commit-status`.
## References
- [Setting up ArgoCD](/layers/software-delivery/eks-argocd/setup/)
- [Argo CD Notifications (official)](https://argocd-notifications.readthedocs.io/en/stable/)
- [GitHub Commit statuses API](https://docs.github.com/en/rest/commits/statuses?apiVersion=2022-11-28#create-a-commit-status)
---
## How to create an AWS Identity Center Application for ArgoCD
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
In order to authenticate with ArgoCD, we recommend using an AWS IAM Identity Center SAML Application. These apps can use the existing Identity Center groups that we've already set up as part of the [Identity layer](/layers/identity).
## Create AWS Identity Center Applications
1. For each `dev`, `staging`, and `prod` in the `plat` tenant, create an [IAM Identity Center Application](https://docs.aws.amazon.com/singlesignon/latest/userguide/samlapps.html).
2. Use the 'callback' url of `eks/argocd` for both the ACS URL and the SAML Audience fields. For example, `https://argocd.use1.dev.plat.acme-svc.com/api/dex/callback`. This should be your _service domain_.
3. Next, update the custom SAML application attributes:
| Name | Value | Type |
| :-------- | :---------------- | :------------ |
| `Subject` | `${user:subject}` | `persistent` |
| `email` | `${user:email}` | `unspecified` |
| `groups` | `${user:groups}` | `unspecified` |
4. Now assign AWS Identity Center groups to the SAML app. If you ever recreate the groups, you'll need to go back to the SAML application and remove/re-add the group.
5. Record the IDs of each group you assigned. If you've recently updated the groups, you'll likely need to redo this step as group IDs change on any significant updates.
6. Update the config for `eks/argocd` to use the given AWS Identity Center groups:
```yaml
components:
  terraform:
    eks/argocd:
      vars:
        # Note: the IDs for AWS Identity Center groups will change if you alter/replace them:
        argocd_rbac_groups:
          - group: deadbeef-dead-beef-dead-beefdeadbeef
            role: admin
          - group: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
            role: reader
```
7. Finally, for each stage run `atmos terraform deploy sso-saml-provider -s plat-use1-{ stage }`
:::info Tip
If you get any errors using AWS SSO, make sure the `Subject` attribute is set to `persistent` and connect to the cluster with `set-cluster plat-{ region }-{ stage } admin && kubens argocd` and then delete the dex pod to reset it.
:::
## References
- [Setting up ArgoCD](/layers/software-delivery/eks-argocd/setup/)
---
## How to set up Authorization for ArgoCD with GitHub PATs
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps'
import Step from '@site/src/components/Step'
import StepNumber from '@site/src/components/StepNumber'
import Admonition from '@theme/Admonition'
import Note from '@site/src/components/Note'
import AtmosWorkflow from '@site/src/components/AtmosWorkflow';
The deployment process for ArgoCD includes setting up access tokens for a number of responsibilities. We will need to create the desired state repositories with the necessary access, create webhooks for these repos, grant the app in the EKS cluster permission to send notifications, and grant access for GitHub workflows.
:::tip Fine-grained Personal Access Tokens (PAT)
Fine-grained PATs are preferred to classic PATs. All PATs except the Notifications GitHub PAT will be fine-grained PATs. See [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens).
:::
## Establish a Bot User
We deploy a number of GitHub Personal Access Tokens (PATs) as part of the EKS with ArgoCD application. By default, each PAT is given the least access required for its job.
Each one of these PATs will be associated with a given user. We recommend creating or using an existing "bot" user. For example, at Cloud Posse we have the "cloudpossebot" GitHub user. This user has its own email address and GitHub account, is accessible from our internal 1Password vault for all privileged users, and has all access keys and tokens stored with it in 1Password.
This bot user will need permission to manage a few repositories in your Organization. If you wish to simplify deployment, you can grant this user permission to create repositories. See [Can we use the Bot user to create the ArgoCD repos](#can-we-use-the-bot-user-to-create-the-argocd-repos).
Use this bot user for all access tokens in the remainder of this guide.
## Create ArgoCD GitHub Repositories
Create the two required ArgoCD GitHub repos:
- [acme/argocd-deploy-non-prod](https://github.com/acme/argocd-deploy-non-prod)
- [acme/argocd-deploy-prod](https://github.com/acme/argocd-deploy-prod)
Then grant the bot user `Admin` access to these two repositories.
## Create the first GitHub PAT
In order for Terraform to manage these two GitHub repositories for ArgoCD, we must deploy our first GitHub PAT ([follow this manual to create a PAT](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)).
All tokens created in this setup guide use a Cloud Posse naming convention. If a different pattern makes more sense for your organization, feel free to change it up! See [FAQ: What is the Cloud Posse naming convention for PATs?](#what-is-the-cloud-posse-naming-convention-for-pats).
This token needs permission to manage the ArgoCD deployment repositories.
1. Name this PAT `argocd/terraform/argocd-repo`.
2. For repository access, select "Only select repositories" and choose both `acme/argocd-deploy-non-prod` and `acme/argocd-deploy-prod`.
3. Grant this PAT the following permissions
```diff
# Repository permissions
+ Administration: Read and write
+ Contents: Read and write
+ Metadata: Read-only
# Organization permissions
+ Members: Read-only
```
Use the following workflow to upload this PAT to AWS SSM:
Or manually save this PAT to AWS SSM at `argocd/github/api_key` in the `core-auto` account.
## Deploy ArgoCD GitHub Repositories Configuration
Deploy the ArgoCD configuration for the two GitHub repos with the following workflow:
Once this finishes, review the two repos in your GitHub Organization. These should both be fully configured at this point.
- [acme/argocd-deploy-non-prod](https://github.com/acme/argocd-deploy-non-prod)
- [acme/argocd-deploy-prod](https://github.com/acme/argocd-deploy-prod)
Now that the ArgoCD deployment repos are configured, we need to create GitHub PATs for ArgoCD.
## Create the Webhook GitHub PATs
The next two PATs created will be used by Terraform with the `eks/argocd` component; one for `argocd-deploy-non-prod` and one for `argocd-deploy-prod`. Each of these PATs is used to register the webhook in GitHub for the ArgoCD Application created with this given component. Terraform will pull that PAT from SSM typically using the `argocd/github` path in `plat-dev`, `plat-staging`, and `plat-prod` accounts.
You may notice these PATs use the same SSM path as the first PAT yet are deployed to different accounts. We intentionally separate these PATs in order to adhere to the principle of least privilege. This way, each account can pull a PAT from the same SSM path with the minimal set of permissions that account requires.
These PATs can be combined into a single PAT if preferred.
Create two PATs with the following allowed permissions. First nonprod:
1. Name this PAT `argocd/terraform-webhooks/nonprod`
2. Limit this PAT to `acme/argocd-deploy-non-prod`
3. Grant the following permission:
```diff
Repository:
+ Webhooks: Read and write
+ Metadata: Read-only
```
4. Use the following workflow to upload this PAT to AWS SSM:
Or manually save this PAT to AWS SSM at `argocd/github/api_key` in the `plat-dev` and `plat-staging` accounts.
Now repeat the same process for production:
1. Name this PAT `argocd/terraform-webhooks/prod`
2. Limit this PAT to `acme/argocd-deploy-prod`
3. Grant the following permission (again):
```diff
Repository:
+ Webhooks: Read and write
+ Metadata: Read-only
```
4. Use the following workflow to upload this PAT to AWS SSM:
Or manually save this PAT to AWS SSM at `argocd/github/api_key` in the `plat-prod` account.
## Create the Notifications GitHub PAT
The next PAT is used by the ArgoCD notifications system to set the GitHub commit status on successful deployments. This PAT is stored in SSM and pulled by the `eks/argocd` component. That component will pass the token to the ArgoCD application in the given EKS cluster. That ArgoCD application uses the PAT only when synchronous mode is enabled.
As of January 2023, GitHub does not support fine-grained PATs with the [commit statuses API](https://docs.github.com/en/rest/commits/statuses?apiVersion=2022-11-28#create-a-commit-status). Therefore, we must create a _classic_ PAT for the bot user.
1. Name this _classic_ PAT `ARGOCD_APP_NOTIFICATIONS`
2. Grant the following permission:
```diff
+ repo:status
```
3. Then check that the bot user has access to the _application_ repo. For example for Cloud Posse, that is [cloudposse-examples app-on-eks-with-argo](https://github.com/cloudposse-examples/app-on-eks-with-argocd).
Use the following workflow to upload this PAT to AWS SSM:
Or manually save this PAT to AWS SSM at `argocd/notifications/notifiers/common/github-token` in the `plat-dev`, `plat-staging`, and `plat-prod` accounts.
## Create the Workflows GitHub PATs
The final two PATs are used in the release engineering workflows; again one for nonprod and one for prod. Each PAT will need access to two repos. First, it needs read access to the private environment configuration. By default, this is the `infra-repo` repository. Second, it needs write access to the given ArgoCD deploy repository in order to update the deployment configuration for new applications.
For the nonprod PAT:
1. Name this PAT `argocd/github/nonprod`
2. Limit this PAT to `acme/argocd-deploy-non-prod` and `acme/infra-repo`
3. Grant this PAT the following permissions:
```diff
Repository
+ Contents: Read and write
+ Metadata: Read-only
```
This PAT _does not_ need to be uploaded to AWS SSM; instead, store it for reference in 1Password. We will upload this PAT as a GitHub secret for the release workflows, typically as the `ARGOCD_GITHUB_NONPROD` secret.
Now for the prod PAT:
1. Name this PAT `argocd/github/prod`
2. Limit this PAT to `acme/argocd-deploy-prod` and `acme/infra-repo`
3. Grant this PAT the following permissions:
```diff
Repository
+ Contents: Read and write
+ Metadata: Read-only
```
Again, store this PAT for reference in 1Password and upload it as a GitHub secret for the release workflows, typically as the `ARGOCD_GITHUB_PROD` secret.
## FAQ
### What is the Cloud Posse naming convention for PATs?
You can name your PATs however you prefer, but for the sake of consistency, we recommend establishing a naming convention for PATs. At Cloud Posse we prefer to use the following pattern:
`<service>/<consumer>/<purpose>`
However, *classic* PAT secret names can only contain alphanumeric characters (`[a-z]`, `[A-Z]`, `[0-9]`) or underscores (`_`), must start with a letter or underscore, and cannot contain spaces. So for *classic* PATs, use all caps and underscores.
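As a quick sanity check, the constraint can be expressed as a pattern. This hypothetical helper (not part of any Cloud Posse tooling) validates a candidate *classic* PAT secret name:

```shell
# Sketch: validate a secret name against the classic-PAT naming constraint above
# (letters, digits, underscores only; must start with a letter or underscore).
valid_classic_secret_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_]*$'
}
```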
For example:
```console
# 1. Terraform access for argocd-repo. Requires access to apply both prod and nonprod
argocd/terraform/argocd-repo # needs read on org members, write admin and code on both argocd repos
# 2. Terraform access for eks/argocd webhooks
argocd/terraform-webhooks/nonprod # needs permission to write repository hooks on argocd-deploy-nonprod
argocd/terraform-webhooks/prod # needs permission to write repository hooks on argocd-deploy-prod
# 3. ArgoCD access for app in cluster
ARGOCD_APP_NOTIFICATIONS # needs permission to write commit statuses on any application repo
# 4. GitHub Workflow access
argocd/github/nonprod # needs write access to argocd-deploy-nonprod and read for infra
argocd/github/prod # needs write access to argocd-deploy-prod and read for infra
```
### Can we use the Bot user to create the ArgoCD repos?
By default, we do not require that the bot user creates the ArgoCD deployment repositories. However, the component does support enabling that option. If you wish to allow the bot user to both create and manage the ArgoCD deployment repos, grant the bot user permission in your Organization, and then set `var.create_repo` to `true` in `stacks/catalog/argocd-repo/defaults.yaml`.
### Resource not accessible by personal access token
```console
{
"message": "Resource not accessible by personal access token",
"documentation_url": "https://docs.github.com/rest/commits/statuses#create-a-commit-status"
}
```
You may see this message if you attempt to use a fine-grained PAT to set a GitHub commit status. As of January 2023, GitHub does not support fine-grained PATs with the [commit statuses API](https://docs.github.com/en/rest/commits/statuses?apiVersion=2022-11-28#create-a-commit-status). Therefore, we must create a _classic_ PAT for the bot user.
### Forbids access via a personal access token (classic).
```console
{
"message": "`acme` forbids access via a personal access token (classic). Please use a GitHub App, OAuth App, or a personal access token with fine-grained permissions.",
"documentation_url": "https://docs.github.com/rest/commits/statuses#create-a-commit-status"
}
```
You must enable classic PATs for the GitHub Organization.
Under the GitHub Organization settings, go to `Personal access tokens` > `Settings` > `Personal access token (classic)` and select `Allow access via personal access tokens (classic)`.
### Why not use a GitHub App?
At the time this component was developed, GitHub Apps were not fully supported. However, we plan to update our recommendation to a GitHub App soon! Please check with Cloud Posse on the latest status.
## References
- [Setting up ArgoCD](/layers/software-delivery/eks-argocd/setup/)
---
## Tutorials
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
Here are some additional tutorials that will help you along in your usage of Argo CD.
---
## Implementing CI/CD
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ReactPlayer from 'react-player';
import Note from '@site/src/components/Note';
This document assumes the reader understands the importance of CI/CD and DevOps as a concept.
For more, see [What is DevOps](https://aws.amazon.com/devops/what-is-devops/).
## The Problem
At this point, we have defined a complete Infrastructure as Code (IaC) environment, yet we still need some way to deliver our application
to users. Release Engineering is that process. Primarily, this includes Continuous Integration (CI) and Continuous Delivery (CD). CI/CD acts
as the glue between our IaC and the delivery of the software to customers and is how an app consumes its platform.
Historically, developers would define a CI/CD process for a given application, typically with Jenkins, and then duplicate that process for
each application. Quickly we would have many different pipelines for our library of apps, each with an individual purpose specific to that
given app. As pipelines grew in numbers and complexity, entire teams would be hired solely to manage these systems, and CI/CD became the
scapegoat system that everyone loves to hate.
For years, CI/CD was treated as a single concept. Today, CI is distinct from CD: CI is the process of building an artifact, and CD is the process of delivering that artifact. There are many methods of CI and of CD. For example, Spacelift is CD for Terraform; ArgoCD is CD for Kubernetes.
Similarly there are many tools for CI. Jenkins, GitHub Actions, CircleCI, and countless others all offer solutions. Ultimately, any solution
must be codified and create a standard pattern of software delivery. It needs to support many languages and many frameworks. All Git
workflows need to be supported for these different languages and frameworks in a consistent way, such that we do not create snowflakes.
Yet one size will not fit all, so we need the ability to break glass without throwing out the whole solution.
Modern day Release Engineering is complex. Pipelines grow exponentially and often cannot be tested. Companies rely on CI/CD to ship software,
yet have no way to test CI/CD _itself_.
## Our Solution
### Concept
The release engineering process is critical to successful software development. On the one hand,
it is responsible for the continuous resilient delivery of the software at a consistent quality.
On the other hand, the process can be treated as
a part of the [organization's value stream](https://www.thoughtworks.com/radar/techniques/path-to-production-mapping),
can highlight specific organizational [structure](https://en.wikipedia.org/wiki/Conway%27s_law),
and demonstrate the maturity of the engineering culture. Release engineering pipelines are required
to measure [the organization's performance](https://dora.dev/).
At Cloud Posse, we treat CI/CD pipelines as [software](https://www.thoughtworks.com/radar/techniques/pipelines-as-code) and as
an automated part of the release engineering process. When developing CI/CD pipelines, we apply practices and design principles
well-established in software engineering, including but not limited to:
* [Separation of concern](https://en.wikipedia.org/wiki/Separation_of_concerns)
* [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself)
* [Test automation](https://en.wikipedia.org/wiki/Test_automation)
* [Versioning](https://en.wikipedia.org/wiki/Software_versioning)
* [Design by contract](https://en.wikipedia.org/wiki/Design_by_contract)
* [Convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) and [code](https://en.wikipedia.org/wiki/Convention_over_Code)
While there is value in the flexibility of creating custom pipelines in each repo, we value convention
across pipelines to improve maintainability and consistency across the organization.
This interface should be standardized regardless of what
is being delivered. To minimize boilerplate in pipelines, we create shared workflows with GitHub Actions that define how to handle
specific aspects of the Release Engineering process. These workflows use a combination of Reusable Workflows, Composite Actions, and regular
Actions (both public and private). These layers provide multiple levels of abstraction and allow us to define configuration per environment and
create exceptions at any given point. Much of the common boilerplate can be consolidated into reusable steps at each level
of abstraction.
Composite Actions consolidate common steps into a single, modular action that can be documented, parameterized, and tested. Reusable Workflows
combine these tested Composite Actions and regular Actions into common processes. Composite Actions can live anywhere, public or private, but
Reusable Workflows must belong to an Organization. A Reusable Workflow can have multiple jobs that run together, whereas a
Composite Action cannot have multiple jobs but can have multiple steps. Reusable Workflows can call other Reusable Workflows, and Composite
Actions can call other Composite Actions. Neither Reusable Workflows nor Composite Actions have a trigger; both are functions that take
inputs and produce outputs, and therefore can and should be documented and tested.
Cloud Posse defines common patterns across customers and offers several solutions. We have many workflows for many purposes. For example
you could have a shared CI workflow to provide linting, testing, and validation. Or you could have several CD workflows: CD to deploy an app
to EKS with ArgoCD, CD to deploy code to a Lambda function, or a CD to deploy a Docker image to ECR. All these workflows are stored in YAML
files and follow a common convention. Finally, they are organized consistently so that we are able to introduce additional interfaces down
the road.
## Workflows
```mermaid
---
title: Deployment Lifecycle
---
stateDiagram-v2
direction LR
[*] --> pr : Create Pull Request
pr --> main : merge
main --> release : Create Release
state "Pull Request" as pr {
[*] --> label
label --> preview
preview --> [*]
state "Add deploy Label" as label
state "Deploy to Preview" as preview
}
state "Main Branch" as main {
[*] --> dev
dev --> [*]
state "Deploy to Dev" as dev
}
state "Release" as release {
[*] --> staging
staging --> confirm
confirm --> prod
prod --> [*]
state "Deploy to Staging" as staging
state "Confirm" as confirm
state "Deploy to Prod" as prod
}
```
Create a Pull Request with changes to the application. Add the "deploy" label to the PR, which will trigger a deployment to the Preview
environment. Validate your changes and approve the PR. When the PR is merged into main, a deployment to Dev will be triggered next. When
ready to cut a release, create a Release with GitHub. This will trigger another workflow to first deploy to Staging and then will wait for
manual confirmation. Once manually approved, the workflow will continue and deploy to Production.
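As a hedged sketch, the release stage of this lifecycle can be wired up with a workflow along these lines (the job names, the `production` environment name, and the reusable workflow path are placeholders, not part of the reference architecture):

```yaml
name: Release
on:
  release:
    types: [published]
permissions:
  id-token: write
  contents: read
jobs:
  deploy-staging:
    # Placeholder path to a shared CD workflow
    uses: ./.github/workflows/workflow-cd-example.yml
    with:
      environment: staging
  deploy-prod:
    needs: deploy-staging
    # A GitHub "environment" with required reviewers provides the manual confirmation gate
    environment: production
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to production"
```

The manual approval comes from configuring required reviewers on the `production` environment in the repository settings, so the `deploy-prod` job pauses until someone approves it.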
### Release Engineering Flavors
Refer to our stack specific implementations for more details:
- [**Dockerized App on EKS with ArgoCD**](/layers/software-delivery/eks-argocd/)
- [**Dockerized App on ECS with Ecspresso**](/layers/software-delivery/ecs-ecspresso/)
- [**Lambda App**](/layers/software-delivery/lambda)
## FAQ
### I cannot assume the AWS roles from GitHub Workflows
The following error commonly occurs when setting up GitHub OIDC roles and permission:
```
Error: Could not assume role with OIDC: Not authorized to perform sts:AssumeRoleWithWebIdentity
```
To resolve this error, make sure your workflow has appropriate permission to assume GitHub OIDC roles.
```yaml
permissions:
  id-token: write # This is required for requesting the JWT
  contents: read  # This is required for actions/checkout
```
### How does GitHub OIDC work with AWS?
Please see [How to use GitHub OIDC with AWS](/layers/github-actions/github-oidc-with-aws)
---
## Lambda with GitHub Workflows
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CollapsibleText from '@site/src/components/CollapsibleText';
Deploy Lambda functions using GitHub Workflows with a code-driven approach. The build process updates S3 with assets and SSM with the new version, requiring a Terraform run for promotion. GitHub Workflows manage the entire lifecycle, from building and packaging Lambda functions to deploying them with reusable workflows.
### Overview
```mermaid
---
title: Lambdas with GitHub Workflows
---
flowchart LR
subgraph core-auto
publish["publish"]
promote["promote"]
end
subgraph core-artifacts
artifacts_bucket["@Bucket lambda artifacts"]
end
subgraph "SSM parameters"
subgraph "plat-dev"
dev_lambda_tag["@SystemsManager /lambda/hello/tag"]
end
subgraph "plat-staging"
staging_lambda_tag["@SystemsManager /lambda/hello/tag"]
end
subgraph "plat-prod"
prod_lambda_tag["@SystemsManager /lambda/hello/tag"]
end
end
pr["@PullRequest PR #1234 "] --> publish --> artifacts_bucket
publish --> dev_lambda_tag
push["@GitBranch push → main"] --> publish --> artifacts_bucket
publish --> staging_lambda_tag
release["@GitHubRelease release"] --> promote
promote --> artifacts_bucket
promote --> prod_lambda_tag
artifacts_bucket --> staging_lambda_tag
```
### Build and Deployment
The application repository updates S3 with build assets, then updates SSM with the new version.
Each SSM update is effectively a promotion and requires a Terraform run to realize the change.
```yaml title=".github/workflows/reusable-publish-lambda-zip.yaml"
name: Publish Lambda Function

on:
  workflow_call:
    inputs:
      function-name:
        required: true
        type: string
      source-folder:
        required: true
        type: string
      artifacts-bucket-and-prefix:
        required: true
        type: string
      aws-region:
        required: true
        type: string
    secrets:
      cicd-role-arn:
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  publish:
    runs-on: self-hosted
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.cicd-role-arn }}
          aws-region: ${{ inputs.aws-region }}
      - name: Checkout
        uses: actions/checkout@v4
      - name: Package Lambda
        run: |
          cd ${{ inputs.source-folder }} && zip ${{ github.sha }}.zip *
      - name: Push Lambda
        run: |
          aws s3 cp ${{ inputs.source-folder }}/${{ github.sha }}.zip s3://${{ inputs.artifacts-bucket-and-prefix }}/${{ inputs.function-name }}/ --sse
      - name: Write tag to SSM
        run: |
          aws ssm put-parameter --name /lambda/${{ inputs.function-name }}/tag --type String --value ${{ github.sha }} --overwrite
```
```yaml title=".github/workflows/reusable-promote-lambda-zip.yaml"
name: Promote Lambda Function

on:
  workflow_call:
    inputs:
      function-name:
        required: true
        type: string
      artifacts-bucket-and-prefix:
        required: true
        type: string
      aws-region:
        required: true
        type: string
    secrets:
      cicd-role-arn:
        required: true
      staging-role-arn:
        required: true
      prod-role-arn:
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  promote:
    runs-on: self-hosted
    steps:
      - name: Configure AWS credentials for 'cicd' role
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.cicd-role-arn }}
          aws-region: ${{ inputs.aws-region }}
      - name: Configure AWS credentials for source stage
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
          role-duration-seconds: 3000
          role-skip-session-tagging: true
          role-to-assume: ${{ secrets.staging-role-arn }}
          aws-region: ${{ inputs.aws-region }}
      - name: Checkout
        uses: actions/checkout@v4
      - name: Get tag from SSM
        id: get-tag-from-ssm
        run: |
          TAG=$(aws ssm get-parameter --name /lambda/${{ inputs.function-name }}/tag | jq -r .Parameter.Value)
          echo "tag=$TAG" >> $GITHUB_OUTPUT
      - name: Copy Lambda to local
        run: |
          aws s3 cp s3://${{ inputs.artifacts-bucket-and-prefix }}/${{ inputs.function-name }}/${{ steps.get-tag-from-ssm.outputs.tag }}.zip .
      - name: Configure AWS credentials for 'cicd' role
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.cicd-role-arn }}
          aws-region: ${{ inputs.aws-region }}
      - name: Configure AWS credentials for destination stage
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
          role-duration-seconds: 3000
          role-skip-session-tagging: true
          role-to-assume: ${{ secrets.prod-role-arn }}
          aws-region: ${{ inputs.aws-region }}
      - name: Copy Lambda to destination bucket
        run: |
          aws s3 cp ${{ steps.get-tag-from-ssm.outputs.tag }}.zip \
            s3://${{ inputs.artifacts-bucket-and-prefix }}/${{ inputs.function-name }}/ --sse
      - name: Write tag to SSM
        run: |
          aws ssm put-parameter --name /lambda/${{ inputs.function-name }}/tag --type String --value ${{ steps.get-tag-from-ssm.outputs.tag }} --overwrite
```
```yaml title=".github/workflows/reusable-deploy-lambda-zip.yaml"
name: Deploy Lambda via Spacelift

on:
  workflow_call:
    inputs:
      function-name:
        required: true
        type: string
      stack:
        required: true
        type: string
    secrets:
      spacelift-api-key-id:
        required: true
      spacelift-api-key-secret:
        required: true

jobs:
  deploy:
    runs-on: self-hosted
    container: 123456789012.dkr.ecr.us-east-2.amazonaws.com/acme/infra:latest
    steps:
      - name: Trigger Spacelift Stack Execution
        env:
          SPACELIFT_API_ENDPOINT: https://acme.app.spacelift.io
          SPACELIFT_API_KEY_ID: ${{ secrets.spacelift-api-key-id }}
          SPACELIFT_API_KEY_SECRET: ${{ secrets.spacelift-api-key-secret }}
        run: |
          spacectl stack deploy --id ${{ inputs.stack }}-lambda-${{ inputs.function-name }} --tail
```
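An application repository can then invoke these reusable workflows with a thin caller along these lines (the repository path, function name, bucket, and region below are placeholders, not values from the reference architecture):

```yaml
name: Main Branch
on:
  push:
    branches: [main]
permissions:
  id-token: write
  contents: read
jobs:
  publish:
    # Placeholder path to the centralized workflow repository
    uses: acme/github-action-workflows/.github/workflows/reusable-publish-lambda-zip.yaml@main
    with:
      function-name: hello
      source-folder: src
      artifacts-bucket-and-prefix: acme-core-artifacts/lambda
      aws-region: us-east-2
    secrets:
      cicd-role-arn: ${{ secrets.CICD_ROLE_ARN }}
```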
### Implementation
- [`lambda`](/components/library/aws/lambda/): This component is responsible for creating the Lambda function.
After promotion, the Lambda function is updated with the new version.
## References
- [Lambda Setup](/layers/software-delivery/lambda)
- [Foundation Release Engineering](/layers/software-delivery/lambda/)
- [Decide on Pipeline Strategy](/layers/software-delivery/design-decisions/decide-on-pipeline-strategy)
---
## Software Delivery
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import StepNumber from '@site/src/components/StepNumber';
import Step from '@site/src/components/Step';
import PrimaryCTA from '@site/src/components/PrimaryCTA';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import ReactPlayer from 'react-player';
Software delivery is the process of moving your applications from development to production. This involves building, testing, deploying, and promoting them through environments like dev, staging, and production, with approval gates at each stage. Whether you're using EKS, ECS, or Lambdas, the solutions may vary slightly, but we maintain a consistent, reusable pattern across all applications.
## Deploy all Backing Services & Databases
Ensure all the backing services that your applications depend on are deployed and running. This includes databases, caches, and message queues, etc.
Get Started
## Implement CI/CD
Choose a path for delivery of your services with GitHub Actions. The reference architecture supports deployment to AWS EKS, Amazon ECS, and Lambda functions.
We use the `ecspresso` deployment tool for Amazon ECS to manage ECS services using a code-driven approach, alongside reusable GitHub Action workflows. This setup allows tasks to be defined with Terraform within the infrastructure repository, and task definitions to reside alongside the application code.
Get Started
Argo CD is an open-source declarative, GitOps continuous delivery tool for Kubernetes applications. It enables developers to manage and deploy applications on Kubernetes clusters using Git repositories as the source of truth for configuration and definitions. Our Argo CD implementation follows the GitOps methodology and integrates with GitHub Actions, ensuring that the entire application configuration, including manifests, parameters, and even application state, is stored in a Git repository.
Get Started
Deploy Lambda functions using GitHub Workflows with a code-driven approach. The build process updates S3 with assets and SSM with the new version, requiring a Terraform run for promotion. GitHub Workflows manage the entire lifecycle, from building and packaging Lambda functions to deploying them, with reusable workflows.
Get Started
Once you're done deploying your apps, it's time to start monitoring everything. We'll show you how to do that next.
Our Examples
### Reusable Workflows
We've consolidated all the workflows into the example applications,
including the GitHub reusable workflows.
We've done this to make it easier for Developers to understand how the example leverages all the workflows.
In practice, we recommend moving the reusable workflows into a centralized repository,
where they can be shared by other application repositories.
For example,
we would recommend moving all the `ecspresso-*` and all `workflow-*` workflow files to a centralized repository
(e.g. a repository named `github-action-workflows`, alternatively the `infrastructure` repository directly).
The best solution will depend on your GitHub Organization structure and team size.
Pick what works for you and your team.
When your workflows are consolidated, an application repository needs only three workflows, plus optional hotfix workflows:
1. `feature-branch.yaml`
2. `main-branch.yaml`
3. `release.yaml`
4. (optional) `hotfix-branch.yaml`
5. (optional) `hotfix-enabled.yaml`
6. (optional) `hotfix-release.yaml`
The remaining workflows are the reusable/shared implementation. This approach makes it easier to define a standardized CI/CD interface for all of your services.
```console
.github
├── configs/
│ ├── draft-release.yml
│ └── environment.yaml
└── workflows/
├── ecspresso-feature-branch.yml
├── ecspresso-hotfix-branch.yml
├── ecspresso-hotfix-mixin.yml
├── ecspresso-hotfix-release.yml
├── ecspresso-main-branch.yml
├── ecspresso-release.yml
├── feature-branch.yml
├── main-branch.yaml
├── release.yaml
├── workflow-cd-ecspresso.yml
├── workflow-cd-preview-ecspresso.yml
├── workflow-ci-dockerized-app-build.yml
├── workflow-ci-dockerized-app-promote.yml
├── workflow-controller-draft-release.yml
├── workflow-controller-hotfix-reintegrate.yml
├── workflow-controller-hotfix-release-branch.yml
└── workflow-controller-hotfix-release.yml
```
After moving to a centralized workflow repository, you should have a setup like the following:
```console
Application Repository
├── .github
│ ├── configs/
│ │ └── draft-release.yml
│ └── workflows/
│ ├── feature-branch.yml
│ ├── main-branch.yaml
│ └── release.yaml
└── ...
github-action-workflows
├── .github/
│ └── workflows
│ ├── ecspresso-feature-branch.yml
│ ├── ecspresso-hotfix-branch.yml
│ ├── ecspresso-hotfix-mixin.yml
│ ├── ecspresso-hotfix-release.yml
│ ├── ecspresso-main-branch.yml
│ ├── ecspresso-release.yml
│ ├── workflow-cd-ecspresso.yml
│ ├── workflow-cd-preview-ecspresso.yml
│ ├── workflow-ci-dockerized-app-build.yml
│ ├── workflow-ci-dockerized-app-promote.yml
│ ├── workflow-controller-draft-release.yml
│ ├── workflow-controller-hotfix-reintegrate.yml
│ ├── workflow-controller-hotfix-release-branch.yml
│ └── workflow-controller-hotfix-release.yml
└── ...
```
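With this layout, each workflow left in the application repository is just a thin caller. For example, a hypothetical `main-branch.yaml` might delegate to the centralized repository like this (the organization and ref are placeholders):

```yaml
name: Main Branch
on:
  push:
    branches: [main]
jobs:
  deploy:
    # Placeholder org/repo; pin the ref that matches your release process
    uses: acme/github-action-workflows/.github/workflows/ecspresso-main-branch.yml@main
    secrets: inherit
```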
---
## How to Create a Migration Checklist
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
## Solution
:::warning
This didn't export cleanly and needs to be reworked.
:::
### Pre-cutover Tasks
- [ ] Update EKS Task with ElasticSearch URL
- [ ] Ensure ALB ingress can support 2 active vanity domains
- [ ] Create ACM component
- [ ] Audit Security Groups
- [ ] Implement Database Seeding & Migration Strategy
- [ ] Create `release` pipeline
- [ ] Redeploy bastion on public and private subnets
- [ ] SSH access via bastion host
- [ ] Restore most recent snapshot from production to staging
- [ ] Attach additional scratch space to EKS tasks for web app
- [ ] Bastion Host user-data/ACL updates
- [ ] Investigate 502s
- [ ] Tune EKS web app and tasks
- [ ] Implement Lambda Log Parser for Cloudwatch Logs
- [ ] Update Spacelift to Trigger on Changes to TF Var files
- [ ] Rename SSM Parameters for AWS_* to LEGACY_AWS_*
- [ ] Add feature flag to disable scheduled tasks for Preview
- [ ] Provision Read Replica For Production Database (used by Redshift)
- [ ] Implement Scheduled Tasks
- [ ] Implement Pipeline to Create Preview Environments
- [ ] Decide on Cut Over Plan
- [ ] Deploy sidekiq workers (high priority)
- [ ] Containers should log to stdout
- [ ] Decide on Pipeline Strategy
- [ ] Implement `registry/terraform/eks-web-app` module
- [ ] Deploy acme to Production ([app.acme.com](http://app.acme.com))
- [ ] Integrate EKS Web App with Cloudwatch Logs
- [ ] Implement Vanity DNS with EKS Tasks
- [ ] Deploy [http://acme.com](http://acme.com) to Staging ([app.acme.com](http://app.acme.com))
- [ ] Deploy [http://acme.com](http://acme.com) as EKS Task with Spacelift
- [ ] Create `build` pipeline
- [ ] Reduce Scope of IAM Grants for GitHub Runners
- [ ] Create `deploy` pipeline
- [ ] ETL Postgres Databases to Bastion Instance
- [ ] Import Staging Database to All RDS Clusters for Testing
- [ ] Update Spacelift Config to Assume Role before Apply
- [ ] Implement Preview Environment Destroy Pipeline
- [ ] Increase GitHub Runners volume sizes
- [ ] Make sure all required backing services are provisioned on *acme accounts
- [ ] Setup [http://acme.com](http://acme.com) staging domain
- [ ] Move aurora-postgres from *acme accounts to *acme
- [ ] Setup [http://acme.com](http://acme.com) temp vanity domain
- [ ] Deploy bastion to corp account
- [ ] Update RDS Maintenance Window
- [ ] Provision ECS Bastion Instance with SSM Agent
- [ ] Decide How to Run Database Migrations
- [ ] Decide on Database Seeding Strategy
- [ ] Decide on deployment strategy for `repository`
- [ ] Decide on Log Group Architecture
- [ ] Implement `cloudposse/terraform-aws-code-deploy` module
- [ ] Add Instance Profile to GitHub Runners to Support Pushing to ECR
- [ ] Use Postgres terraform provider to manage users
- [ ] Deploy self-hosted GitHub Action Runners with Terraform
- [ ] Proposal: Implement GitOps-driven Continuous Delivery Pipeline for Microservices and Preview Environments
- [ ] Decide on RDS Maintenance Window
- [ ] Move remaining child modules from acme-com to infrastructure registry
### Cutover Plan
##### Rollback Plan
- [ ] Verify Backup Integrity and Recency
- [ ] Ensure ability to perform software rollbacks with automation (E.g. CI/CD)
- [ ] Prepare step-by-step plan to rollback to Heroku
##### External Availability Monitoring
- [ ] Enable “Real User Monitoring” (RUM). Establish a 1-2 week baseline before launch
- [ ] Enable external synthetic tests 2-4 weeks before launch to identify any potential stability problems (e.g. during deployments)
##### Exception Logging
- [ ] Ensure you have frontend/javascript exception logging enabled in Datadog
##### QA
- [ ] Test & Time Restore Process (x minutes)
- [ ] Audit errors/warnings from pg_restore to ensure they are acceptable
- [ ] Coordinate with QA team on acceptance testing
- [ ] Ensure robots.txt blocks crawlers on non-prod environments
##### Load Tests
- [ ] Replicate production workloads to ensure systems handle as expected
- [ ] Tune EKS Autoscaling
- [ ] Verify Alert Escalations
##### Reduce DNS TTLs
- [ ] Set all SOAs for TLDs (e.g. `acme.com`) to 60 seconds to mitigate effects of negative DNS caching
- [ ] Set TTLs to 60 seconds on branded domains (E.g. `acme.com`)
##### Security
- [ ] Audit Security Groups (EKS & RDS)
##### Schedule Cut Over
- [ ] Identify all relevant parties, stakeholders
- [ ] Communicate scope of migration and any expected downtime
##### Prepare Maintenance Page
- [ ] Provide a means to display a maintenance page (if necessary)
- [ ] Should be a static page (e.g. hosted on S3)
- [ ] Update copy as necessary to communicate the extent of the outage or downtime
##### Perform End-to-End Tests
- [ ] Verify deployments are working
- [ ] Verify software rollbacks are working
- [ ] Verify auto-scaling is working (pods and nodes) - or we can over-provision for go-live
- [ ] Verify TLS certificates are in working order (non-staging)
- [ ] Verify logs are flowing to cloudwatch and Datadog
- [ ] Verify TLD redirects are working
##### Perform Cut-Over
- [ ] [Choose time] Activate Maintenance Page
- [ ] Delegate [http://acme.com](http://acme.com) zone to new account
- [ ] Take Fresh Production Database Dump on Bastion
- [ ] Load Database Dump
- [ ] Update env vars in Production SSM to use prod settings from 1password
- [ ] Disable Heroku deployments
- [ ] Perform ACM flip for [http://acme.com](http://acme.com)
- [ ] Disable monitoring?
- [ ] Merge/Rebase main into acme-master
- [ ] Open PR for acme-master into main
- [ ] replace `acme-master` with `master` in github
- [ ] Merge the PR to master
- [ ] Merge the auto-generated PR in `infra`
- [ ] Confirm ALL deployments in spacelift
- [ ] Instruct QA team to commence testing on `app.acme.com`
- [ ] Flip CNAME for [http://acme.com](http://acme.com) to [http://acme.com](http://acme.com) in legacy account
- [ ] Manual TLS validation for [http://acme.com](http://acme.com) ACM
- [ ] Instruct QA team to commence testing on `app.acme.com`
- [ ] Enable monitoring
- [ ] Deactivate Maintenance Page (happens automatically by flipping DNS)
##### Post-Cut-over Checklist
- [ ] Verify ability to deploy
- [ ] Monitor customer support tickets
- [ ] re-enable scheduled EKS tasks for production
- [ ] Review exception logs
- [ ] Review Slow Query Logs
- [ ] Monitor non-200 status codes for anomalies
- [ ] Check Real End User Data
- [ ] Audit Errors/Warnings after loading
- [ ] Ensure `robots.txt` is permitting indexing in production (SEO)
### Post Cutover Tasks
- [ ] Ensure Idempotent Plan for Scheduled EKS Tasks
- [ ] Rename acme component to `acme-com`
- [ ] Configure auto-scaling
- [ ] Fix Bastion host to access Redis
- [ ] Tune Healthcheck Settings
- [ ] Automatically add `migrate` label
- [ ] Improve Automated PR Descriptions
- [ ] Clean up acme Artifacts In Spacelift (no longer needed after move to acme-com)
- [ ] Update Spacelift for acme
- [ ] Remove unneeded resources from data accounts
##### Someday
- [ ] Prepare `acme.com` vanity domain in `prod` and all DNS records (do not delegate NS)
---
## Tutorials(11)
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import DocCardList from '@theme/DocCardList';
These are some additional tutorials that will help you along with the software-delivery components.
---
## Component Development
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ReactPlayer from "react-player";
import TaskList from '@site/src/components/TaskList';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
import Admonition from '@theme/Admonition';
## The Problem
While all companies are unique, their infrastructure doesn't need to be. Well-built infrastructure consists of reusable building blocks that implement all the standard components like servers, clusters, load balancers, etc. Rather than building everything from scratch “the hard way”, there's an easier way. Using our “reference architecture” and its service catalog of all the essential pieces of infrastructure, everything a business needs can be composed together as an architecture using “Stack” configurations. Best of all, it's all native terraform.
## Our Solution
Cloud Posse defines components. Components are opinionated, self-contained building blocks of Infrastructure as Code
(IaC) that solve one specific problem or use case. A component is similar to a Terraform root module and defines a set of
resources for a deployment.
- Terraform components live under the `components/terraform` directory.
- Cloud Posse maintains a collection of public components with
[`terraform-aws-components`](https://github.com/cloudposse/terraform-aws-components)
- The best components are generic enough to be reused in any organization, but there's nothing wrong with writing
specialized components for your company.
- Detailed documentation for using components with Atmos can be found under
[atmos.tools Core Concepts](https://atmos.tools/core-concepts/components/)
:::info Pro tip!
We recommend that you always check first with Cloud Posse to see if we have an existing component before writing your
own. Sometimes we have work that has not yet been upstreamed to our public repository.
:::
## Prerequisites
To create a new component, this document assumes the developer meets the following requirements:
- [ ] Authentication to AWS, typically with Leapp
- [ ] The infrastructure repository cloned locally
- [ ] Geodesic up and running
- [ ] A basic understanding of Atmos
- [ ] An intermediate understanding of Terraform
## Create the component in Terraform
- Make a new directory in `components/terraform` with the name of the component
- Add the files that should typically be in all components:
```console
.
├── README.md
├── component.yaml # if vendoring from cloudposse
├── context.tf
├── main.tf
├── outputs.tf
├── providers.tf
├── remote-state.tf
├── variables.tf
└── versions.tf
```
Most of the files above should look familiar to Terraform developers; the exceptions are described below.
`context.tf`
Cloud Posse uses `context.tf` to consistently set metadata across all resources. The `context.tf` is always identical. Copy it exactly from [here](https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf).
```bash
curl -sL https://raw.githubusercontent.com/cloudposse/terraform-null-label/master/exports/context.tf -o context.tf
```
`providers.tf`
- For `providers.tf`, if we are only using AWS providers, copy it from [our common files (commonly referred to as mixins) folder](https://github.com/cloudposse/terraform-aws-components/blob/master/mixins/providers.depth-1.tf).
- If we are using Kubernetes, then you may need an additional providers file for Helm and Kubernetes providers. Also copy this file from [the mixins folder](https://github.com/cloudposse/terraform-aws-components/blob/master/mixins/provider-helm.tf).
`remote-state.tf`
By convention, we use this file when we want to pull Terraform Outputs from other components. See [the `remote-state` Module](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/remote-state).
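A minimal sketch of what `remote-state.tf` typically contains, using the `remote-state` module to read another component's outputs (the `vpc` component name, the pinned version, and the output name are illustrative):

```hcl
module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0" # illustrative; pin the version you actually use

  component = "vpc"

  context = module.this.context
}

# Elsewhere in the component, reference the outputs of the "vpc" component:
#   module.vpc.outputs.vpc_id
```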
`component.yaml`
The component manifest is used for vendoring the latest version from Cloud Posse. More on this later.
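As a sketch, a `component.yaml` vendoring manifest generally looks something like the following; check the Atmos vendoring documentation for the current schema, since the version and included paths shown here are illustrative:

```yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: foobar
  description: Source for vendoring the foobar component
spec:
  source:
    uri: github.com/cloudposse/terraform-aws-components.git//modules/foobar?ref={{ .Version }}
    version: 1.300.0 # illustrative version
    included_paths:
      - "**/*.tf"
      - "**/README.md"
```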
## Add Terraform Modules or Resources
- In your `main.tf`, or other file names of your choosing, add configurations of Terraform modules and/or resources
directly.
- When you use a Cloud Posse module, you should **pass the context metadata into the module**, like
[this](https://github.com/cloudposse/terraform-aws-components/blob/master/modules/s3-bucket/main.tf#L35). All Cloud
Posse modules have a `context` variable, to which you pass `module.this.context`.
- You can also use other external modules that are not provided by Cloud Posse.
- Use `module.this.tags` when you want to pass a list of tags to a resource or module not provided by Cloud Posse. Tags
are already included with `var.context` for any Cloud Posse module.
Cloud Posse has many open source modules, so [check here first](https://registry.terraform.io/namespaces/cloudposse) to avoid repeating existing effort.
- Handle the variable `module.this.enabled`, so that resources are not created when `var.enabled` is set to `false`. Cloud Posse modules do this automatically when passed `var.context`. When adding a resource or using a non-Cloud Posse module, gate creation with a count, for example `count = module.this.enabled ? 1 : 0`.
- Use
[`remote-state`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/latest/submodules/remote-state) to
read Terraform Output from other components.
[For example the `eks/alb-controller` component](https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks/alb-controller/remote-state.tf)
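Putting these conventions together, a sketch of a non-Cloud Posse resource inside a component might look like this (the S3 bucket is a hypothetical example; `module.this` is provided by `context.tf`):

```hcl
# Hypothetical example resource, not part of any upstream component
resource "aws_s3_bucket" "logs" {
  # Create the bucket only when the component is enabled
  count = module.this.enabled ? 1 : 0

  # module.this.id is the consistent, null-label-generated name
  bucket = "${module.this.id}-logs"

  # Reuse the tags computed from the context metadata
  tags = module.this.tags
}
```

Cloud Posse modules handle `enabled` and tagging internally when passed `module.this.context`, so this pattern is only needed for plain resources and third-party modules.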
## Configure Stacks
### Directory Structure
- Put all Stack configurations in the `stacks` directory
```
stacks/
├── catalog/
├── mixins/
├── orgs/
└── workflows/
```
- Default configurations for a component live in the `catalog` directory, and configurations for deployed accounts live
in the `orgs` directory.
- All files in the `catalog` and `orgs` directories are **stack configuration files**. Read on for more information.
### Define defaults for a component in the `catalog` directory
- The Stack Catalog is used to define a component's default configuration for a specific organization. Define variables
that would not be shared in an Open Source setting here.
- By convention, name the configuration the same as your component. For example, if your component is
`components/terraform/foobar` then the file would be named `stacks/catalog/foobar.yaml`
```yaml
components:
  terraform:
    foobar:
```
- Above, `foobar` is the name of the component.
- Pass variables into Terraform like this:
```yaml
components:
  terraform:
    foobar:
      vars:
        sample_variable_present_in_variables_tf_of_component: "hello-world"
```
### Component Types
- Atmos supports component types with the `metadata.type` parameter
- There are two types of components:
  - `real` — a "concrete" component instance
  - `abstract` — a component configuration that cannot be instantiated directly. The concept is borrowed from
    [abstract base classes](https://en.wikipedia.org/wiki/Abstract_type) in Object-Oriented Programming.
- By default, all components are `real`
- Define an `abstract` component by setting `metadata.type` to `abstract`. See the following example
```yaml
components:
  terraform:
    foobar/defaults: # We can name this anything
      metadata:
        type: abstract # This is what makes the component `abstract`
        component: foobar # This needs to match exactly an existing component name
      vars:
        tags:
          team: devops
```
For more details, see [atmos.tools](https://atmos.tools/core-concepts/components/inheritance)
- With an `abstract` component default, we can inherit default settings for any number of derived components. For
example:
```yaml
components:
  terraform:
    foobar: # Since this component name matches exactly, we do not need to add `metadata.component`
      metadata:
        type: real # This is the default value and is only added for visibility
        inherits:
          - foobar/defaults # The name of the `abstract` component
      vars:
        sample_variable_present_in_variables_tf_of_component: "hello-world"
```
Now `foobar` uses the same configuration as `foobar/defaults` but may define additional variables.
## Add Component Imports
- In a stack configuration file, we can import other stack configuration files with `import`
- When a file is imported, the YAML from that file is deep merged on top of earlier imports. This is the same idea as
merging two dictionaries together.
- Stack configurations can import each other as needed, and there can be multiple layers or different hierarchies of
configurations
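For illustration, suppose a catalog file sets a nested tag and the importing stack file adds another variable for the same component (the contents here are hypothetical). Deep merging preserves nested keys from both files:

```yaml
# stacks/catalog/foobar.yaml (imported first)
components:
  terraform:
    foobar:
      vars:
        tags:
          team: devops

# The importing stack file overlays additional values
components:
  terraform:
    foobar:
      vars:
        stage: sandbox

# Deep-merged result: nested keys from both files are preserved
components:
  terraform:
    foobar:
      vars:
        stage: sandbox
        tags:
          team: devops
```

When the same key is set in multiple files, the later import wins, which is what makes catalog defaults overridable per environment.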
## Deploy Components with a Stack
- In the directory corresponding to the environment you want to deploy to, for example `stacks/orgs/acme/plat/sandbox/us-east-1/`, add a new file (or add to an existing file) that imports your component from the catalog.
```yaml
import:
  # These two imports add default variables
  - orgs/acme/plat/sandbox/_defaults
  - mixins/region/us-east-1
  # This imports a real component, which will deploy even if we do not
  # inherit from it or override any values.
  - catalog/foobar
```
- In the above example, we have imported the `foobar` catalog configuration into the `plat-use1-sandbox` environment via a new YAML file, which can have any name, for example `foobar.yaml`:
```yaml
import:
  - orgs/acme/plat/sandbox/_defaults
  - mixins/region/us-east-1
  - catalog/foobar

components:
  terraform:
    foobar:
      vars:
        sample_variable_present_in_variables_tf_of_component: "env-specific-config"
```
## Deploy
Now that the component is [(1) defined in Terraform](#create-the-component-in-terraform),
[(2) created in Atmos](#catalog-stacks), and [(3) imported in the target Stack](#deploy-components-with-a-stack),
deploy the component with Atmos.
```bash
atmos terraform apply foobar -s plat-use1-sandbox
```
## Vendoring
Atmos supports component vendoring. We use vendor to pull a specific version of the component from
[the upstream library](https://github.com/cloudposse/terraform-aws-components).
When vendoring a component,
1. Create a branch of your repository
2. Add a `component.yaml` file to the components directory
```yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
spec:
  source:
    uri: github.com/cloudposse/terraform-aws-components.git//modules/foobar?ref={{ .Version }}
    version: 1.160.0
  included_paths:
    - "**/**"
  excluded_paths: []
```
3. Fill out the `component.yaml` with the latest version from
[the upstream library](https://github.com/cloudposse/terraform-aws-components)
4. Run the vendor commands: `atmos vendor pull --component foobar`
5. Create a Pull Request to check for changes against any existing component. Keep in mind that vendoring will overwrite any custom changes made to files that also exist upstream.
## Next Steps
At this point, your component is complete in code, but there is still more to do!
1. Run precommit Terraform docs and linting against the new component
2. Add your new component to your GitOps automation tooling, such as Spacelift
3. Configure `CODEOWNERS` for the new component, if necessary
4. Documentation!
# References
- [Cloud Posse's Library of Terraform Components](https://github.com/cloudposse/terraform-aws-components)
- [Cloud Posse's Library of Terraform Modules](https://github.com/orgs/cloudposse/repositories?q=terraform-aws&type=all&language=&sort=)
- [Atmos Core Concepts](https://atmos.tools/core-concepts/components/#types-of-components)
---
## Exercises
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import Step from '@site/src/components/Step';
import StepNumber from '@site/src/components/StepNumber';
Exercises are a great way to get hands-on with the code. None of these are necessary, but if you would like to take the
initiative, feel free to complete any or all of these exercises.
If you have questions or would like Cloud Posse to review your solution, please reach out!
## Exercise: Add a component from Cloud Posse to your library
This exercise is intended to demonstrate how to pull an existing component that is already upstream and supported by
Cloud Posse into your environment. For the sake of this exercise, let's create an SQS Queue component. This component is
supported by Cloud Posse, find and deploy that component into your sandbox environment.
When this component is deployed, you should have a single new Spacelift stack (if you're using Spacelift) and have
created a new SQS queue in your sandbox environment. This queue should be logically named following Cloud Posse naming
standards, and that name should be easily retrievable with Terraform state.
## Exercise: Create a new custom component not supported by Cloud Posse
This exercise is intended to practice creating a new component that is not supported by Cloud Posse. Cloud Posse does
not currently have a component for a static website. For this exercise, use the
[`cloudposse/cloudfront-s3-cdn/aws` module](https://github.com/cloudposse/terraform-aws-cloudfront-cdn) to deploy a
basic static website to your sandbox environment.
When this exercise is complete, you should have a new component, a new component catalog, and an import into sandbox.
This component should follow Cloud Posse convention outlined above and be maintained with GitOps (if applicable).
---
## Frequently Asked Questions (Component Development)
### When should we write a new component?
Developing a new component may be necessary when:
1. Existing components do not provide the required functionality and the existing component cannot be easily extended.
2. [Cloud Posse's Library of Terraform Components](https://github.com/cloudposse/terraform-aws-components) does not already provide the component. We recommend asking Cloud Posse in your shared Slack channel whether we happen to have the component but have not yet upstreamed it.
3. You have some existing Terraform code that you would like to import into the Atmos framework.
### Can components include other components?
Components should not include other components, but they can instead refer to another component's Terraform Output with `remote-state`. Components can, however, refer to any Terraform module.
### What is the purpose of `context.tf`?
Cloud Posse uses [`context.tf`](/resources/legacy/learning-resources#the-contexttf-pattern) to
consistently set metadata across all modules and resources.
### Where should the `context.tf` file be copied from?
The `context.tf` file should be copied exactly from
https://raw.githubusercontent.com/cloudposse/terraform-null-label/master/exports/context.tf.
### How can Terraform modules or resources be added to a component?
Terraform modules or resources can be added directly to the `main.tf` file or other files of your choosing.
**Please be advised** that modifying any of the vendored components may break the ability to pull down upstream changes.
If you modify an existing component, preserve vendoring capabilities by naming your files something like `customizations.extended.tf`, so that they won't collide with upstream files (e.g. using `.extended.tf` as the extension).
Similarly, if needing to modify core functionality, consider using
[Terraform overrides](https://developer.hashicorp.com/terraform/language/files/override).
### What is an abstract component?
An [abstract component](https://atmos.tools/core-concepts/components#types-of-components) is a component configuration
that cannot be instantiated directly.
### Does Cloud Posse have automated testing for components?
We do not have automated testing for our components at this time. However, GitOps pipelines with Spacelift can
automate component deployment into lower environments so that we can validate any code changes in each stage before
reaching production.
Also, we do have automated tests for all of the Terraform modules that the components are built on ([rds-cluster example](https://github.com/cloudposse/terraform-aws-rds-cluster/tree/master/test)) and static code linting for Terraform and Atmos (with a local `.github/workflows/pre-commit.yml`).
---
## Count vs For Each
import Intro from '@site/src/components/Intro';
import Note from '@site/src/components/Note';
import PillBox from '@site/src/components/PillBox';
Terraform in Depth
This article is part of our "Terraform in Depth" series, where we dive into
the details of Terraform that require a deeper understanding and a longer
explanation than is needed for our other Terraform articles.
When you are dynamically creating multiple instances of a resource in Terraform,
you have two options: `count` and `for_each`. Both of these options allow you to
create multiple instances of a resource, but they have different use cases
and different implications for your Terraform code.

There are two key considerations when using `count` or `for_each`:
1. **Addressing**: Terraform must be able to determine the "address" of each
resource instance during the planning phase. This is discussed in the
next section.
2. **Stability**: When using `count`, resources whose configuration has not
changed can nevertheless be destroyed and recreated by Terraform because
they have moved to a new address. Using `for_each` usually avoids this issue.
This is discussed further below.
Use `for_each` when possible, and `count` when you can't use `for_each`.
### Background: Terraform Resource Addressing
During the planning phase, Terraform must be able to
determine the ["address"](https://developer.hashicorp.com/terraform/cli/state/resource-addressing)
of each resource instance. The address of a resource
instance is a unique identifier that Terraform uses to track the state of
the resource. The address is a combination of the resource type, the
resource name, and, when that is not unique due to `count` or `for_each`,
the index or key of the resource instance, possibly along with other
information.
For example:
```hcl
locals {
  availability_zone_ids = ["usw2a", "usw2b"]
}

resource "aws_eip" "pub" {
  count = length(local.availability_zone_ids)
}
```
will generate resources with addresses like `aws_eip.pub[0]` and `aws_eip.pub[1]`.
```hcl
resource "aws_eip" "pub" {
  for_each = toset(local.availability_zone_ids)
}
```
will generate resources with addresses like `aws_eip.pub["usw2a"]` and
`aws_eip.pub["usw2b"]`. The values supplied to `for_each` (either the strings
in a set of strings, or the keys of a map) are used as the keys in the
addresses.
:::important
Although documentation and commentary often refer to the requirement that
Terraform must know at plan time how many instances of a resource to create,
it is more accurate to say that Terraform must know at plan time the address
of each instance of a resource under its management. This is because the
address is used as the key to the data structure that stores the state of
the resource, and Terraform must be able to access that data during the plan
phase to compare it to the desired state of the resource and compute the
necessary changes.
If some address cannot be determined at plan time, `terraform plan` will
fail with an error. This issue is discussed in greater detail in [Error: Values Cannot Be Determined Until Apply](/learn/component-development/terraform-in-depth/terraform-unknown-at-plan-time).
The main reason not to use `for_each` is that the values supplied to it
would not be known until apply time.
:::
### Count is Easier to Determine, but Less Stable
The `count` option operates on simple integers: you specify the number of
resource instances you want to create (`n`), and Terraform will create that
many (0 to `n-1`).
Because you can often know at plan time how many instances of a resource you
will need without knowing exact details of each instance, `count` is almost
always easier to use than `for_each`. However, `count` is less stable than
`for_each`, which makes it less desirable.
#### Use `count` for Simple Optional Cases
When you have a simple case where you know you want to create zero or one
instance of a resource, particularly as the result of a boolean input variable,
`count` is the best choice. For example:
```hcl
resource "aws_instance" "bastion" {
  count = var.bastion_enabled ? 1 : 0
  # ...
}
```
None of the drawbacks of `count` versus `for_each` apply when you are never
creating more than one instance, so the advantages of `count` versus `for_each`
favor using `count` in this case. This is, in fact, the most common usage of
`count` in Cloud Posse's Terraform modules.
Similarly, to avoid using two variables for an optional resource, for
example `vpc_enabled` and `vpc_ipv4_cidr_block`, you can use a single
variable and toggle the option based on whether or not it is supplied.
:::caution
Do not condition the creation of a resource on the _value_ of a variable.
Instead, place the value in a list and condition the creation of the resource
on the length of the list. This is discussed in greater detail below.
:::
It can be tempting, and indeed early Cloud Posse modules did this, to use
the value of a variable to determine whether or not to create a resource.
```hcl
# DO NOT DO THIS
variable "vpc_ipv4_cidr_block" {
  type    = string
  default = null
}

resource "aws_vpc" "vpc" {
  # This fails when var.vpc_ipv4_cidr_block is computed during the apply phase
  count = var.vpc_ipv4_cidr_block == null ? 0 : 1

  cidr_block = var.vpc_ipv4_cidr_block
}
```
The problem with this approach is that it requires the value of `var.vpc_ipv4_cidr_block`
to be known at plan time, which is frequently not the case. Often, the value
supplied will be generated or computed during the apply phase, and this
whole construct fails in this scenario.
The recommended way to toggle an option by supplying a value is to supply
the value inside a list, and toggle the option based on the length of the list.
```hcl
variable "vpc_ipv4_cidr_block" {
  type    = list(string)
  default = []

  # Accepting the value as a list can lead a casual user to think that
  # they can supply multiple values, and that each value will be used
  # somehow, perhaps to create multiple VPCs. To prevent confusion or surprise,
  # add a validation rule. Without this kind of validation rule, the user
  # will not get any feedback that their additional list items are being ignored.
  validation {
    condition     = length(var.vpc_ipv4_cidr_block) < 2
    error_message = <<-EOT
      The list should contain at most 1 CIDR block.
      If the list is empty, no VPC will be created.
      EOT
  }
}

resource "aws_vpc" "vpc" {
  count = length(var.vpc_ipv4_cidr_block)

  cidr_block = var.vpc_ipv4_cidr_block[count.index]
}
```
This allows the user to choose the option without the supplied value itself having to be known at plan time (only the length of the list must be).
#### The Instability of `count`
The problem with `count` is that when you use it with a list of
configuration values, the resource instances are configured according to their
index in the list. If you add or remove an item from the list, the index of
other items in the list will change, and so even though a resource configuration
has not changed in any fundamental way, the configuration will now
apply to a different instance of the resource, effectively causing Terraform
to destroy it in one index and recreate it in another.
For example, consider the case where you want to create a set of IAM Users.
We will illustrate the problem with `count` here, and then show how to use
`for_each` to avoid the problem in the next section.
To create a reusable module that creates IAM users, you might do something
like this:
```hcl
variable "users" {
  type = list(string)
}

resource "aws_iam_user" "example" {
  count = length(var.users)
  name  = var.users[count.index]
}

output "users" {
  value = aws_iam_user.example
}
```
Say you first deploy this configuration with the following input:
```hcl
module "users" {
  source = "./iam_users"
  users  = ["Dick", "Harry"]
}

output "ids" {
  value = { for v in module.users.users : v.name => v.id }
}
```
You will get a plan like this (many elements omitted):
```hcl
# module.users.aws_iam_user.example[0] will be created
+ resource "aws_iam_user" "example" {
    + name = "Dick"
  }

# module.users.aws_iam_user.example[1] will be created
+ resource "aws_iam_user" "example" {
    + name = "Harry"
  }

Changes to Outputs:
  + ids = {
      + Dick  = "Dick"
      + Harry = "Harry"
    }
```
This is all fine, until you realize you left out "Tom". You revise your root
module like this:
```hcl
module "users" {
  users = ["Tom", "Dick", "Harry"]
}
```
You will get a plan like this (many elements omitted):
```hcl
# module.users.aws_iam_user.example[0] will be updated in-place
~ resource "aws_iam_user" "example" {
      id   = "Dick"
    ~ name = "Dick" -> "Tom"
      tags = {}
      # (5 unchanged attributes hidden)
  }

# module.users.aws_iam_user.example[1] will be updated in-place
~ resource "aws_iam_user" "example" {
      id   = "Harry"
    ~ name = "Harry" -> "Dick"
      tags = {}
      # (5 unchanged attributes hidden)
  }

# module.users.aws_iam_user.example[2] will be created
+ resource "aws_iam_user" "example" {
    + id   = (known after apply)
    + name = "Harry"
  }

Plan: 1 to add, 2 to change, 0 to destroy.

Changes to Outputs:
  ~ ids = {
      ~ Dick  = "Dick" -> "Harry"
      ~ Harry = "Harry" -> (known after apply)
      + Tom   = "Dick"
    }
```
Note that because "Tom" was inserted at the beginning of the list, all the
other elements moved to new addresses, so all 3 users are going to be modified.
In most cases, the existing 2 resources would be destroyed and 3 new
resources would be created. That would be bad enough if, for example, the
resources were NAT gateways or VPCs.
In this particular case, it is even worse! In this case (and for some other
resources), existing resources will be updated in place, potentially causing
serious problems when those resources are referenced by ID. If you are lucky,
you will get an error message like this:
```
Error: updating IAM User (Harry): EntityAlreadyExists: User with name Dick already exists.
```
If you are unlucky (or if you run `terraform apply` 3 times), the change
will go through, and user "Dick" will be renamed user "Tom", meaning that
whatever access Dick had, Tom now gets. Likewise, user "Harry" is renamed
"Dick", so Dick inherits Harry's access, and Harry gets the newly created user.
For example, Tom can now log in with user name "Tom" using Dick's password,
while Harry will be locked out as a new user. This nightmare scenario has a
lot to do with peculiarities of the implementation of IAM principals, but it
gives you an idea of what can happen when you use `count` with a list of
resource configurations.
Note: The above behavior was actually observed using Terraform v1.5.7 and
AWS provider v5.38.0. Hopefully something less dangerous and confusing
happens with the current versions of the tools when you try this yourself,
but nevertheless be prepared for behavior like this.
:::note
All of this instability is a direct consequence of a resource configuration
being addressed by its position in a list of configurations. When items are
added to or deleted from the list, or when the list is provided in a random order
(as used to happen with many AWS data sources), resources may be needlessly
affected. The answer to this is `for_each`, but that is not without its own
limitations.
:::
### For Each is Stable, But Not Always Feasible to Use
#### The Stability of `for_each`
In large part to address the instability of `count`, Terraform introduced
the ability to use `for_each` to create multiple instances of a resource.
Where `count` takes a single integer, `for_each` takes a set of strings,
either explicitly or as the keys of a map. When you use `for_each`, the
instance addresses are the string values of the set of strings passed to
`for_each`.
We can rewrite the IAM User example using `for_each` like this:
```hcl
variable "users" {
  type = set(string)
}

resource "aws_iam_user" "example" {
  for_each = var.users
  name     = each.key
}
```
Now, if you deploy the code with Dick and Harry, and then add Tom to the
list of users, the plan will look like this (many elements omitted):
```hcl
# module.users.aws_iam_user.example["Tom"] will be created
+ resource "aws_iam_user" "example" {
    + name = "Tom"
  }

Changes to Outputs:
  ~ ids = {
      + Tom = (known after apply)
        # (2 unchanged attributes hidden)
    }
```
This is what we want! Nothing has changed in the code regarding Dick or
Harry, so nothing has changed in the infrastructure regarding Dick or Harry.
#### The Problems with `for_each`
If `for_each` is so much better, why is it not used by everybody all the time?
The answer is that the keys supplied to `for_each` must be known at plan
time. It used to be the case that data sources were not read during the plan
phase, so something like:
```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "zone" {
  for_each = toset(data.aws_availability_zones.available.names)
}
```
would fail because the zone names were not available during the planning
phase. HashiCorp has worked to make more data available during planning to
mitigate such problems, with one major improvement being that data sources
are now read during planning. Thus you will see a lot of old code using
`count` that could now be rewritten to use `for_each`, provided it requires
a recent version of Terraform.
A still current issue would be trying to create a dynamic number of compute
instances and assign IP addresses to them. The most obvious way to do this
would be to use `for_each` with a map of instance IDs to IP addresses, but
the instance IDs are not known until the instances are created, so that
would fail. In some cases, possibly such as this one, where the
configuration for all the resources is the same, you can generate keys using
`range()` so that the association between a compute instance and an IP
address remains stable and is not dependent on the order in which instance
IDs are returned by the cloud provider.
In other cases, such as when the configurations vary, using proxy keys like
this has all the same problems as `count`, in which case using `count` is
better because it is simpler and all of the issues with `count` are already
understood.
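The `range()` approach described above can be sketched like this (the variable and resource names are hypothetical):

```hcl
variable "instance_count" {
  type = number
}

resource "aws_eip" "instance_ip" {
  # Generate stable string keys "0", "1", ... at plan time, so adding
  # instances never moves existing ones to new addresses. This only works
  # when the configuration of every instance is identical.
  for_each = toset([for i in range(var.instance_count) : tostring(i)])
}
```

Unlike `count`, removing a key from the middle of the generated set does not shift the remaining instances, but this only helps while the per-instance configuration stays uniform.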
:::note
Another limitation, though not frequently encountered, is that "sensitive"
values, such as sensitive input variables, sensitive outputs, or sensitive
resource attributes, cannot be used as arguments to `for_each`. As stated
previously, the value supplied to `for_each` is used as part of the resource
address, and as such, it will always be disclosed in UI output, which is why
sensitive values are not allowed. Attempts to use sensitive values as
`for_each` arguments will result in an error.
:::
Ideally, as we saw with IAM users in the examples above, the user would
supply static keys in the initial configuration, and then they would always
be known and usable in `for_each`, while allowing the user to add or remove
instances without affecting the others. The main obstacle to this is when
the user does not know how many instances they will need. For example, if
they need one for each availability zone, the number they need will depend
on the region they are deploying to, and they may want to adjust the
configuration for each region in that way.
### Conclusion
In conclusion, use `for_each` when you can, and `count` when you must.
Finding suitable keys to use with `for_each` can be a challenge, but it is
often worth the effort to avoid the instability of `count`. Module authors
should be sensitive to the needs of their users and provide `for_each` where
possible, but consider using `count` where it seems likely the user may have
trouble providing suitable keys.
One possible solution for module authors (though generally not advisable due
to the complexity it introduces) is to accept a list of objects that have an
optional `key` attribute, and use that attribute as the key for `for_each` if
it is present, and use `count` if it is not. This presents a consistent
interface to the user, and allows them to use `for_each` when they can, and
`count` when they must. It does introduce complexity and new failure modes,
such as when some elements have keys and others do not, or when duplicate
keys are present, or again if the keys are not known at plan time, so this
particular solution should be approached with caution. Weigh the
consequences of the complexity against the benefits of the stability of
`for_each`. For many kinds of resources, having them be destroyed and
recreated is of little practical consequence, so the instability of `count`
is not worth the added complexity and potential for failure that `for_each`
introduces.
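A minimal sketch of that hybrid interface, under the stated caveats (variable and resource names are hypothetical; `optional()` object attributes require Terraform 1.3+):

```hcl
variable "vpc_id" {
  type = string
}

variable "subnets" {
  type = list(object({
    key        = optional(string)
    cidr_block = string
  }))
}

locals {
  # Fall back to count unless every element supplies a stable key
  use_keys = alltrue([for s in var.subnets : s.key != null])
}

resource "aws_subnet" "keyed" {
  for_each = local.use_keys ? { for s in var.subnets : s.key => s } : {}

  vpc_id     = var.vpc_id
  cidr_block = each.value.cidr_block
}

resource "aws_subnet" "indexed" {
  count = local.use_keys ? 0 : length(var.subnets)

  vpc_id     = var.vpc_id
  cidr_block = var.subnets[count.index].cidr_block
}
```

Note that a module using this pattern would need to merge `aws_subnet.keyed` and `aws_subnet.indexed` in its outputs so callers see a single collection, and that switching between the two modes moves every resource to a new address, which is one of the failure modes mentioned above.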
### Further Reading
- [Error: Values Cannot Be Determined Until Apply](/learn/component-development/terraform-in-depth/terraform-unknown-at-plan-time).
- [Terraform Best Practices](/best-practices/terraform)
---
## Terraform in Depth
import Intro from '@site/src/components/Intro'
import DocCardList from '@theme/DocCardList'
In this section, we dive into advanced details of Terraform that require a deeper understanding and a longer explanation than is needed for the other articles.
We provide a lot of information about how to use Terraform and write Terraform code elsewhere on this site:
---
## Error: Values Cannot Be Determined Until Apply
import Intro from '@site/src/components/Intro';
import Note from '@site/src/components/Note';
import PillBox from '@site/src/components/PillBox';
Terraform in Depth
This article is part of our [Terraform in Depth](/learn/component-development/terraform-in-depth) series, where we dive into advanced details of Terraform that require a deeper understanding and a longer explanation than is needed for our other Terraform articles.
## Terraform Errors When Planning: Values Cannot Be Determined Until Apply
One of the more frustrating errors you can encounter when using Terraform is
an error message referring to a value "that cannot be determined until
apply". These are often referred to as "unknown at plan time" errors, in
part because they show up when running `terraform plan`.
```
Error: Invalid count argument
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
```
These errors always occur in the context of creating a variable number of
resources, and they can be confusing, because it can appear to the user that
the value in question should be known at plan time. In fact, the same code
will sometimes work and sometimes not, depending on the specific details of
how it is used and the state of the infrastructure. This is a particular
problem for authors of reusable Terraform modules, because they need to be
aware of potential problems that may occur when their module is used in
contexts they did not anticipate.
In this article, we will explain what these errors mean, why they occur, and
how to avoid them.
### The Two-Phase Execution Model
To begin with, Terraform implements a [two-phase execution model](https://developer.hashicorp.com/terraform/learn/core-workflow).
1. The first phase is the "plan" phase, where Terraform determines what changes
are necessary to achieve the desired state.
2. The second phase is the "apply" phase, where Terraform makes the changes
determined to be required during the plan phase.
The rationale for this two-phase model is to allow Terraform to show you what
changes it will make before it makes them. Terraform is designed so that it
makes no changes during the plan phase, making it always safe to run `terraform plan`.
Then, during the apply phase, it will only make the changes you approved
from the plan phase.
The error message above only occurs during the plan phase, and it means that
some value that Terraform needs to know in order to plan the changes is not
known while executing the plan phase. It implies that the value is properly
defined, but that it depends on some value that will be generated during the
apply phase.
### When Does an Unknown Value Cause a Plan to Fail?
Terraform always requires you to approve any changes before it makes them,
but it does not always show you the exact details of the changes it will make.
#### Unknown Individual Attribute Values are Allowed in a Plan
It is impractical for Terraform to compute every detail of the changes it will
make during the apply phase, and therefore some details can be declared
"unknown" at plan time but still allow the plan to succeed and be approved.
In general, the value of a resource attribute is allowed to be unknown, and
the plan will show that the attribute will change, but show that the new value
is unknown, or, more specifically: `(known after apply)`.
For example, consider the case where you want to create a new compute instance
and then add
it as a target to a load balancer. Terraform will not know the specific IP
address of the compute instance until it is created, so it cannot show you
the exact details of how it will be added as a target to the load balancer.
Instead, it will show you that it will add a target to the load balancer,
and you will have to approve that change without knowing the exact details.
(The alternative would be to require that you create the compute
instance in
one configuration, obtain its IP address from that configuration, and then
use that IP address in a second configuration to add it as a target to the
load balancer. If you want that level of control, you can set up your
configurations to work that way, but in most cases, people prefer to manage
as much as possible with a single configuration.)
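To make the scenario concrete, here is a minimal sketch (the resource names, placeholder AMI ID, and assumed VPC are illustrative, not from the original text):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

resource "aws_lb_target_group" "web" {
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = aws_vpc.main.id # assumes a VPC defined elsewhere
}

resource "aws_lb_target_group_attachment" "web" {
  target_group_arn = aws_lb_target_group.web.arn
  # The instance's IP is unknown at plan time; the plan shows it as
  # "(known after apply)" but still succeeds, because individual
  # attribute values are allowed to be unknown.
  target_id = aws_instance.web.private_ip
}
```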
#### Resource Creation and Destruction Always Require Explicit Approval
Terraform always requires explicit approval to create or destroy a resource.
It also allows you to create a number of resources of the same type, using a
`count` or `for_each` meta-argument to determine how many to create. For
example:
```hcl
resource "aws_instance" "bastion" {
  count = length(var.availability_zones)
  # ...
}
```
This feature is also used in practice to create a resource conditionally:
```hcl
resource "aws_instance" "bastion" {
  count = var.bastion_enabled ? 1 : 0
  # ...
}
```
Terraform needs to know how many resources to create during the plan phase or
the plan will fail with an error message like the one at the beginning of
this article.
### What is Known and Unknown at Plan Time, Part 1: The Obvious
In the planning phase, Terraform knows the current state of the infrastructure,
and some information provided to it via variables and data sources, but it does
not know the future state of the infrastructure. (The exact amount of data
available at plan time, particularly from data sources, and the freshness of
that data, has varied over time as Terraform has matured, with the general
direction being that more data is available at plan time and less data remains
unknown.) For example, you can tell Terraform to create a new compute instance,
and you can tell it what IP address to assign to that instance, in which case
Terraform will know the IP of the instance at plan time. Alternatively, you can
decline to supply an IP address and let the cloud provider assign one, but then
Terraform does not know the IP address at plan time. In either case, Terraform
will not know the specific instance ID until it creates the instance.
It is important to note that:
- In terms of known versus unknown, `null` was not a special value
[prior to Terraform version 1.6](https://github.com/hashicorp/terraform/issues/26755#issuecomment-1771450399).
If you create a resource, its ID is not known at plan time. Even if you know that
successful creation of the resource will result in a non-null ID, Terraform
may not, and a test like `id == null` may fail as being unknown at plan
time.
- Because this behavior is changing in Terraform, but some people are still
using older versions or switching to open source forks due to licensing
issues, it is important for authors of reusable modules to be aware that
this limitation may exist for many users but be invisible to the module
author because they are testing their code with a newer version of Terraform.
- Passing values into or out of a module usually does not affect whether
Terraform knows the value is `null` or not at plan time. For example, if you had a
module that would create an EBS volume when an instance is created, you
might have a snippet like this:
```hcl
resource "aws_ebs_volume" "example" {
  count = module.ec2_instance.bastion.id != null ? 1 : 0
  # ...
}
```
In this case, Terraform will not know the value of
`module.ec2_instance.bastion.id` at plan time. Passing the instance ID into
a module does not change that. (You could, in Terraform version 1.6,
declare
the input non-nullable (`nullable = false`), and then Terraform would know
that the value is not `null` at plan time, but if you did that, then you would
never get the `id == null` condition and would always create the EBS volume, so
that is not a real solution if you want to make the creation of the EBS
volume conditional.)
It is on HashiCorp's roadmap to allow resource providers
to declare attributes to be null or non-null at plan time, but you should
not rely on that. Rather, you should be aware that your testing may not turn
up this issue because you are using later versions of Terraform and
providers, but you should still guard against it.
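As a hedged illustration of the `nullable` behavior discussed above (the variable name is hypothetical):

```hcl
variable "instance_id" {
  type     = string
  nullable = false
  # With nullable = false, Terraform knows at plan time that the value is
  # not null, even when the value itself is unknown. That defeats an
  # `instance_id == null` condition, so it is not a fix for conditional
  # resource creation.
}
```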
:::tip
Using the _value_ of a module input to conditionally create resources is a
common source of issues in a reusable module. When the
module is called with a configured value, as can be common when testing, the
module works fine, but if the value is not known at plan time, which is
common in actual use when the value is computed from other resources,
the module will fail. Using a `random_integer` resource with `timestamp()` in
its `keepers` map can help you simulate a value that is not known at plan
time during testing and catch these kinds of issues ahead of time.
Best practice is to either use a separate boolean input (e.g.
`ebs_volume_enabled`) to condition the resource creation, or to take the
optional value as an element of a list and use the length of the list to
determine whether to create the resource. See [Use feature flags or lists to
control optional behaviors](/best-practices/terraform#use-feature-flags-to-enabledisable-functionality)
for more information.
:::
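A minimal sketch of both recommended patterns (variable and resource names are illustrative):

```hcl
# Option 1: a separate boolean feature flag, known at plan time.
variable "ebs_volume_enabled" {
  type    = bool
  default = false
}

resource "aws_ebs_volume" "flagged" {
  count             = var.ebs_volume_enabled ? 1 : 0
  availability_zone = "us-west-2a"
  size              = 8
}

# Option 2: take the optional value as a list; the length of the list is
# known at plan time even when the element values are not.
variable "ebs_volume_sizes" {
  type    = list(number)
  default = []
}

resource "aws_ebs_volume" "listed" {
  count             = length(var.ebs_volume_sizes)
  availability_zone = "us-west-2a"
  size              = var.ebs_volume_sizes[count.index]
}
```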
### What is Known and Unknown at Plan Time, Part 2: The Less Obvious
#### The State of the Infrastructure Is Known at Plan Time
Hopefully it is obvious to see why Terraform would complain about not knowing
how many null resources to create in the following example:
```hcl
resource "random_integer" "example" {
  min = 1
  max = 2
}

resource "null_resource" "example" {
  count = random_integer.example.result
}
```
(Here we use `random_integer` to represent a computed value unknown at
plan time, and `null_resource` to represent a dependent resource, so
that you can easily try these examples on your own.)
Terraform does not know if it should create one or two null resources until
it knows the value of `random_integer.example.result`. However, if you run
```shell
terraform apply -target=random_integer.example -auto-approve
```
so that the `random_integer.example` resource is created, it becomes
part of the infrastructure, and Terraform has a known value for
`random_integer.example.result` at plan time. Therefore, if you run
`terraform apply` after that, it will succeed.
This is both the good news and the bad news:
- The good news is that Terraform will not complain about theoretically
unknown values in most cases where it can figure out the value during the
plan phase. (Not in all cases, though, [as we will see below](#the-results-are-the-same-but-the-path-to-get-there-is-different).)
This means that if you use a module that uses an input variable to
determine how many resources to create, and you provide a value for that
input variable that is known at plan time, then Terraform will not complain.
- The bad news is that Terraform will not complain about theoretically unknown
values in most cases where it can figure out the value during the plan phase.
This means that you can write Terraform code that will work in some cases
and not in others, and you may not realize it because it works when you
try it.
#### Known Values Can Be Transformed Into Unknown Values in Non-Obvious Ways
##### The Results are the Same, but the Path to Get There is Different
The following example is a bit more subtle. Using the same `random_integer`
resource as above, say we want to choose from 1 of 2 configurations,
rather than directly affect the number of resources created. For
example, we want to configure subnet IDs based on whether the
resource is in public or private subnets. Consider the
following code:
```hcl
locals {
  visibility = random_integer.example.result == 1 ? "public" : "private"

  config_map = {
    public = {
      subnets = ["subnet-0abcd1234efgh5678", "subnet-1abcd1234efgh5679"]
    }
    private = {
      subnets = ["subnet-2abcd1234efgh5678", "subnet-3abcd1234efgh5679"]
    }
  }
}

resource "null_resource" "example" {
  count = length(local.config_map[local.visibility].subnets)
}
```
This will fail, even though the value of `random_integer.example.result` is
irrelevant to the number of resources created. Regardless of the value
of `random_integer.example.result`, Terraform should create 2 resources.
However, Terraform is not so sophisticated that it can figure out
that the keys of the possible maps yield lists of the same length,
regardless of which map is chosen. Instead, it will say that the value of
`random_integer.example.result` is unknown, so the element of `local.config_map`
is unknown, and so on.
This is what we referred to above as where Terraform will complain about
theoretically unknown values in some cases where it actually could figure out
the value during the plan phase.
###### One Solution to This Particular Problem
```hcl
locals {
  visibility = random_integer.example.result == 1 ? "public" : "private"

  config_map = {
    public = {
      subnets = ["subnet-0abcd1234efgh5678", "subnet-1abcd1234efgh5679"]
    }
    private = {
      subnets = ["subnet-2abcd1234efgh5678", "subnet-3abcd1234efgh5679"]
    }
  }

  subnets = [local.config_map[local.visibility].subnets[0], local.config_map[local.visibility].subnets[1]]
}

resource "null_resource" "example" {
  count = length(local.subnets)
}
```
:::tip
We saw before that Terraform could not deduce that the length of
`local.config_map[local.visibility].subnets` was 2 regardless of the value of
`local.visibility`. However, by explicitly creating a list with 2 entries,
Terraform knows the length of the list, and the plan will succeed. This is
not the only way to get around the problem, but it is a common one.
:::
##### Explicit Transformations of Lists
One good thing about using the length of a list to determine the count is
that the length of the list can be known even if the values of the list are
not. As we saw in the previous example, the length of `local.subnets` was
known even if the subnet IDs in the list were not, and that was sufficient.
On the other hand, transformations of a list with unknown values can make the
length of the list once again unknown.
- `compact` will remove `null` values from a list and `flatten` will remove
empty nested lists (`length(flatten([1, [], 2]))` is 2), so the length of the
list will become unknown unless all of the values are known, even if you
as a human can tell there are no `null` values or empty nested lists.
- `distinct` and `toset()` will remove duplicate values from a list, so again
the length of the list will become unknown unless all of the values are known.
- `sort`, prior to Terraform version 1.6, [would make the length of the list
unknown](https://github.com/hashicorp/terraform/issues/31035) unless all of the values were known.
:::tip
When providing a list that will be used to determine the number of resources
to create, it is important to avoid using any transformations that can cause
the length of the list to become unknown.
:::
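Continuing with `random_integer` as a stand-in for an unknown value, here is a sketch of how `compact` re-introduces the problem:

```hcl
locals {
  # The raw list has a known length (2) even though one element is unknown.
  raw_ids = [tostring(random_integer.example.result), "known-id"]

  # compact() must inspect every value to drop nulls, so when any value is
  # unknown the length of the result becomes unknown too.
  compacted_ids = compact(local.raw_ids)
}

resource "null_resource" "example" {
  count = length(local.compacted_ids) # fails at plan time
}
```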
##### Implicit Transformations of Maps
For reasons detailed in [Terraform Count vs For Each](/learn/component-development/terraform-in-depth/terraform-count-vs-for-each),
it is usually preferable to use `for_each` rather than `count` to create
multiple resources. However, when using `for_each`, it is required that all
of the keys be known at plan time. If you use a list of strings to make the
keys, such as via `zipmap` or a `for` expression, then the list is implicitly
transformed into a set via the equivalent of `distinct(compact(list))`. As
explained [above](#explicit-transformations-of-lists), this will make the
length of the list unknown unless all of the values are known at plan time.
In general, keys to map inputs should be user-supplied configuration values
given as inputs, and not computed values. Most of the benefits of using maps
over lists are lost if you use computed values as keys, so only use maps
where it is likely that users can supply the keys.
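A sketch contrasting user-supplied keys with computed keys (resource and variable names are illustrative):

```hcl
variable "buckets" {
  # Keys are user-supplied configuration, so they are known at plan time.
  type    = map(object({ versioning = bool }))
  default = {}
}

# Works: the for_each keys come from configuration, even if the values
# reference computed attributes.
resource "null_resource" "ok" {
  for_each = var.buckets
}

# Fails at plan time: the keys are derived from a computed value, so the
# implicit distinct(compact(...)) conversion makes the set unknown.
resource "null_resource" "broken" {
  for_each = toset([tostring(random_integer.example.result)])
}
```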
---
## Learn the Concepts
import Intro from '@site/src/components/Intro';
import Steps from '@site/src/components/Steps';
import StepNumber from '@site/src/components/StepNumber';
import Step from '@site/src/components/Step';
import ReactPlayer from 'react-player';
Your platform ensures consistent service delivery every time. A well-designed platform seamlessly integrates with your monitoring, security, and compliance systems, building on your established foundation. Automated software delivery pipelines deploy new services quickly and easily. The reference architecture supports AWS EKS, Amazon ECS, and Lambda functions.
## Setup Prerequisites
## Onboard Yourself
## Review the Toolchain
## Familiarize Yourself with our Conventions
## Develop Your Own Components
Once you're done setting up your platform, our attention will shift to how you ship your software by leveraging GitHub Actions and GitHub Action Workflows.
---
## Conventions
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import ReactPlayer from 'react-player'
Here’s a summary of all of our conventions across Terraform, Stacks, Catalogs, etc.
## SweetOps Conventions
SweetOps is built on top of a number of high-level concepts and terminology that are critical to understanding prior to getting started. In this document, we break down these concepts to help you better understand our conventions as we introduce them.
### Components
[Components](/components) are opinionated, self-contained units of infrastructure as code that solve one, specific problem or use-case. SweetOps has two flavors of components:
1. **Terraform:** Stand-alone root modules that implement some piece of your infrastructure. For example, typical components might be an EKS cluster, RDS cluster, EFS filesystem, S3 bucket, DynamoDB table, etc. You can find the [full library of SweetOps Terraform components on GitHub](https://github.com/cloudposse/terraform-aws-components). We keep these types of components in the `components/terraform/` directory within the infrastructure repository.
2. **Helmfiles**: Stand-alone, applications deployed using `helmfile` to Kubernetes. For example, typical helmfiles might deploy the DataDog agent, cert-manager controller, nginx-ingress controller, etc. Similarly, the [full library of SweetOps Helmfile components is on GitHub](https://github.com/cloudposse/helmfiles). We keep these types of components in the `components/helmfile/` directory within the infrastructure repository.
One important distinction about components that is worth noting: components are opinionated “root” modules that typically call other child modules. Components are the building blocks of your infrastructure. This is where you define all the business logic for how to provision some common piece of infrastructure like ECR repos (with the [ecr](/components/library/aws/ecr/) component) or EKS clusters (with the [eks/cluster](/components/library/aws/eks/cluster/) component). Our convention is to stick components in the `components/terraform` directory and to use a `modules/` subfolder to provide child modules intended to be called by the components.
:::caution
We do not recommend consuming one terraform component inside of another, as that would defeat the purpose; each component is intended to be a loosely coupled unit of IaC with its own lifecycle. Furthermore, since components define a state backend, terraform does not support calling them from other modules.
:::
#### Additional Considerations
- Components should be opinionated. They define how _your_ company wants to deliver a service.
- Components should try to not rely on more than 2 providers, in order to have the most modular configuration. Terraform does not support passing a list of providers via a variable; instead, all the providers must be statically listed inside the module. Using 1-2 providers ensures a simple way exists to create any number of architectures with a given component (e.g. “primary” and “delegated” resources). There are few if any architectures with ternary/quaternary/etc. relationships between accounts, which is why we recommend sticking to 1-2 providers.
[https://github.com/hashicorp/terraform/issues/24476](https://github.com/hashicorp/terraform/issues/24476)
- Components should not have a configuration setting in their names (e.g. `components/terraform/eks-prod` is poor convention). Prod is a type of configuration and the component shouldn’t differ by stage, only by configuration. The acceptable exception to the convention is naming conventions `...-root` which can only be provisioned in the root account (E.g. AWS Organizations).
- Components should try to expose the same variables as the upstream child modules unless it would lead to naming conflicts.
- Components should use the `context.tf` pattern. See [Terraform](/resources/legacy/fundamentals/terraform).
- Components should have a `README.md` with sample usage
- Components should be well formatted (e.g. `terraform fmt`)
- Components should use `remote-state` where possible to obtain values automatically from other components. All `remote-state` lookups belong in the `remote-state.tf` file. See [How to Use Terraform Remote State](/learn/maintenance/tutorials/how-to-use-terraform-remote-state).
- Components should try to upstream as much business logic as possible to child modules to promote reuse.
- Components should use strict version pinning in components and lower-bound pinning in terraform modules. See [our best practice for this](/best-practices/terraform#use-miminum-version-pinning-on-all-providers). See [How to Keep Everything Up to Date](/learn/maintenance/upgrades/how-to-keep-everything-up-to-date) after pinning. See [Proposed: Use Strict Provider Pinning in Components](/resources/adrs/proposed/proposed-use-strict-provider-pinning-in-components) for more context.
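As a sketch of the pinning recommendation (versions shown are placeholders, not current releases):

```hcl
# In a component (root module): strict version pinning.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "= 4.67.0" # exact pin; placeholder version
    }
  }
}

# In a reusable child module, use lower-bound pinning instead, so the
# component's strict pin wins:
#
#   version = ">= 4.0"
```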
### Stacks
We use [Stacks](/resources/legacy/fundamentals/stacks) to define and organize configurations. We place terraform “root” modules in the `components/terraform` directory (e.g. `components/terraform/s3-bucket`). Then we define one or more catalog archetypes for using the component (e.g. `catalog/s3-bucket/logs.yaml` and `catalog/s3-bucket/artifacts`).
Stacks are a way to express the complete infrastructure needed for an environment using a standard YAML configuration format that has been developed by Cloud Posse. Stacks consist of components and the variable inputs to those components. For example, you configure a stack for each AWS account and then reference the components which comprise that stack. The more modular the components, the easier it is to quickly define a stack without writing any new code.
Here is an example stack defined for a Dev environment in the us-west-2 region:
```yaml
# Filename: stacks/uw2-dev.yaml
import:
  - eks/eks-defaults

vars:
  stage: dev

terraform:
  vars: {}

helmfile:
  vars:
    account_number: "1234567890"

components:
  terraform:
    dns-delegated:
      vars:
        request_acm_certificate: true
        zone_config:
          - subdomain: dev
            zone_name: example.com
    vpc:
      vars:
        cidr_block: "10.122.0.0/18"
    eks:
      vars:
        cluster_kubernetes_version: "1.19"
        region_availability_zones: ["us-west-2b", "us-west-2c", "us-west-2d"]
        public_access_cidrs: ["72.107.0.0/24"]
    aurora-postgres:
      vars:
        instance_type: db.r4.large
        cluster_size: 2
    mq-broker:
      vars:
        apply_immediately: true
        auto_minor_version_upgrade: true
        deployment_mode: "ACTIVE_STANDBY_MULTI_AZ"
        engine_type: "ActiveMQ"
  helmfile:
    external-dns:
      vars:
        installed: true
    datadog:
      vars:
        installed: true
        datadogTags:
          - "env:uw2-dev"
          - "region:us-west-2"
          - "stage:dev"
```
Great, so what can you do with a stack? Stacks are meant to be a language and tool agnostic way to describe infrastructure, but how to use the stack configuration is up to you. We provide the following ways to utilize stacks today:
1. [atmos](https://github.com/cloudposse/atmos): atmos is a command-line tool that enables CLI-driven stack utilization and supports workflows around `terraform`, `helmfile`, and many other commands
2. [terraform-provider-utils](https://github.com/cloudposse/terraform-provider-utils): is our terraform provider for consuming stack configurations from within HCL/terraform.
3. [Spacelift](https://spacelift.io/): By using the [terraform-spacelift-cloud-infrastructure-automation module](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation), you can configure Spacelift to continuously deliver components. Read up on why we [Use Spacelift for GitOps with Terraform](/resources/adrs/adopted/use-spacelift-for-gitops-with-terraform).
### Global (Default) Region
The global region, annotated `gbl`, is an environment or region we use to deploy unique components. A component may be deployed in the `gbl` region for any of the following reasons:
1. The AWS Service itself is global (e.g. S3 bucket, Cloudfront)
1. The AWS Service is forced into a specific region (IAM, Route 53 - similar to AWS declaring something as "global")
1. The AWS Service should only be deployed exactly once across regions (AWS Identity Center)
1. The resource isn't in AWS, Kubernetes, or anywhere we can reasonably assign to a region. This is common with third-party providers such as Spacelift.
However, the AWS provider still needs a region to be defined. We set the global region to the primary region as default. This is intended to cause the least confusion when looking for resources, yet the "global region" can be any region.
### Catalogs
Catalogs in SweetOps are collections of sharable and reusable configurations. Think of the configurations in catalogs as defining archetypes (a very typical example of a certain thing) of configuration (E.g. `s3/public` and `s3/logs` would be two kinds of archetypes of S3 buckets). They are also convenient for managing [Terraform](/resources/legacy/fundamentals/terraform). These are typically YAML configurations that can be imported and provide solid baselines to configure security, monitoring, or other 3rd party tooling. Catalogs enable an organization to codify its best practices of configuration and share them. We use this pattern both with our public terraform modules as well as with our stack configurations (e.g. in the `stacks/catalog` folder).
SweetOps provides many examples of how to use the catalog pattern to get you started.
Today SweetOps provides a couple important catalogs:
1. [DataDog Monitors](https://github.com/cloudposse/terraform-datadog-monitor/tree/master/catalog/monitors): Quickly bootstrap your SRE efforts by utilizing some of these best practice DataDog application monitors.
2. [AWS Config Rules](https://github.com/cloudposse/terraform-aws-config/tree/master/catalog): Quickly bootstrap your AWS compliance efforts by utilizing hundreds of [AWS Config](https://aws.amazon.com/config/) rules that automate security checks against many common services.
3. [AWS Service Control Policies](https://github.com/cloudposse/terraform-aws-service-control-policies/tree/master/catalog): define what permissions in your organization you want to permit or deny in member accounts.
In the future, you’re likely to see additional open-source catalogs for OPA rules and tools to make sharing configurations even easier. But it is important to note that how you use catalogs is really up to you to define, and the best catalogs will be specific to your organization.
### Collections
Collections are groups of stacks.
### Segments
Segments are interconnected networks. For example, a production segment connects all production-tier stacks, while a non-production segment connects all non-production stacks.
### Primary vs Delegated
Primary vs Delegated is a common implementation pattern in SweetOps. This is most easily described when looking at the example of domain and DNS usage in a multi-account AWS organization: SweetOps takes the approach that the root domain (e.g. `example.com`) is owned by a **primary** AWS account where the apex zone resides. Subdomains on that domain (e.g. `dev.example.com`) are then **delegated** to the other AWS accounts via an `NS` record on the primary hosted zone which points to the delegated hosted zone’s name servers.
You can see examples of this pattern in the [dns-primary](/components/library/aws/dns-primary/), [dns-delegated](/components/library/aws/dns-delegated/) and [iam-primary-roles](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated/iam-primary-roles) / [iam-delegated-roles](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated/iam-delegated-roles) components.
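A hedged sketch of the delegation mechanics (zone names and provider aliases are illustrative, and the apex zone is assumed to be defined elsewhere):

```hcl
# In the delegated account: the subdomain's hosted zone.
resource "aws_route53_zone" "delegated" {
  provider = aws.delegated
  name     = "dev.example.com"
}

# In the primary account: an NS record on the apex zone pointing at the
# delegated zone's name servers.
resource "aws_route53_record" "delegation" {
  provider = aws.primary
  zone_id  = aws_route53_zone.primary.zone_id # apex zone defined elsewhere
  name     = "dev.example.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.delegated.name_servers
}
```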
### Live vs Model (or Synthetic)
Live represents something that is actively being used. It differs from stages like “Production” and “Staging” in the sense that both stages are “live” and in-use. While terms like “Model” and “Synthetic” refer to something which is similar, but not in use by end-users. For example, a live production vanity domain of `acme.com` might have a synthetic vanity domain of `acme-prod.net`.
### Docker Based Toolbox (aka Geodesic)
In the landscape of developing infrastructure, there are dozens of tools that we all need on our personal machines to do our jobs. In SweetOps, instead of having you install each tool individually, we use Docker to package all of these tools into one convenient image that you can use as your infrastructure automation toolbox. We call it [Geodesic](/resources/legacy/fundamentals/geodesic) and we use it as our DevOps automation shell and as the base Docker image for all of our DevOps tooling.
Geodesic is a DevOps Linux Distribution packaged as a Docker image that provides users the ability to utilize `atmos`, `terraform`, `kubectl`, `helmfile`, the AWS CLI, and many other popular tools that comprise the SweetOps methodology without having to invoke a dozen `install` commands to get started. It’s intended to be used as an interactive cloud automation shell, a base image, or in CI/CD workflows to ensure that all systems are running the same set of versioned, easily accessible tools.
### Vendoring
Vendoring is a strategy of importing external dependencies into a local source tree or VCS. Many languages (e.g. NodeJS, Golang) natively support the concept. However, many other tools do not address how to do vendoring, notably `terraform`.
There are a few reasons to do vendoring. Sometimes the tools we use do not support importing external sources. Other times, we need to make sure to have full-control over the lifecycle or versioning of some code in case the external dependencies go away.
Our current approach to vendoring third-party software dependencies is to use [vendir](https://github.com/vmware-tanzu/carvel-vendir) when needed.
Example use-cases for Vendoring:
1. Terraform is one situation where it’s needed. While terraform supports child modules pulled from remote sources, components (aka root modules) cannot be pulled from remotes.
2. GitHub Actions do not currently support importing remote workflows. Using `vendir` we can easily import remote workflows.
### Generators
Generators in SweetOps are the pattern of producing code or configuration when existing tools have shortcomings that cannot be addressed through standard IaC. This is best explained through our use-cases for generators today:
1. In order to deploy AWS Config rules to every region enabled in an AWS Account, we need to specify a provider block and consume a compliance child module for each region. Unfortunately, [Terraform does not currently support the ability to loop over providers](https://github.com/hashicorp/terraform/issues/19932), which results in needing to manually create these provider blocks for each region that we’re targeting. On top of that, not every organization uses the same types of accounts, so a hardcoded solution is not easily shared. Therefore, to avoid tedious manual work, we use the generator pattern to create the `.tf` files which specify a provider block for each region and the corresponding AWS Config child module.
2. Many tools for AWS work best when profiles have been configured in the AWS Configuration file (`~/.aws/config`). If we’re working with dozens of accounts, keeping this file current on each developer’s machine is error prone and tedious. Therefore we use a generator to build this configuration based on the accounts enabled.
3. Terraform backends do not support interpolation. Therefore, we define the backend configuration in our YAML stack configuration and use `atmos` as our generator to build the backend configuration files for all components.
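For context on the third use-case, backend blocks cannot contain interpolation, which is why the configuration must be generated as literal values. A sketch of what the generated output looks like (the bucket and key names below are hypothetical):

```hcl
# atmos generates a backend configuration file per component; the
# equivalent HCL looks like this. Note that something like
# `bucket = var.state_bucket` is invalid here: backend arguments
# must be literals, hence the generator.
terraform {
  backend "s3" {
    bucket = "acme-uw2-dev-tfstate"
    key    = "vpc/terraform.tfstate"
    region = "us-west-2"
  }
}
```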
### The 4-Layers of Infrastructure
We believe that infrastructure fundamentally consists of 4 layers. We build infrastructure starting from the bottom layer and work our way up.
Each layer builds on the previous one, and our structure is only as solid as our foundation. The tools at each layer vary and augment the underlying layers. Every layer has its own SDLC and is free to update independently of the other layers. The 4th and final layer is where your applications are deployed. While we believe in using terraform for layers 1-3, we believe it’s acceptable to introduce another layer of tools (e.g. Serverless Framework, CDK, etc.) to support application developers, since we’ve built a solid, consistent foundation.
## Terraform
### Mixins
Terraform does not natively support the object-oriented concepts of multiple inheritance or [mixins](https://en.wikipedia.org/wiki/Mixin), but we can simulate them by convention. For our purposes, we define a mixin in terraform as a controlled way of adding functionality to modules. When a mixin file is dropped into a folder of a module, the code in the mixin starts to interact with the code in the module. A module can have as many mixins as needed. Since terraform does not support this directly, we instead use a convention of exporting what we want to reuse.
We achieve this currently using something we call an `export` in our terraform modules, which publish some reusable terraform code that we copy verbatim into modules as needed. We use this pattern with our `terraform-null-label` using the `context.tf` file pattern (See below). We also use this pattern in our `terraform-aws-security-group` module with the [https://github.com/cloudposse/terraform-aws-security-group/blob/main/exports/security-group-variables.tf](https://github.com/cloudposse/terraform-aws-security-group/blob/main/exports/security-group-variables.tf).
To follow this convention, create an `exports/` folder with the mixin files you wish to export to other modules. Then simply copy them over (e.g. with `curl`). We recommend giving the installed files a `.mixin.tf` suffix so it’s clear they are external assets.
### Resource Factories
Resource Factories provide a custom declarative interface for defining multiple resources using YAML and then terraform for implementing the business logic. Most of our new modules are developed using this pattern so we can decouple the architecture requirements from the implementation.
See [https://medium.com/google-cloud/resource-factories-a-descriptive-approach-to-terraform-581b3ebb59c](https://medium.com/google-cloud/resource-factories-a-descriptive-approach-to-terraform-581b3ebb59c) for a related discussion.
To better support this pattern, we implemented native support for deep merging in terraform using our [https://github.com/cloudposse/terraform-provider-utils](https://github.com/cloudposse/terraform-provider-utils) provider as well as implemented a module to standardize how we use YAML configurations [https://github.com/cloudposse/terraform-yaml-config](https://github.com/cloudposse/terraform-yaml-config).
Examples of modules using Resource Factory convention:
- [https://github.com/cloudposse/terraform-aws-service-control-policies](https://github.com/cloudposse/terraform-aws-service-control-policies)
- [https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation)
- [https://github.com/cloudposse/terraform-datadog-platform](https://github.com/cloudposse/terraform-datadog-platform)
- [https://github.com/cloudposse/terraform-opsgenie-incident-management](https://github.com/cloudposse/terraform-opsgenie-incident-management)
- [https://github.com/cloudposse/terraform-aws-config](https://github.com/cloudposse/terraform-aws-config)
### Naming Conventions (and the `terraform-null-label` Module)
Naming things is hard. We’ve made it easier by defining a programmatically consistent naming convention, which we use in everything we provision. It is designed to generate consistent, human-friendly names and tags for resources. We implement it with a terraform module that accepts a number of standardized inputs and produces an output with the fully disambiguated ID. This module establishes the common interface we use in all of our terraform modules in the Cloud Posse ecosystem. Use `terraform-null-label` to implement a strict naming convention. We use it in all of our [Components](/components) and export something we call the `context.tf` pattern.
[https://github.com/cloudposse/terraform-null-label](https://github.com/cloudposse/terraform-null-label)
There are 6 inputs considered "labels" or "ID elements" (because the labels are used to construct the ID):
1. `namespace`
2. `tenant`
3. `environment`
4. `stage`
5. `name`
6. `attributes`
This module generates IDs using the following convention by default: `{namespace}-{environment}-{stage}-{name}-{attributes}`. However, it is highly configurable. The delimiter (e.g. `-`) is configurable. Each label item is optional (although you must provide at least one).
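For example, a hypothetical invocation of the module (the input values and pinned version are illustrative):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # pin to a released version

  namespace   = "acme"
  environment = "ue1"
  stage       = "prod"
  name        = "app"
  attributes  = ["api"]
}

# module.label.id  => "acme-ue1-prod-app-api"
# module.label.tags contains the matching "Namespace", "Stage", etc. tags
```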
#### Tenants
`tenants` are a Cloud Posse construct used to describe a collection of accounts within an Organizational Unit (OU). An OU may have multiple tenants, and each tenant may have multiple AWS accounts. For example, the `platform` OU might have two tenants named `dev` and `prod`. The `dev` tenant can contain accounts for the `staging`, `dev`, `qa`, and `sandbox` environments, while the `prod` tenant only has one account for the `prod` environment.
By separating accounts into these logical groupings, we can organize accounts at a higher level, follow AWS Well-Architected Framework recommendations, and enforce environment boundaries easily.
### The `context.tf` Mixin Pattern
Cloud Posse Terraform modules all share a common `context` object that is meant to be passed from module to module. A `context` object is a single object that contains all the input values for `terraform-null-label`, and every `cloudposse/terraform-*` module uses it to ensure a common interface across all of our modules. By convention, we install this file as `context.tf`, which is why we call it the `context.tf` pattern. By default, we always provide an instance of it accessible via `module.this`, which makes it easy to always get your _context._ 🙂
Every input value can also be specified individually by name as a standard Terraform variable, and the value of those variables, when set to something other than `null`, will override the value in the context object. In order to allow chaining of these objects, where the context object input to one module is transformed and passed on to the next module, all the variables default to `null` or empty collections.
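For instance, chaining might look like this inside a module that has installed `context.tf` (module and attribute names are illustrative):

```hcl
# `module.this` exposes the merged context. Pass it along to a child
# label and override only what differs.
module "subnet_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  context    = module.this.context
  attributes = ["subnet"]
}
```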
### Atmos CLI
We predominantly call `terraform` from `atmos`, however, by design all of our infrastructure code runs without any task runners. This is in contrast to tools like `terragrunt` that manipulate the state of infrastructure code at run time.
See [How to use Atmos](/learn/maintenance/tutorials/how-to-use-atmos)
## Helm
### Charts as an Interface
Typically, in a service-oriented architecture (SOA) aka microservices architecture, there will be dozens of very similar services. Traditionally, companies would develop a “unique” helm chart for each of these services. In reality, the charts were generated by running the `helm create` ([https://helm.sh/docs/helm/helm_create/](https://helm.sh/docs/helm/helm_create/) ) command that would generate all the boilerplate. As a result, the services would share 99% of their DNA with each other (e.g. like monkeys and humans), and 1% would differ. This led to a lot of tech debt, sprawl, and copy & paste 🍝 mistakes.
For proprietary apps deployed by your organization, we recommend taking a different tack when developing helm charts. Instead, treat charts like an interface - the way you want to deploy apps to Kubernetes. Develop 1-2 charts based on the patterns you want your developers to use (e.g. microservice, batchjob, etc). Then parameterize things like the `image`, `env` values, `resource` limits, `healthcheck` endpoints, etc. Think of charts like developing your own Heroku-like mechanism to deploy an application. Instead of writing 2 dozen charts, maintain one. Make your apps conform to the convention. Push back on changes to the convention unless necessary.
**What if we need more than one deployment (or XYZ) in a chart?** That’s easy. You have a few options: a) Deploy the same chart twice; b) Decide if as an organization you want to support that interface and then extend the chart; c) Develop an additional chart interface.
**What if we want to add a feature to our chart and don’t want to disrupt other services?** No problem. Charts are versioned. So think of the version of a chart as the version of your interface. Every time you change the chart, bump the version. Ensure all your services pin to a version of the chart. Now when you change the version of the chart in your service, you know that you’re upgrading your interface as well.
**What if we need some backing services?** Good question. You can still use the features of umbrella charts, and even feature flag common things like deploying a database backend for development environments by using a `condition` in the `requirements.yaml` that can be toggled in the `values.yaml`. _Pro-tip:_ Use [https://artifacthub.io/](https://artifacthub.io/) to find ready-made charts you can use.
```yaml
- name: elasticsearch
  version: ^1.17.0
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: elasticsearch.enabled
- name: kibana
  version: ^1.1.0
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: kibana.enabled
```
---
## Upgrade & Maintain
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## The Problem
Even if we codify our infrastructure, that doesn't mean our job is done.
Time needs to be spent updating components, adding features, and fixing bugs.
Over time this churn can create significant tech debt or worse, stagnation.
Moreover, open source software isn't a silver bullet of 'free updates'
(i.e. projects can become stale or abandoned).
## Our Solution
We discuss many of the common tasks, best-practices, and the means of automation, along
with guidance on how to lean on that automation without overdoing it early on.
The trick with ownership is to take things in stride, starting slow and building up
automation.
## What needs to be maintained
### Making decisions
Read through your ADR documents and make sure that they are up to date.
When you make a deliberate effort to change something, try in earnest to document those changes.
This doesn't mean you need to write up components, but certainly express when one technology should
be preferred over another, and discuss patterns that your team should adopt and rehearse.
Good ADR Examples:
- Technology choices (adding, removing)
- Scaling up infrastructure (new orgs, new regions, etc)
- Analysis, direction, or principles of patterns
### Creating components
We have a [separate guide on authoring components](/learn/component-development) that
can guide you through the methods, but we should also talk about the impact on maintenance
separately.
[Make sure your component doesn't already exist](https://github.com/cloudposse/terraform-aws-components)
and that includes [checking for PRs](https://github.com/cloudposse/terraform-aws-components/pulls).
Maximize the use of modules by searching the [Terraform Registry](https://registry.terraform.io/) and
remember that there is a high cost to components:
- Keep your providers up to date:
  - GitOps and Spacelift solutions will warn you about deprecations in their logs. Read them and
    diligently create issues for them.
- Dependencies should be carefully considered:
- Avoid mixing global and regional resources. Two smaller components will compose better
- If you need to make many instances of a resource, consider drying that up with Atmos
- i.e. if you need to make 4 VPCs, then make 4 instances of a component that produces the VPC
- It is significantly easier to use Atmos for DRYing configuration rather than Terraform
- Maintenance will include disabling/enabling components. Make sure that your component respects this
flag or it could be very difficult to update and extend.
- Consider versioning and maintaining components outside of your infrastructure repo. If you plan
for other organizations to use your component, make sure you practice vendoring.
### Updating components
Components in Atmos support vendoring. This means you can version them independently of your
infrastructure to best manage the operational cost of updating them.
Make sure you [read up on how vendoring works in Atmos](https://atmos.tools/core-concepts/components/vendoring)
and carefully read [release information](https://github.com/cloudposse/terraform-aws-components/releases)
for risks and breaking changes.
### Updating infrastructure
When you are working on altering your `stacks` folder, Atmos has several features to help
manage the sprawl. [Be sure to read up on how Atmos manages stacks](https://atmos.tools/core-concepts/stacks).
Some key patterns for success while maintaining stacks:
- [Validate stacks often while configuring them](https://atmos.tools/core-concepts/stacks/validation)
- [Use the `describe` command to look at imported files](https://atmos.tools/core-concepts/stacks/describing)
- Try to dry up catalog entries after `atmos terraform plan` is working, not before
- Often, catalog patterns emerge once your components are configured in many environments
- Mixin and layer patterns emerge over many PRs and with maturity. Rushing them can lead to significant tech debt
### Operational Headaches
Some situations you should plan for include:
- Expect `atmos terraform destroy` to fail. Test with `enabled=false`, then destroy.
- Updating runners for things like GitOps, GitHub Actions, or Spacelift can be a catch-22. Carefully consider that while you replace them, they could destroy themselves or otherwise mess up state locks.
- What order of operations does a set of infrastructure pieces take?
- Document all required ClickOps steps. Many APIs, like AWS's, still have these
- Tools like [Spacelift](https://docs.spacelift.io/concepts/stack/stack-dependencies) understand dependencies.
[You can use Atmos to make sure they are tracked](https://atmos.tools/cli/commands/describe/dependents)
- Consider [Atmos Workflows](https://atmos.tools/cli/commands/workflow) when steps need to manage resources in peculiar fashions such as using the [Terraform `-target` flag](https://developer.hashicorp.com/terraform/cli/commands/plan#target-address)
- It's always easier to add/remove than to mutate. Prefer replacing components whenever you are making complex changes.
- If availability or global dependencies are a concern:
- ADR Docs should be present to discuss risks and describe how they are mitigated
- Consider using a new stage. The [AWS Well-Architected](https://aws.amazon.com/architecture/well-architected/) and
[12-factor](https://12factor.net/) patterns go over the patterns of a good platform.
_You are maintaining a platform_.
### Secrets rotation
Simply put, SSM Parameter Store is very helpful, but it won't let you know about rotation and drift.
- Consider [using developer automation features in 1Password](https://1password.com/developers/secrets-management) to help with secret rotation
- You can also use an ADR to document credentials that should expire and when
- [Atmos workflows](https://atmos.tools/cli/commands/workflow) can be used to rotate secrets
- [Terraform has the `time_rotating` resource](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/rotating)
- Make sure you are using bot accounts or applications for GitHub secrets.
If any of the tokens in your 1Password vault are personal, there will be foreseeable problems.
You can use the `gh auth status` command from the `gh` cli to verify the user of each token.
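A minimal sketch of the `time_rotating` approach from the list above (the resource names and the 90-day window are illustrative):

```hcl
resource "time_rotating" "db_password" {
  rotation_days = 90
}

# Tying a generated secret to the rotation timestamp forces it to be
# regenerated on the first apply after the window elapses.
resource "random_password" "db" {
  length = 32
  keepers = {
    rotated_at = time_rotating.db_password.id
  }
}
```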
## Automation and tooling
### Renovate
Renovate is a swiss-army knife for keeping abreast of changes in open source software.
Some leading patterns and best practices include:
- [Renovate can watch for releases and notify you on a dashboard](https://docs.renovatebot.com/key-concepts/dashboard/)
- [Renovate can watch Dockerfiles](https://docs.renovatebot.com/modules/datasource/docker/)
- [Renovate can notify EndOfLife cycles](https://docs.renovatebot.com/modules/datasource/endoflife-date/)
- Make sure you consider your platform, such as Kubernetes or AMI distributions
- Consider the aforementioned "dashboard" feature so you avoid alert fatigue
- [Renovate can watch terraform modules](https://docs.renovatebot.com/modules/datasource/terraform-module)
- [Renovate can watch terraform providers](https://docs.renovatebot.com/modules/datasource/terraform-provider)
Since it can be daunting to configure Renovate for everything, we recommend
starting with only the most basic and crucial sources of tech debt:
- Make sure Geodesic updates create PRs
- You'll get a lot of automated updates from this alone, including patches to Terraform and `aws-cli`
- Create module and provider rules for custom components
### UpdateCLI
[UpdateCLI](https://www.updatecli.io/) is a tool that can be used to update
many different types of software and can implement auto-discovery.
While the configuration is more complex than Renovate, it can be customized to
do much more in-depth automation.
Considerations:
- Auto-discovery quickly leads to alert fatigue. Consider it only for high-churn dependencies
- Updating stacks is possible, and you can even update AMI searches or db versions, but make sure
you have a good understanding of the impact of the change before you automate it.
## Atmos Component Updater
Atmos has a [Component Updater](https://atmos.tools/integrations/github-actions/component-updater)
which can be enabled as a GitHub action.
The Atmos Component Updater will automatically suggest pull requests in your new repository. To do so, we need to create and install a GitHub App and allow GitHub Actions to create and approve pull requests within your GitHub Organization. For more on the Atmos Component Updater, see [atmos.tools](https://atmos.tools/integrations/github-actions/component-updater).
1. Ensure [all requirements are met](https://atmos.tools/integrations/github-actions/component-updater/#requirements).
1. Set up a [GitHub App](https://atmos.tools/integrations/github-actions/component-updater/#using-a-github-app) with
   permission to create Pull Requests. We use a GitHub App because Pull Requests will only trigger other GitHub Action
   Workflows if the Pull Request is created by a GitHub App or PAT.
1. Create a
[GitHub Environment](https://atmos.tools/integrations/github-actions/component-updater/#using-github-environments).
With environments, the Atmos Component Updater workflow will be required to follow any branch protection rules before
running or accessing the environment's secrets. Plus, GitHub natively organizes these Deployments separately in the
   GitHub UI.
## Maturing infrastructure
Many of the topics above concern maturing infrastructure. As you grow, you will
find many patterns in how your platform responds to business needs.
This takes time.
Make sure that you retro your platform regularly. Patterns to consider for maturing your
infrastructure include:
- Monthly meetings to sync on tech debt, outages, and vulnerabilities
- Rotating ownership of components
- Reviewing telemetry and auditing PRs
## References
- [Renovate](https://www.mend.io/renovate/)
- [UpdateCLI](https://www.updatecli.io/)
- [Atmos](https://atmos.tools/)
- [AWS Well-Architected](https://aws.amazon.com/architecture/well-architected/)
- [12-factor](https://12factor.net/)
## FAQs
### How can I quickly patch a newly vendored component?
We recommend using [Terraform override files](https://developer.hashicorp.com/terraform/language/files/override)
to quickly patch a component.
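For example, a hypothetical override file dropped next to a vendored component (the file path and resource address are illustrative and must match blocks that actually exist in the component):

```hcl
# components/terraform/vpc/patch_override.tf
# Terraform merges any `override.tf` or `*_override.tf` file over the
# matching blocks, so the vendored source files stay untouched and the
# patch survives re-vendoring diffs.
resource "aws_vpc" "default" {
  enable_dns_hostnames = true
}
```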
### What if the resources in my component need to move after vendoring?
Consider using [Terraform `moved` configuration](https://developer.hashicorp.com/terraform/tutorials/configuration-language/move-config#move-your-resources-with-the-moved-configuration-block),
understanding that the state commands can also be codified in Atmos workflows.
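A sketch of a `moved` block (the addresses are hypothetical and must match your actual state):

```hcl
# Tell Terraform the resource was refactored into a module, so state is
# updated in place instead of destroying and recreating the resource.
moved {
  from = aws_security_group.default
  to   = module.security_group.aws_security_group.this[0]
}
```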
### Should I teach my infrastructure to update itself?
It's best to first do as much manual work as possible. Once you feel like you
have a well analyzed pattern, consider making a PR to add an ADR and discuss.
If the ADR holds up to criticism, it should encapsulate what you plan to automate.
### Developers want to iterate on infrastructure. How do I manage this?
If developers want to use your platform in a way that affects terraform state:
- Can these resources be released from state? Then give developers access with `aws-teams`
- If developers want to codify their own infrastructure outside of your platform:
- Do they just need extra environments? Codify them using [Atmos template imports](https://atmos.tools/core-concepts/stacks/imports/#go-templates-in-imports)
- Can they manage components in a separate repo? Then vendor their component repo. They can use the `sandbox` account to test their components.
Mostly, the platform you make will need room to iterate, but this can get costly quickly.
Make sure to start small and set goals to drive when you can increase the cost of the platform.
---
## Customize the Geodesic Shell
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
## Problem
Geodesic solves a lot of problems teams have, but it’s opinionated and probably doesn’t work exactly the way you like your shell to work. Maybe you would like more `alias` definitions, or maybe you don’t like our `alias` definitions. Maybe you want to change the colors of `ls` output (e.g. `DIR_COLORS`) or you just can’t stand using `vim` as the default editor.
## Solution
Our own developers couldn’t agree on what the best look-and-feel was for `geodesic` so we added support for customizations. This is how to customize Geodesic at launch time.
:::tip
Several features of Geodesic can be customized at launch time (rather than during the build of the Docker image) so that people can share an image yet still have things set up the way they like. This document describes how to configure the customization.
:::
:::caution
### WARNING
One of the key benefits of Geodesic is that it provides a consistent environment for all users regardless of their local machine. It eliminates the "it works on my machine" excuse. While these customization options can be great productivity enhancements as well as provide the opportunity to install new features to try them out before committing to installing them permanently, they can also create the kind of divergence in environments that brings back the "it works on my machine" problem.
Therefore, we have included an option to disable the customization files: the preferences, the overrides, and the docker environment files. Simply set and export the host environment variable `$GEODESIC_CUSTOMIZATION_DISABLED` to any value other than "false" before launching Geodesic.
:::
The way it works is users can place bash shell scripts on their host computer, which are read in either at the start of the `bash` profile script processing or at the end of it. These shell scripts can set up environment variables, command aliases, shell functions, etc., and through setting environment variables, can cause Geodesic to enable or disable certain features.
Users can also choose whether to have a single `bash` history file for all containers or to have separate history files. This is convenient if working with multiple geodesic containers.
### Root Directory for Configuration
All configuration files are stored under `$GEODESIC_CONFIG_HOME`, which defaults to `/localhost/.geodesic`. At this time, `/localhost` is mapped to the host `$HOME` directory and this cannot be configured yet, so all configuration files must be under `$HOME`, but within that limitation, they can be placed anywhere. So if you set `$GEODESIC_CONFIG_HOME` to `/localhost/work/config/geodesic`, then files would go in `~/work/config/geodesic/` and below on your Docker host machine.
### Resources
There are currently 3 resources used for configuration:
- **Preferences**, which are shell scripts loaded very early in the launch of the Geodesic shell.
- **Overrides**, which are shell scripts loaded very late in the launch of the Geodesic shell.
- `Bash` **History Files**, which store `bash` command line history.
Additionally, when Geodesic exits normally, it will run the host command `geodesic_on_exit` if it is available. This is intended to be a script that you write and install anywhere on your PATH to do whatever cleanup you want. For example, change the window title.
Both preferences and overrides can be either a single file, named `preferences` and `overrides` respectively, or can be a collection of files in directories named `preferences.d` and `overrides.d`. If they are directories, all the visible files in the directories will be sourced, except for hidden files and files with names matching the `GEODESIC_AUTO_LOAD_EXCLUSIONS` regex, which defaults to `(~|.bak|.log|.old|.orig|.disabled)$`.
`bash` history is always stored in a single file named `history`, never a directory of files nor files with any other name. If you want to use a separate history file for one Geodesic-based Docker image not shared by other Geodesic-based Docker images, you must create an empty `history` file in the image-specific configuration directory (see below).
### Configuration by File Placement
Resources can be in several places and will be loaded from most general to most specific, according to the name of the docker container image.
- The most general resources are the ones directly in `$GEODESIC_CONFIG_HOME`. These are applied first. To keep the top-level directory less cluttered and to avoid name clashes, you can put them in a subdirectory named `defaults`. If that subdirectory exists, then `GEODESIC_CONFIG_HOME` itself is not searched.
- The `DOCKER_IMAGE` name is then parsed. Everything before the final `/` is considered the "company" name and everything after is, following the Cloud Posse reference architecture, referred to as the "stage" name. So for the `DOCKER_IMAGE` name `cloudposse/geodesic`, the company name is `cloudposse` and the stage name is `geodesic`.
- The next place searched for resources is the directory with the same name as the "company". In our example, that would be `~/.geodesic/cloudposse`. Resources here would apply to all containers from the same company.
- The next place searched for resources is the directory with the same name as the "stage", which is generally the name of the project. In our example, that would be `~/.geodesic/geodesic`. Resources here would apply to all containers with the same base name, perhaps various forks of the same project.
- The final place searched is the directory with the full name of the Docker image: `$GEODESIC_CONFIG_HOME/$DOCKER_IMAGE`, i.e. `~/.geodesic/cloudposse/geodesic`. Files here are the most specific to this container.
By loading them in this order, you can put your defaults at one level and then override/customize them at another, minimizing the amount of duplication needed to customize a wide range of containers.
### Usage details
Preferences and Overrides are loaded in the order specified above and all that are found are loaded. For history files, only the last one found is used. To start keeping separate history, just create an empty history file in the appropriate place.
While Preferences and Override files themselves must be `bash` scripts and will be directly loaded into the top-level Geodesic shell, they can of course call other programs. You can even use them to pull configurations out of other places.
Symbolic links must be relative if you want them to work both inside Geodesic and outside of it. Symbolic links that reference directories that are not below `$HOME` on the host will not work.
When possible, Geodesic mounts the host `$HOME` directory as `/localhost` and creates a symbolic link from `$HOME` to `/localhost` so that files under `$HOME` on the host can be referenced by the exact same absolute path both on the host computer and inside Geodesic. For example, if the host `$HOME` is `/Users/fred`, then `/Users/fred/src/example.sh` will refer to the same file both on the host and from inside the Geodesic shell.
In general, you should put most of your customization in the Preferences files. Geodesic (usually) takes care to respect and adapt to preferences set before it starts adding on top of them. The primary use for overrides is if you need the results of the initialization process as inputs to your configuration, or if you need to undo something Geodesic does not yet provide a configuration option for not doing in the first place.
### Example: Adding Aliases and Environment Variables
Add the following to `~/.geodesic/defaults/preferences`
```
# Add an alias for `kubectl`
alias kc='kubectl'
alias ll='ls -al'

# Add an alias to easily run `geodesic` inside of kubernetes
alias debugpod='kubectl run remote-shell-example --image=public.ecr.aws/cloudposse/geodesic:latest-debian --rm=true -i -t --restart=Never --env="BANNER=Geodesic" -- -l'

export AWS_ASSUME_ROLE_TTL=8h
export AWS_CHAINED_SESSION_TOKEN_TTL=8h

if [[ "$USE_AWS_VAULT" = "true" ]]; then
  export AWS_VAULT_SERVER_ENABLED=true
  export AWS_VAULT_ASSUME_ROLE_TTL=8h
  # Install the Debian package from cloudposse/packages for `aws-vault`
  apt-get install -y aws-vault
fi
```
### Example: Customize the command prompt
You can set each of the 4 glyphs used by the command line prompt, plus the host file system designator, individually:
- `ASSUME_ROLE_ACTIVE_MARK` is the glyph to show when you have AWS credentials active. Defaults to a green, bold, '√' SQUARE ROOT (looks like a check mark): `$'\u221a'`
- `ASSUME_ROLE_INACTIVE_MARK` is the glyph to show when you do not have AWS credentials active. Defaults to a red, bold, '✗' BALLOT X: `$'\u2717'`
- `BLACK_RIGHTWARDS_ARROWHEAD` is the glyph at the end of the prompt. The troublesome default is '⨠' Z NOTATION SCHEMA PIPING: `$'\u2a20'`
- `BANNER_MARK` is the glyph at the start of the first line of a 2-line prompt that introduces the `BANNER` text. Defaults to '⧉', TWO JOINED SQUARES: `$'\u29c9'`
- `PROMPT_HOST_MARK` is added to the command line when the current working directory is on the host computer (via a bind mount) and not in the container. Defaults to '(HOST)' with "HOST" in red bold. Disable this feature by setting `export PROMPT_HOST_MARK=""`.
The default `BLACK_RIGHTWARDS_ARROWHEAD` is from the Unicode Supplemental plane and therefore may be missing from some systems. You can use `->` instead by adding
```
export BLACK_RIGHTWARDS_ARROWHEAD="->"
```
to your `~/.geodesic/defaults/preferences` file.
_Cautions_:
- You can set these variables to multiple characters, and use ANSI escape codes to change their colors, but in order for command line editing to continue to work properly, any non-printing characters must be escaped. We have had the best luck with starting the escape with `$'\x01'` and ending it with `$'\x02'`, but there are a couple of other, similar options. If you fail to do this, or do it incorrectly, your cursor will be incorrectly positioned when you edit a command line from your history, and what you see will not be what is executed.
- Command line editing can also be affected by ["ambiguous width characters"](https://www.unicode.org/reports/tr11/tr11-39.html). Unicode characters can be "narrow" or "wide" or "ambiguous". In practice in Roman (e.g. English) scripts, a "wide" character takes up the space of 2 "narrow" characters, and the standard Roman letters are all narrow. In older versions of Unicode, many Emoji are ambiguous width. If you use an ambiguous width character that prints wide but is interpreted as narrow, command line editing will suffer from incorrect cursor placement, as with non-printing characters above. This can be worked around by finding and selecting a preference such as "Treat ambiguous characters as wide" (if you can find such a preference), but we recommend just avoiding characters that cause problems.
### Troubleshooting
If customizations are not being found or are not working as expected, you can set the host environment variable `$GEODESIC_TRACE` to "custom" before launching Geodesic and a trace of the customization process will be output to the console.
---
## How to Define Stacks for Multiple Regions?
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
## Problem
We want to deploy a component or set of components to multiple regions. The components might need specific settings depending on the region. We want to be as DRY as possible but not compromise on the customization of the configuration.
## Solution
First, make sure you’re familiar with [Stacks](/resources/legacy/fundamentals/stacks) and [How to Use Imports and Catalogs in Stacks](/learn/maintenance/tutorials/how-to-use-imports-and-catalogs-in-stacks) with [Components](/components) Inheritance.
:::tip
Define one stack configuration for every region and simply import the catalog configuration with all components you want to reuse per region.
:::
Let’s say we want to deploy the [vpc](/components/library/aws/vpc/) into the AWS `us-east-1` and `us-west-2` regions in the `dev` account. We’ll want to customize the CIDR block, region, and availability zones used. Here’s how to do it...
1. Define a catalog entry for the common `vpc` configuration. This is where we can define our organization’s best practices for a VPC.
```yaml
# stacks/catalog/vpc.yaml
components:
  terraform:
    vpc:
      backend:
        s3:
          workspace_key_prefix: vpc
      settings:
        spacelift:
          workspace_enabled: true
      vars:
        enabled: true
        subnet_type_tag_key: acme.net/subnet/type
        vpc_flow_logs_enabled: true
        # vpc_flow_logs_bucket_environment_name: uw2
        vpc_flow_logs_bucket_stage_name: audit
        vpc_flow_logs_bucket_tenant_name: mgmt
        vpc_flow_logs_traffic_type: ALL
```
2. Now define a stack configuration for the `us-east-1` region.
```yaml
# stacks/ue1-dev.yaml
import:
  - catalog/vpc

# Define the global variables for this region
vars:
  region: us-east-1
  environment: ue1

components:
  terraform:
    vpc:
      vars:
        cidr_block: 10.1.0.0/18
        vpc_flow_logs_bucket_environment_name: ue1
        availability_zones:
          - "us-east-1a"
          - "us-east-1b"
          - "us-east-1c"
```
3. Then repeat the process and define a stack configuration for the `us-west-2` region.
```yaml
# stacks/uw2-dev.yaml
import:
  - catalog/vpc

# Define the global variables for this region
vars:
  region: us-west-2
  environment: uw2

components:
  terraform:
    vpc:
      vars:
        cidr_block: 10.2.0.0/18
        vpc_flow_logs_bucket_environment_name: uw2
        availability_zones:
          - "us-west-2a"
          - "us-west-2b"
          - "us-west-2c"
```
Now use the standard [Atmos](/resources/legacy/fundamentals/atmos) commands to plan and apply the stack configurations.
---
## How to Document a New Design Decision
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import Steps from '@site/src/components/Steps';
## Problem
During the course of building and managing infrastructure, lots of decisions are made along the way, frequently through informal/casual conversations via Slack, Zoom calls, and whiteboarding. The people involved in making the decisions may come and go. When new team members are onboarded, they lack all the context from previous decisions and need a way to quickly come up to speed. Plus, with so many decisions, it’s sometimes hard to remember why a certain decision was made at the time. Usually, we make the best decisions based on the information, best practices, and options available at the time. However, as technology evolves, these options change, and it might no longer be obvious why a particular decision was made, or even which options were considered or ruled out.
## Solution
Design Decisions are anything we need to confirm architecturally or strategically before performing the implementation. They should include as much context as possible about which options were considered or ruled out. As part of this process, we’ll want to ask the right questions so we gather the necessary requirements for implementation. Once a decision is made, an ADR should be written to capture it. Learn [How to write ADRs](/learn/maintenance/tutorials/how-to-write-adrs).
## Process
1. **Identify the layer that the Design Decision is associated with.**
1. Review the other decisions to make sure there’s not one that is similar enough. In that case, we should enrich the context of that decision, rather than create a new one.
1. **Create the Design Decision.**
1. Title/Summary _must_ always begin with “Decide on” so that our automation will automatically recognize it as a Design Decision
2. Add the following 3 sections: (see template)
```markdown
## Status
U