## Variables
### Required Variables
- `policy_name` (`string`) required
  The name of the policy to create. It should be unique across the Spacelift account.
- `space_id` (`string`) required
  The `space_id` (slug) of the space the policy is in.
- `type` (`string`) required
  The type of the policy to create.
### Optional Variables
- `body` (`string`) optional
  The body of the policy to create. Mutually exclusive with `var.body_url` and `var.body_file_path`.
  **Default value:** `null`
- `body_file_path` (`string`) optional
  The local path to the file containing the policy body. Mutually exclusive with `var.body` and `var.body_url`.
  **Default value:** `null`
- `body_url` (`string`) optional
  The URL of the file containing the body of the policy to create. Mutually exclusive with `var.body` and `var.body_file_path`.
  **Default value:** `null`
- `body_url_version` (`string`) optional
  The optional policy version injected via a `%s` placeholder in `var.body_url`. It can be pinned to a version tag or a branch.
  **Default value:** `"master"`
- `labels` (`set(string)`) optional
  List of labels to add to the policy.
  **Default value:** `[]`
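
For example, a policy body can be loaded from a versioned URL, with `%s` standing in for `body_url_version`. A minimal sketch (the module source path and the policy URL are illustrative, following the same registry pattern as the sibling modules below):

```hcl
provider "spacelift" {}

module "policy" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-policy"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  policy_name = "enforce-tags"
  type        = "PLAN"
  space_id    = "root"

  # `%s` in the URL is replaced with `var.body_url_version`,
  # so the body can be pinned to a tag or a branch
  body_url         = "https://raw.githubusercontent.com/example-org/spacelift-policies/%s/enforce-tags.rego"
  body_url_version = "main"
}
```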
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `additional_tag_map` (`map(string)`) optional
  Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
  This is for some rare cases where resources want additional configuration of tags
  and therefore take a list of maps with tag key, value, and additional configuration.
  **Required:** No
  **Default value:** `{}`
- `attributes` (`list(string)`) optional
  ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
  in the order they appear in the list. New attributes are appended to the
  end of the list. The elements of the list are joined by the `delimiter`
  and treated as a single ID element.
  **Required:** No
  **Default value:** `[]`
- `context` (`any`) optional
  Single object for setting the entire context at once.
  See the description of individual variables for details.
  Leave string and numeric variables as `null` to use their default value.
  Individual variable settings (non-null) override settings in the context object,
  except for `attributes`, `tags`, and `additional_tag_map`, which are merged.
  **Required:** No
  **Default value:**
  ```hcl
  {
    "additional_tag_map": {},
    "attributes": [],
    "delimiter": null,
    "descriptor_formats": {},
    "enabled": true,
    "environment": null,
    "id_length_limit": null,
    "label_key_case": null,
    "label_order": [],
    "label_value_case": null,
    "labels_as_tags": [
      "unset"
    ],
    "name": null,
    "namespace": null,
    "regex_replace_chars": null,
    "stage": null,
    "tags": {},
    "tenant": null
  }
  ```
- `delimiter` (`string`) optional
  Delimiter to be used between ID elements.
  Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  **Required:** No
  **Default value:** `null`
- `descriptor_formats` (`any`) optional
  Describe additional descriptors to be output in the `descriptors` output map.
  Map of maps. Keys are names of descriptors. Values are maps of the form
  `{ format = string, labels = list(string) }`.
  (Type is `any` so the map values can later be enhanced to provide additional options.)
  `format` is a Terraform format string to be passed to the `format()` function.
  `labels` is a list of labels, in order, to pass to the `format()` function.
  Label values will be normalized before being passed to `format()` so they will be
  identical to how they appear in `id`.
  Default is `{}` (the `descriptors` output will be empty).
  **Required:** No
  **Default value:** `{}`
- `enabled` (`bool`) optional
  Set to `false` to prevent the module from creating any resources.
  **Required:** No
  **Default value:** `null`
- `environment` (`string`) optional
  ID element. Usually used for region, e.g. 'uw2', 'us-west-2', OR role, e.g. 'prod', 'staging', 'dev', 'UAT'.
  **Required:** No
  **Default value:** `null`
- `id_length_limit` (`number`) optional
  Limit `id` to this many characters (minimum 6).
  Set to `0` for unlimited length.
  Set to `null` to keep the existing setting, which defaults to `0`.
  Does not affect `id_full`.
  **Required:** No
  **Default value:** `null`
- `label_key_case` (`string`) optional
  Controls the letter case of the `tags` keys (label names) for tags generated by this module.
  Does not affect keys of tags passed in via the `tags` input.
  Possible values: `lower`, `title`, `upper`.
  Default value: `title`.
  **Required:** No
  **Default value:** `null`
- `label_order` (`list(string)`) optional
  The order in which the labels (ID elements) appear in the `id`.
  Defaults to `["namespace", "environment", "stage", "name", "attributes"]`.
  You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
  **Required:** No
  **Default value:** `null`
- `label_value_case` (`string`) optional
  Controls the letter case of ID elements (labels) as included in `id`,
  set as tag values, and output by this module individually.
  Does not affect values of tags passed in via the `tags` input.
  Possible values: `lower`, `title`, `upper` and `none` (no transformation).
  Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
  Default value: `lower`.
  **Required:** No
  **Default value:** `null`
- `labels_as_tags` (`set(string)`) optional
  Set of labels (ID elements) to include as tags in the `tags` output.
  Default is to include all labels.
  Tags with empty values will not be included in the `tags` output.
  Set to `[]` to suppress all generated tags.
  **Notes:**
  The value of the `name` tag, if included, will be the `id`, not the `name`.
  Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
  changed in later chained modules. Attempts to change it will be silently ignored.
  **Required:** No
  **Default value:**
  ```hcl
  [
    "default"
  ]
  ```
- `name` (`string`) optional
  ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
  This is the only ID element not also included as a `tag`.
  The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
  **Required:** No
  **Default value:** `null`
- `namespace` (`string`) optional
  ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique.
  **Required:** No
  **Default value:** `null`
- `regex_replace_chars` (`string`) optional
  Terraform regular expression (regex) string.
  Characters matching the regex will be removed from the ID elements.
  If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  **Required:** No
  **Default value:** `null`
- `stage` (`string`) optional
  ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'.
  **Required:** No
  **Default value:** `null`
- `tags` (`map(string)`) optional
  Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
  Neither the tag keys nor the tag values will be modified by this module.
  **Required:** No
  **Default value:** `{}`
- `tenant` (`string`) optional
  ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for.
  **Required:** No
  **Default value:** `null`
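
A common way to set these is the standard null-label pattern: declare a single label module and pass its `context` output through, rather than setting each variable individually. A minimal sketch (the label values and policy inputs are illustrative):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"
  stage     = "prod"
  name      = "spacelift"
}

module "policy" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-policy"

  policy_name = "enforce-tags"
  type        = "PLAN"
  space_id    = "root"
  body        = file("${path.module}/policies/enforce-tags.rego")

  # Inherit namespace/stage/name (and any tags) from the shared label;
  # non-null individual settings would override the context object.
  context = module.label.context
}
```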
## Outputs
- `id`
  The ID of the created policy.
- `policy`
  The created policy.
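
The `id` output is what Spacelift policy attachments expect; for instance, it can be passed to the `policy_ids` input of the `spacelift-stack` module documented later in this README. A sketch (other required stack inputs elided):

```hcl
module "stack" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stack"
  # ... required stack inputs (stack_name, repository, etc.) ...

  # Attach the policy created above to this stack
  policy_ids = [module.policy.id]
}
```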
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `http`, version: `>= 3.0`
- `spacelift`, version: `>= 0.1.31`
### Providers
- `http`, version: `>= 3.0`
- `spacelift`, version: `>= 0.1.31`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`spacelift_policy.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/policy) (resource)
## Data Sources
The following data sources are used by this module:
- [`http_http.this`](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) (data source)
---
# Module: `spacelift-space`
Terraform module to provision a [Spacelift](https://docs.spacelift.io/concepts/spaces/index.html) space.
## Usage
Here's how to invoke this module in your project:
```hcl
provider "spacelift" {}

module "space" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-space"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  space_name                   = "test"
  description                  = "A space for our test infrastructure"
  parent_space_id              = "root"
  inherit_entities_from_parent = false
  labels                       = ["test", "space"]
}
```
## Examples
Here is an example of using this module:
- [`../../examples/spacelift-space`](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/tree/master/examples/spacelift-space) - complete example of using this module
## Variables
### Required Variables
- `space_name` (`string`) required
  Name of the space.
### Optional Variables
- `description` (`string`) optional
  Description of the space.
  **Default value:** `null`
- `inherit_entities_from_parent` (`bool`) optional
  Flag to indicate whether this space inherits read access to entities from the parent space.
  **Default value:** `false`
- `labels` (`set(string)`) optional
  List of labels to add to the space.
  **Default value:** `[]`
- `parent_space_id` (`string`) optional
  ID of the parent space.
  **Default value:** `"root"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `additional_tag_map` (`map(string)`) optional
  Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
  This is for some rare cases where resources want additional configuration of tags
  and therefore take a list of maps with tag key, value, and additional configuration.
  **Required:** No
  **Default value:** `{}`
- `attributes` (`list(string)`) optional
  ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
  in the order they appear in the list. New attributes are appended to the
  end of the list. The elements of the list are joined by the `delimiter`
  and treated as a single ID element.
  **Required:** No
  **Default value:** `[]`
- `context` (`any`) optional
  Single object for setting the entire context at once.
  See the description of individual variables for details.
  Leave string and numeric variables as `null` to use their default value.
  Individual variable settings (non-null) override settings in the context object,
  except for `attributes`, `tags`, and `additional_tag_map`, which are merged.
  **Required:** No
  **Default value:**
  ```hcl
  {
    "additional_tag_map": {},
    "attributes": [],
    "delimiter": null,
    "descriptor_formats": {},
    "enabled": true,
    "environment": null,
    "id_length_limit": null,
    "label_key_case": null,
    "label_order": [],
    "label_value_case": null,
    "labels_as_tags": [
      "unset"
    ],
    "name": null,
    "namespace": null,
    "regex_replace_chars": null,
    "stage": null,
    "tags": {},
    "tenant": null
  }
  ```
- `delimiter` (`string`) optional
  Delimiter to be used between ID elements.
  Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  **Required:** No
  **Default value:** `null`
- `descriptor_formats` (`any`) optional
  Describe additional descriptors to be output in the `descriptors` output map.
  Map of maps. Keys are names of descriptors. Values are maps of the form
  `{ format = string, labels = list(string) }`.
  (Type is `any` so the map values can later be enhanced to provide additional options.)
  `format` is a Terraform format string to be passed to the `format()` function.
  `labels` is a list of labels, in order, to pass to the `format()` function.
  Label values will be normalized before being passed to `format()` so they will be
  identical to how they appear in `id`.
  Default is `{}` (the `descriptors` output will be empty).
  **Required:** No
  **Default value:** `{}`
- `enabled` (`bool`) optional
  Set to `false` to prevent the module from creating any resources.
  **Required:** No
  **Default value:** `null`
- `environment` (`string`) optional
  ID element. Usually used for region, e.g. 'uw2', 'us-west-2', OR role, e.g. 'prod', 'staging', 'dev', 'UAT'.
  **Required:** No
  **Default value:** `null`
- `id_length_limit` (`number`) optional
  Limit `id` to this many characters (minimum 6).
  Set to `0` for unlimited length.
  Set to `null` to keep the existing setting, which defaults to `0`.
  Does not affect `id_full`.
  **Required:** No
  **Default value:** `null`
- `label_key_case` (`string`) optional
  Controls the letter case of the `tags` keys (label names) for tags generated by this module.
  Does not affect keys of tags passed in via the `tags` input.
  Possible values: `lower`, `title`, `upper`.
  Default value: `title`.
  **Required:** No
  **Default value:** `null`
- `label_order` (`list(string)`) optional
  The order in which the labels (ID elements) appear in the `id`.
  Defaults to `["namespace", "environment", "stage", "name", "attributes"]`.
  You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
  **Required:** No
  **Default value:** `null`
- `label_value_case` (`string`) optional
  Controls the letter case of ID elements (labels) as included in `id`,
  set as tag values, and output by this module individually.
  Does not affect values of tags passed in via the `tags` input.
  Possible values: `lower`, `title`, `upper` and `none` (no transformation).
  Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
  Default value: `lower`.
  **Required:** No
  **Default value:** `null`
- `labels_as_tags` (`set(string)`) optional
  Set of labels (ID elements) to include as tags in the `tags` output.
  Default is to include all labels.
  Tags with empty values will not be included in the `tags` output.
  Set to `[]` to suppress all generated tags.
  **Notes:**
  The value of the `name` tag, if included, will be the `id`, not the `name`.
  Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
  changed in later chained modules. Attempts to change it will be silently ignored.
  **Required:** No
  **Default value:**
  ```hcl
  [
    "default"
  ]
  ```
- `name` (`string`) optional
  ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
  This is the only ID element not also included as a `tag`.
  The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
  **Required:** No
  **Default value:** `null`
- `namespace` (`string`) optional
  ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique.
  **Required:** No
  **Default value:** `null`
- `regex_replace_chars` (`string`) optional
  Terraform regular expression (regex) string.
  Characters matching the regex will be removed from the ID elements.
  If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  **Required:** No
  **Default value:** `null`
- `stage` (`string`) optional
  ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'.
  **Required:** No
  **Default value:** `null`
- `tags` (`map(string)`) optional
  Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
  Neither the tag keys nor the tag values will be modified by this module.
  **Required:** No
  **Default value:** `{}`
- `tenant` (`string`) optional
  ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for.
  **Required:** No
  **Default value:** `null`
## Outputs
- `space`
  The created space.
- `space_id`
  The ID of the created space.
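
The `space_id` output matches the `space_id` input expected by the `spacelift-policy` module above and the `spacelift-stack` module below, so spaces, policies, and stacks can be wired together. A minimal sketch (policy inputs are illustrative):

```hcl
module "space" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-space"

  space_name      = "test"
  parent_space_id = "root"
}

module "policy" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-policy"

  policy_name = "enforce-tags"
  type        = "PLAN"
  body        = file("${path.module}/policies/enforce-tags.rego")

  # Create the policy inside the space created above
  space_id = module.space.space_id
}
```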
## Dependencies
### Requirements
- `terraform`, version: `>= 1.0`
- `http`, version: `>= 3.0`
- `spacelift`, version: `>= 0.1.31`
### Providers
- `spacelift`, version: `>= 0.1.31`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`spacelift_space.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/space) (resource)
## Data Sources
This module uses no data sources.
---
# Module: `spacelift-stack`
Terraform module to provision a [Spacelift](https://docs.spacelift.io/concepts/stack/) stack.
## Usage
Here's how to invoke this module in your project:
```hcl
provider "spacelift" {}

module "stack" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stack"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  atmos_stack_name  = "plat-ue1-prod-test-component"
  stack_name        = "plat-ue1-prod-test-component"
  component_name    = "test-component"
  component_root    = "examples/test-component"
  repository        = "spacelift-demo"
  branch            = "main"
  autodeploy        = true
  terraform_version = "1.4.6"
}
```
## Examples
Here is an example of using this module:
- [`../../examples/spacelift-stack`](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/tree/master/examples/spacelift-stack) - complete example of using this module
## Variables
### Required Variables
- `atmos_stack_name` (`string`) required
  The name of the atmos stack.
- `component_name` (`string`) required
  The name of the concrete component (typically a directory name).
- `component_root` (`string`) required
  The path, relative to the root of the repository, where the component can be found.
- `repository` (`string`) required
  The name of your infrastructure repo.
- `stack_name` (`string`) required
  The name of the Spacelift stack.
### Optional Variables
- `administrative` (`bool`) optional
  Whether this stack can manage other stacks.
  **Default value:** `false`
- `after_apply` (`list(string)`) optional
  List of after-apply scripts.
  **Default value:** `[]`
- `after_destroy` (`list(string)`) optional
  List of after-destroy scripts.
  **Default value:** `[]`
- `after_init` (`list(string)`) optional
  List of after-init scripts.
  **Default value:** `[]`
- `after_perform` (`list(string)`) optional
  List of after-perform scripts.
  **Default value:** `[]`
- `after_plan` (`list(string)`) optional
  List of after-plan scripts.
  **Default value:** `[]`
- `autodeploy` (`bool`) optional
  Controls the Spacelift 'autodeploy' option for a stack.
  **Default value:** `false`
- `autoretry` (`bool`) optional
  Controls the Spacelift 'autoretry' option for a stack.
  **Default value:** `false`
- `aws_role_arn` (`string`) optional
  ARN of the AWS IAM role to assume and put its temporary credentials in the runtime environment.
  **Default value:** `null`
- `aws_role_enabled` (`bool`) optional
  Flag to enable/disable Spacelift using AWS STS to assume the supplied IAM role and put its temporary credentials in the runtime environment.
  **Default value:** `false`
- `aws_role_external_id` (`string`) optional
  Custom external ID (works only for private workers). See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html for more details.
  **Default value:** `null`
- `aws_role_generate_credentials_in_worker` (`bool`) optional
  Flag to enable/disable generating AWS credentials in the private worker after assuming the supplied IAM role.
  **Default value:** `true`
- `azure_devops` (`map(any)`) optional
  Azure DevOps VCS settings.
  **Default value:** `null`
- `before_apply` (`list(string)`) optional
  List of before-apply scripts.
  **Default value:** `[]`
- `before_destroy` (`list(string)`) optional
  List of before-destroy scripts.
  **Default value:** `[]`
- `before_init` (`list(string)`) optional
  List of before-init scripts.
  **Default value:** `[]`
- `before_perform` (`list(string)`) optional
  List of before-perform scripts.
  **Default value:** `[]`
- `before_plan` (`list(string)`) optional
  List of before-plan scripts.
  **Default value:** `[]`
- `bitbucket_cloud` (`map(any)`) optional
  Bitbucket Cloud VCS settings.
  **Default value:** `null`
- `bitbucket_datacenter` (`map(any)`) optional
  Bitbucket Datacenter VCS settings.
  **Default value:** `null`
- `branch` (`string`) optional
  Specify which branch to use within your infrastructure repo.
  **Default value:** `"main"`
- `cloudformation` (`map(any)`) optional
  CloudFormation-specific configuration. Presence means this stack is a CloudFormation stack.
  **Default value:** `null`
- `commit_sha` (`string`) optional
  The commit SHA for which to trigger a run. Requires `var.spacelift_run_enabled` to be set to `true`.
  **Default value:** `null`
- `component_env` (`any`) optional
  Map of component ENV variables.
  **Default value:** `{}`
- `component_vars` (`any`) optional
  All Terraform values to be applied to the stack via a mounted file.
  **Default value:** `{}`
- `context_attachments` (`list(string)`) optional
  A list of context IDs to attach to this stack.
  **Default value:** `[]`
- `description` (`string`) optional
  Description of the stack.
  **Default value:** `null`
- `drift_detection_enabled` (`bool`) optional
  Flag to enable/disable drift detection on the infrastructure stacks.
  **Default value:** `false`
- `drift_detection_reconcile` (`bool`) optional
  Flag to enable/disable automatic reconciliation of drift on the infrastructure stacks. If drift is detected and `reconcile` is turned on, Spacelift will create a tracked run to correct the drift.
  **Default value:** `false`
- `drift_detection_schedule` (`list(string)`) optional
  List of cron expressions to schedule drift detection for the infrastructure stacks (see the sketch after this list).
  **Default value:**
  ```hcl
  [
    "0 4 * * *"
  ]
  ```
- `drift_detection_timezone` (`string`) optional
  Timezone in which the schedule is expressed. Defaults to UTC.
  **Default value:** `"UTC"`
- `github_enterprise` (`map(any)`) optional
  GitHub Enterprise (self-hosted) VCS settings.
  **Default value:** `null`
- `gitlab` (`map(any)`) optional
  GitLab VCS settings.
  **Default value:** `null`
- `labels` (`list(string)`) optional
  A list of labels for the stack.
  **Default value:** `[]`
- `local_preview_enabled` (`bool`) optional
  Indicates whether local preview runs can be triggered on this stack.
  **Default value:** `false`
- `manage_state` (`bool`) optional
  Flag to enable/disable the `manage_state` setting on the stack.
  **Default value:** `true`
- `policy_ids` (`list(string)`) optional
  List of Rego policy IDs to attach to this stack.
  **Default value:** `[]`
- `protect_from_deletion` (`bool`) optional
  Flag to enable/disable deletion protection.
  **Default value:** `false`
- `pulumi` (`map(any)`) optional
  Pulumi-specific configuration. Presence means this stack is a Pulumi stack.
  **Default value:** `null`
- `runner_image` (`string`) optional
  The full image name and tag of the Docker image to use in Spacelift.
  **Default value:** `null`
- `showcase` (`map(any)`) optional
  Showcase settings.
  **Default value:** `null`
- `space_id` (`string`) optional
  Place the stack in the specified `space_id`.
  **Default value:** `"root"`
- `spacelift_run_enabled` (`bool`) optional
  Enable/disable creation of the `spacelift_run` resource.
  **Default value:** `false`
- `spacelift_stack_dependency_enabled` (`bool`) optional
  If enabled, the `spacelift_stack_dependency` Spacelift resource will be used to create dependencies between stacks instead of the `depends-on` labels. The `depends-on` labels will be removed from the stacks and the trigger policies for dependencies will be detached.
  **Default value:** `false`
- `stack_destructor_enabled` (`bool`) optional
  Flag to enable/disable the stack destructor, which destroys the resources of the stack before deleting the stack itself.
  **Default value:** `false`
- `terraform_smart_sanitization` (`bool`) optional
  Whether or not to enable [Smart Sanitization](https://docs.spacelift.io/vendors/terraform/resource-sanitization), which will only sanitize values marked as sensitive.
  **Default value:** `false`
- `terraform_version` (`string`) optional
  Specify the version of Terraform to use for the stack.
  **Default value:** `null`
- `terraform_workflow_tool` (`string`) optional
  Defines the tool that will be used to execute the workflow. This can be one of `OPEN_TOFU`, `TERRAFORM_FOSS` or `CUSTOM`. Defaults to `TERRAFORM_FOSS`.
  **Default value:** `"TERRAFORM_FOSS"`
- `terraform_workspace` (`string`) optional
  Specify the Terraform workspace to use for the stack.
  **Default value:** `null`
- `webhook_enabled` (`bool`) optional
  Flag to enable/disable the webhook endpoint to which Spacelift sends POST requests about run state changes.
  **Default value:** `false`
- `webhook_endpoint` (`string`) optional
  Webhook endpoint to which Spacelift sends POST requests about run state changes.
  **Default value:** `null`
- `webhook_secret` (`string`) optional
  Webhook secret used to sign each POST request so you can verify that the requests come from Spacelift.
  **Default value:** `null`
- `worker_pool_id` (`string`) optional
  The immutable ID (slug) of the worker pool.
  **Default value:** `null`
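
As referenced from `drift_detection_schedule` above, here is a sketch of enabling scheduled drift detection with automatic reconciliation (required inputs abbreviated to those shown in the Usage section; the schedule value is illustrative):

```hcl
module "stack" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stack"

  atmos_stack_name = "plat-ue1-prod-test-component"
  stack_name       = "plat-ue1-prod-test-component"
  component_name   = "test-component"
  component_root   = "examples/test-component"
  repository       = "spacelift-demo"

  # Run drift detection every 6 hours (cron, expressed in UTC) and
  # automatically create a tracked run to correct any detected drift
  drift_detection_enabled   = true
  drift_detection_reconcile = true
  drift_detection_schedule  = ["0 */6 * * *"]
  drift_detection_timezone  = "UTC"
}
```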
### Context Variables
The following variables are defined in the `context.tf` file of this module and are part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `additional_tag_map` (`map(string)`) optional
  Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
  This is for some rare cases where resources want additional configuration of tags
  and therefore take a list of maps with tag key, value, and additional configuration.
  **Required:** No
  **Default value:** `{}`
- `attributes` (`list(string)`) optional
  ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
  in the order they appear in the list. New attributes are appended to the
  end of the list. The elements of the list are joined by the `delimiter`
  and treated as a single ID element.
  **Required:** No
  **Default value:** `[]`
- `context` (`any`) optional
  Single object for setting the entire context at once.
  See the description of individual variables for details.
  Leave string and numeric variables as `null` to use their default value.
  Individual variable settings (non-null) override settings in the context object,
  except for `attributes`, `tags`, and `additional_tag_map`, which are merged.
  **Required:** No
  **Default value:**
  ```hcl
  {
    "additional_tag_map": {},
    "attributes": [],
    "delimiter": null,
    "descriptor_formats": {},
    "enabled": true,
    "environment": null,
    "id_length_limit": null,
    "label_key_case": null,
    "label_order": [],
    "label_value_case": null,
    "labels_as_tags": [
      "unset"
    ],
    "name": null,
    "namespace": null,
    "regex_replace_chars": null,
    "stage": null,
    "tags": {},
    "tenant": null
  }
  ```
- `delimiter` (`string`) optional
  Delimiter to be used between ID elements.
  Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  **Required:** No
  **Default value:** `null`
- `descriptor_formats` (`any`) optional
  Describe additional descriptors to be output in the `descriptors` output map.
  Map of maps. Keys are names of descriptors. Values are maps of the form
  `{ format = string, labels = list(string) }`.
  (Type is `any` so the map values can later be enhanced to provide additional options.)
  `format` is a Terraform format string to be passed to the `format()` function.
  `labels` is a list of labels, in order, to pass to the `format()` function.
  Label values will be normalized before being passed to `format()` so they will be
  identical to how they appear in `id`.
  Default is `{}` (the `descriptors` output will be empty).
  **Required:** No
  **Default value:** `{}`
- `enabled` (`bool`) optional
  Set to `false` to prevent the module from creating any resources.
  **Required:** No
  **Default value:** `null`
- `environment` (`string`) optional
  ID element. Usually used for region, e.g. 'uw2', 'us-west-2', OR role, e.g. 'prod', 'staging', 'dev', 'UAT'.
  **Required:** No
  **Default value:** `null`
- `id_length_limit` (`number`) optional
  Limit `id` to this many characters (minimum 6).
  Set to `0` for unlimited length.
  Set to `null` to keep the existing setting, which defaults to `0`.
  Does not affect `id_full`.
  **Required:** No
  **Default value:** `null`
- `label_key_case` (`string`) optional
  Controls the letter case of the `tags` keys (label names) for tags generated by this module.
  Does not affect keys of tags passed in via the `tags` input.
  Possible values: `lower`, `title`, `upper`.
  Default value: `title`.
  **Required:** No
  **Default value:** `null`
- `label_order` (`list(string)`) optional
  The order in which the labels (ID elements) appear in the `id`.
  Defaults to `["namespace", "environment", "stage", "name", "attributes"]`.
  You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
  **Required:** No
  **Default value:** `null`
- `label_value_case` (`string`) optional
  Controls the letter case of ID elements (labels) as included in `id`,
  set as tag values, and output by this module individually.
  Does not affect values of tags passed in via the `tags` input.
  Possible values: `lower`, `title`, `upper` and `none` (no transformation).
  Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
  Default value: `lower`.
  **Required:** No
  **Default value:** `null`
- `labels_as_tags` (`set(string)`) optional
  Set of labels (ID elements) to include as tags in the `tags` output.
  Default is to include all labels.
  Tags with empty values will not be included in the `tags` output.
  Set to `[]` to suppress all generated tags.
  **Notes:**
  The value of the `name` tag, if included, will be the `id`, not the `name`.
  Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
  changed in later chained modules. Attempts to change it will be silently ignored.
  **Required:** No
  **Default value:**
  ```hcl
  [
    "default"
  ]
  ```
- `name` (`string`) optional
  ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
  This is the only ID element not also included as a `tag`.
  The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
  **Required:** No
  **Default value:** `null`
- `namespace` (`string`) optional
  ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique.
  **Required:** No
  **Default value:** `null`
- `regex_replace_chars` (`string`) optional
  Terraform regular expression (regex) string.
  Characters matching the regex will be removed from the ID elements.
  If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  **Required:** No
  **Default value:** `null`
- `stage` (`string`) optional
  ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'.
  **Required:** No
  **Default value:** `null`
- `tags` (`map(string)`) optional
  Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
  Neither the tag keys nor the tag values will be modified by this module.
  **Required:** No
  **Default value:** `{}`
- `tenant` (`string`) optional
  ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for.
  **Required:** No
  **Default value:** `null`
## Outputs
- `id`
  The stack ID.
- `stack`
  The created stack.
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
- `spacelift`, version: `>= 0.1.31`
### Providers
- `spacelift`, version: `>= 0.1.31`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`spacelift_aws_role.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role) (resource)
- [`spacelift_context_attachment.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/context_attachment) (resource)
- [`spacelift_drift_detection.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/drift_detection) (resource)
- [`spacelift_environment_variable.component_env_vars`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/environment_variable) (resource)
- [`spacelift_environment_variable.component_name`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/environment_variable) (resource)
- [`spacelift_environment_variable.stack_name`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/environment_variable) (resource)
- [`spacelift_mounted_file.stack_config`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/mounted_file) (resource)
- [`spacelift_policy_attachment.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/policy_attachment) (resource)
- [`spacelift_run.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/run) (resource)
- [`spacelift_stack.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/stack) (resource)
- [`spacelift_stack_dependency.default`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/stack_dependency) (resource)
- [`spacelift_stack_destructor.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/stack_destructor) (resource)
- [`spacelift_webhook.this`](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/webhook) (resource)
## Data Sources
This module uses no data sources.
## Requirements
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 0.13.0 |
| [spacelift](#requirement\_spacelift) | >= 0.1.31 |
## Providers
| Name | Version |
|------|---------|
| [spacelift](#provider\_spacelift) | >= 0.1.31 |
## Modules
| Name | Source | Version |
|------|--------|---------|
| [this](#module\_this) | cloudposse/label/null | 0.25.0 |
## Resources
| Name | Type |
|------|------|
| [spacelift_aws_role.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/aws_role) | resource |
| [spacelift_context_attachment.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/context_attachment) | resource |
| [spacelift_drift_detection.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/drift_detection) | resource |
| [spacelift_environment_variable.component_env_vars](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/environment_variable) | resource |
| [spacelift_environment_variable.component_name](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/environment_variable) | resource |
| [spacelift_environment_variable.stack_name](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/environment_variable) | resource |
| [spacelift_mounted_file.stack_config](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/mounted_file) | resource |
| [spacelift_policy_attachment.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/policy_attachment) | resource |
| [spacelift_run.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/run) | resource |
| [spacelift_stack.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/stack) | resource |
| [spacelift_stack_dependency.default](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/stack_dependency) | resource |
| [spacelift_stack_destructor.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/stack_destructor) | resource |
| [spacelift_webhook.this](https://registry.terraform.io/providers/spacelift-io/spacelift/latest/docs/resources/webhook) | resource |
## Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [additional\_tag\_map](#input\_additional\_tag\_map) | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.This is for some rare cases where resources want additional configuration of tagsand therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
| [administrative](#input\_administrative) | Whether this stack can manage other stacks | `bool` | `false` | no |
| [after\_apply](#input\_after\_apply) | List of after-apply scripts | `list(string)` | `[]` | no |
| [after\_destroy](#input\_after\_destroy) | List of after-destroy scripts | `list(string)` | `[]` | no |
| [after\_init](#input\_after\_init) | List of after-init scripts | `list(string)` | `[]` | no |
| [after\_perform](#input\_after\_perform) | List of after-perform scripts | `list(string)` | `[]` | no |
| [after\_plan](#input\_after\_plan) | List of after-plan scripts | `list(string)` | `[]` | no |
| [atmos\_stack\_name](#input\_atmos\_stack\_name) | The name of the atmos stack | `string` | n/a | yes |
| [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,in the order they appear in the list. New attributes are appended to theend of the list. The elements of the list are joined by the `delimiter`and treated as a single ID element. | `list(string)` | `[]` | no |
| [autodeploy](#input\_autodeploy) | Controls the Spacelift 'autodeploy' option for a stack | `bool` | `false` | no |
| [autoretry](#input\_autoretry) | Controls the Spacelift 'autoretry' option for a stack | `bool` | `false` | no |
| [aws\_role\_arn](#input\_aws\_role\_arn) | ARN of the AWS IAM role to assume and put its temporary credentials in the runtime environment | `string` | `null` | no |
| [aws\_role\_enabled](#input\_aws\_role\_enabled) | Flag to enable/disable Spacelift to use AWS STS to assume the supplied IAM role and put its temporary credentials in the runtime environment | `bool` | `false` | no |
| [aws\_role\_external\_id](#input\_aws\_role\_external\_id) | Custom external ID (works only for private workers). See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html for more details | `string` | `null` | no |
| [aws\_role\_generate\_credentials\_in\_worker](#input\_aws\_role\_generate\_credentials\_in\_worker) | Flag to enable/disable generating AWS credentials in the private worker after assuming the supplied IAM role | `bool` | `true` | no |
| [azure\_devops](#input\_azure\_devops) | Azure DevOps VCS settings | `map(any)` | `null` | no |
| [before\_apply](#input\_before\_apply) | List of before-apply scripts | `list(string)` | `[]` | no |
| [before\_destroy](#input\_before\_destroy) | List of before-destroy scripts | `list(string)` | `[]` | no |
| [before\_init](#input\_before\_init) | List of before-init scripts | `list(string)` | `[]` | no |
| [before\_perform](#input\_before\_perform) | List of before-perform scripts | `list(string)` | `[]` | no |
| [before\_plan](#input\_before\_plan) | List of before-plan scripts | `list(string)` | `[]` | no |
| [bitbucket\_cloud](#input\_bitbucket\_cloud) | Bitbucket Cloud VCS settings | `map(any)` | `null` | no |
| [bitbucket\_datacenter](#input\_bitbucket\_datacenter) | Bitbucket Datacenter VCS settings | `map(any)` | `null` | no |
| [branch](#input\_branch) | Specify which branch to use within your infrastructure repo | `string` | `"main"` | no |
| [cloudformation](#input\_cloudformation) | CloudFormation-specific configuration. Presence means this Stack is a CloudFormation Stack. | `map(any)` | `null` | no |
| [commit\_sha](#input\_commit\_sha) | The commit SHA for which to trigger a run. Requires `var.spacelift_run_enabled` to be set to `true` | `string` | `null` | no |
| [component\_env](#input\_component\_env) | Map of component ENV variables | `any` | `{}` | no |
| [component\_name](#input\_component\_name) | The name of the concrete component (typically a directory name) | `string` | n/a | yes |
| [component\_root](#input\_component\_root) | The path, relative to the root of the repository, where the component can be found | `string` | n/a | yes |
| [component\_vars](#input\_component\_vars) | All Terraform values to be applied to the stack via a mounted file | `any` | `{}` | no |
| [context](#input\_context) | Single object for setting entire context at once.See description of individual variables for details.Leave string and numeric variables as `null` to use default value.Individual variable settings (non-null) override settings in context object,except for attributes, tags, and additional\_tag\_map, which are merged. | `any` | \{ "additional_tag_map": \{\}, "attributes": [], "delimiter": null, "descriptor_formats": \{\}, "enabled": true, "environment": null, "id_length_limit": null, "label_key_case": null, "label_order": [], "label_value_case": null, "labels_as_tags": [ "unset" ], "name": null, "namespace": null, "regex_replace_chars": null, "stage": null, "tags": \{\}, "tenant": null\}
| no |
| [context\_attachments](#input\_context\_attachments) | A list of context IDs to attach to this stack | `list(string)` | `[]` | no |
| [delimiter](#input\_delimiter) | Delimiter to be used between ID elements.Defaults to `-` (hyphen). Set to `""` to use no delimiter at all. | `string` | `null` | no |
| [description](#input\_description) | Specify description of stack | `string` | `null` | no |
| [descriptor\_formats](#input\_descriptor\_formats) | Describe additional descriptors to be output in the `descriptors` output map. Map of maps. Keys are names of descriptors. Values are maps of the form `{ format = string, labels = list(string) }`. (Type is `any` so the map values can later be enhanced to provide additional options.) `format` is a Terraform format string to be passed to the `format()` function. `labels` is a list of labels, in order, to pass to `format()` function. Label values will be normalized before being passed to `format()` so they will be identical to how they appear in `id`. Default is `{}` (`descriptors` output will be empty). | `any` | `{}` | no |
| [drift\_detection\_enabled](#input\_drift\_detection\_enabled) | Flag to enable/disable drift detection on the infrastructure stacks | `bool` | `false` | no |
| [drift\_detection\_reconcile](#input\_drift\_detection\_reconcile) | Flag to enable/disable infrastructure stacks drift automatic reconciliation. If drift is detected and `reconcile` is turned on, Spacelift will create a tracked run to correct the drift | `bool` | `false` | no |
| [drift\_detection\_schedule](#input\_drift\_detection\_schedule) | List of cron expressions to schedule drift detection for the infrastructure stacks | `list(string)` | `["0 4 * * *"]` | no |
| [drift\_detection\_timezone](#input\_drift\_detection\_timezone) | Timezone in which the schedule is expressed. Defaults to UTC. | `string` | `"UTC"` | no |
| [enabled](#input\_enabled) | Set to false to prevent the module from creating any resources | `bool` | `null` | no |
| [environment](#input\_environment) | ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' | `string` | `null` | no |
| [github\_enterprise](#input\_github\_enterprise) | GitHub Enterprise (self-hosted) VCS settings | `map(any)` | `null` | no |
| [gitlab](#input\_gitlab) | GitLab VCS settings | `map(any)` | `null` | no |
| [id\_length\_limit](#input\_id\_length\_limit) | Limit `id` to this many characters (minimum 6). Set to `0` for unlimited length. Set to `null` to keep the existing setting, which defaults to `0`. Does not affect `id_full`. | `number` | `null` | no |
| [label\_key\_case](#input\_label\_key\_case) | Controls the letter case of the `tags` keys (label names) for tags generated by this module. Does not affect keys of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper`. Default value: `title`. | `string` | `null` | no |
| [label\_order](#input\_label\_order) | The order in which the labels (ID elements) appear in the `id`. Defaults to ["namespace", "environment", "stage", "name", "attributes"]. You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present. | `list(string)` | `null` | no |
| [label\_value\_case](#input\_label\_value\_case) | Controls the letter case of ID elements (labels) as included in `id`, set as tag values, and output by this module individually. Does not affect values of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper` and `none` (no transformation). Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs. Default value: `lower`. | `string` | `null` | no |
| [labels](#input\_labels) | A list of labels for the stack | `list(string)` | `[]` | no |
| [labels\_as\_tags](#input\_labels\_as\_tags) | Set of labels (ID elements) to include as tags in the `tags` output. Default is to include all labels. Tags with empty values will not be included in the `tags` output. Set to `[]` to suppress all generated tags. **Notes:** The value of the `name` tag, if included, will be the `id`, not the `name`. Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be changed in later chained modules. Attempts to change it will be silently ignored. | `set(string)` | `["default"]` | no |
| [local\_preview\_enabled](#input\_local\_preview\_enabled) | Indicates whether local preview runs can be triggered on this Stack | `bool` | `false` | no |
| [manage\_state](#input\_manage\_state) | Flag to enable/disable manage\_state setting in stack | `bool` | `true` | no |
| [name](#input\_name) | ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'. This is the only ID element not also included as a `tag`. The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input. | `string` | `null` | no |
| [namespace](#input\_namespace) | ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique | `string` | `null` | no |
| [policy\_ids](#input\_policy\_ids) | List of Rego policy IDs to attach to this stack | `list(string)` | `[]` | no |
| [protect\_from\_deletion](#input\_protect\_from\_deletion) | Flag to enable/disable deletion protection. | `bool` | `false` | no |
| [pulumi](#input\_pulumi) | Pulumi-specific configuration. Presence means this Stack is a Pulumi Stack. | `map(any)` | `null` | no |
| [regex\_replace\_chars](#input\_regex\_replace\_chars) | Terraform regular expression (regex) string. Characters matching the regex will be removed from the ID elements. If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | `string` | `null` | no |
| [repository](#input\_repository) | The name of your infrastructure repo | `string` | n/a | yes |
| [runner\_image](#input\_runner\_image) | The full image name and tag of the Docker image to use in Spacelift | `string` | `null` | no |
| [showcase](#input\_showcase) | Showcase settings | `map(any)` | `null` | no |
| [space\_id](#input\_space\_id) | Place the stack in the specified space\_id. | `string` | `"root"` | no |
| [spacelift\_run\_enabled](#input\_spacelift\_run\_enabled) | Enable/disable creation of the `spacelift_run` resource | `bool` | `false` | no |
| [spacelift\_stack\_dependency\_enabled](#input\_spacelift\_stack\_dependency\_enabled) | If enabled, the `spacelift_stack_dependency` Spacelift resource will be used to create dependencies between stacks instead of using the `depends-on` labels. The `depends-on` labels will be removed from the stacks and the trigger policies for dependencies will be detached | `bool` | `false` | no |
| [stack\_destructor\_enabled](#input\_stack\_destructor\_enabled) | Flag to enable/disable the stack destructor to destroy the resources of the stack before deleting the stack itself | `bool` | `false` | no |
| [stack\_name](#input\_stack\_name) | The name of the Spacelift stack | `string` | n/a | yes |
| [stage](#input\_stage) | ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' | `string` | `null` | no |
| [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`). Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
| [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
| [terraform\_smart\_sanitization](#input\_terraform\_smart\_sanitization) | Whether or not to enable [Smart Sanitization](https://docs.spacelift.io/vendors/terraform/resource-sanitization) which will only sanitize values marked as sensitive. | `bool` | `false` | no |
| [terraform\_version](#input\_terraform\_version) | Specify the version of Terraform to use for the stack | `string` | `null` | no |
| [terraform\_workflow\_tool](#input\_terraform\_workflow\_tool) | Defines the tool that will be used to execute the workflow. This can be one of OPEN\_TOFU, TERRAFORM\_FOSS or CUSTOM. Defaults to TERRAFORM\_FOSS. | `string` | `"TERRAFORM_FOSS"` | no |
| [terraform\_workspace](#input\_terraform\_workspace) | Specify the Terraform workspace to use for the stack | `string` | `null` | no |
| [webhook\_enabled](#input\_webhook\_enabled) | Flag to enable/disable the webhook endpoint to which Spacelift sends the POST requests about run state changes | `bool` | `false` | no |
| [webhook\_endpoint](#input\_webhook\_endpoint) | Webhook endpoint to which Spacelift sends the POST requests about run state changes | `string` | `null` | no |
| [webhook\_secret](#input\_webhook\_secret) | Webhook secret used to sign each POST request so you're able to verify that the requests come from Spacelift | `string` | `null` | no |
| [worker\_pool\_id](#input\_worker\_pool\_id) | The immutable ID (slug) of the worker pool | `string` | `null` | no |
## Outputs
| Name | Description |
|------|-------------|
| [id](#output\_id) | The stack id |
| [stack](#output\_stack) | The created stack |
---
## spacelift-stacks-from-atmos-config
# Module: `spacelift-stacks-from-atmos-config`
Terraform module to extract the [Spacelift Stack](https://docs.spacelift.io/concepts/stack/) configuration from atmos
config. In addition, the results can be filtered by various criteria, such as tenant, environment, stack labels, etc.
## Usage
Here's how to invoke this module in your project:
```hcl
provider "spacelift" {}
module "spacelift_stacks" {
source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stacks-from-atmos-config"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
space_name = "test"
description = "A space for our test infrasturcture"
parent_space_id = "root"
inherit_entities_from_parent = false
labels = ["test", "space"]
```
## Examples
Here is an example of using this module:
- [`../../examples/spacelift-config-from-atmos-config`](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/tree/master/examples/spacelift-config-from-atmos-config) - complete example of using this module
## Variables
### Required Variables
- `context_filters` required
-
Context filters to output stacks matching specific criteria.
**Type:**
```hcl
object({
namespaces = optional(list(string), [])
environments = optional(list(string), [])
tenants = optional(list(string), [])
stages = optional(list(string), [])
tags = optional(map(string), {})
administrative = optional(bool)
root_administrative = optional(bool)
})
```
### Optional Variables
- `component_deps_processing_enabled` (`bool`) optional
-
Boolean flag to enable/disable processing stack config dependencies for the components in the provided stack
**Default value:** `true`
- `excluded_context_filters` optional
-
Context filters to exclude from stacks matching specific criteria of `var.context_filters` (see the combined sketch after this list).
**Type:**
```hcl
object({
namespaces = optional(list(string), [])
environments = optional(list(string), [])
tenants = optional(list(string), [])
stages = optional(list(string), [])
tags = optional(map(string), {})
})
```
**Default value:** `{ }`
- `imports_processing_enabled` (`bool`) optional
-
Enable/disable processing stack imports
**Default value:** `false`
- `stack_config_path_template` (`string`) optional
-
Stack config path template
**Default value:** `"stacks/%s.yaml"`
- `stack_deps_processing_enabled` (`bool`) optional
-
Boolean flag to enable/disable processing all stack dependencies in the provided stack
**Default value:** `false`
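As a quick illustration of how `context_filters` and `excluded_context_filters` compose (all filter values below are purely illustrative), the following sketch selects every stack for the `plat` tenant except those in the `dev` stage:
```hcl
module "spacelift_stacks" {
  source = "cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stacks-from-atmos-config"
  # version = "x.x.x"

  # Include all stacks for this tenant...
  context_filters = {
    tenants = ["plat"]
  }

  # ...but drop any of those stacks that are in the `dev` stage
  excluded_context_filters = {
    stages = ["dev"]
  }

  context = module.this.context
}
```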
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `additional_tag_map` (`map(string)`) optional
-
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
- `attributes` (`list(string)`) optional
-
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
- `context` (`any`) optional
-
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
- `delimiter` (`string`) optional
-
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
- `descriptor_formats` (`any`) optional
-
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
- `enabled` (`bool`) optional
-
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
- `environment` (`string`) optional
-
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
- `id_length_limit` (`number`) optional
-
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
- `label_key_case` (`string`) optional
-
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
- `label_order` (`list(string)`) optional
-
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
- `label_value_case` (`string`) optional
-
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
- `labels_as_tags` (`set(string)`) optional
-
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
- `name` (`string`) optional
-
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
- `namespace` (`string`) optional
-
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
- `regex_replace_chars` (`string`) optional
-
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
- `stage` (`string`) optional
-
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
- `tags` (`map(string)`) optional
-
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
- `tenant` (`string`) optional
-
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
- `spacelift_stacks`
-
Generated stacks
- `stacks`
-
Generated stacks
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`spacelift_config` | 1.8.0 | [`cloudposse/stack-config/yaml//modules/spacelift`](https://registry.terraform.io/modules/cloudposse/stack-config/yaml/modules/spacelift/1.8.0) | Convert infrastructure stacks from YAML configs into Spacelift stacks
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Requirements
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 0.13.0 |
## Providers
No providers.
## Modules
| Name | Source | Version |
|------|--------|---------|
| [spacelift\_config](#module\_spacelift\_config) | cloudposse/stack-config/yaml//modules/spacelift | 1.5.0 |
| [this](#module\_this) | cloudposse/label/null | 0.25.0 |
## Resources
No resources.
## Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [additional\_tag\_map](#input\_additional\_tag\_map) | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`. This is for some rare cases where resources want additional configuration of tags and therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
| [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`, in the order they appear in the list. New attributes are appended to the end of the list. The elements of the list are joined by the `delimiter` and treated as a single ID element. | `list(string)` | `[]` | no |
| [component\_deps\_processing\_enabled](#input\_component\_deps\_processing\_enabled) | Boolean flag to enable/disable processing stack config dependencies for the components in the provided stack | `bool` | `true` | no |
| [context](#input\_context) | Single object for setting entire context at once. See description of individual variables for details. Leave string and numeric variables as `null` to use default value. Individual variable settings (non-null) override settings in context object, except for attributes, tags, and additional\_tag\_map, which are merged. | `any` | \{ "additional_tag_map": \{\}, "attributes": [], "delimiter": null, "descriptor_formats": \{\}, "enabled": true, "environment": null, "id_length_limit": null, "label_key_case": null, "label_order": [], "label_value_case": null, "labels_as_tags": [ "unset" ], "name": null, "namespace": null, "regex_replace_chars": null, "stage": null, "tags": \{\}, "tenant": null\} | no |
| [context\_filters](#input\_context\_filters) | Context filters to output stacks matching specific criteria. | object(\{ namespaces = optional(list(string), []) environments = optional(list(string), []) tenants = optional(list(string), []) stages = optional(list(string), []) tags = optional(map(string), \{\}) administrative = optional(bool) root_administrative = optional(bool) \}) | n/a | yes |
| [delimiter](#input\_delimiter) | Delimiter to be used between ID elements. Defaults to `-` (hyphen). Set to `""` to use no delimiter at all. | `string` | `null` | no |
| [descriptor\_formats](#input\_descriptor\_formats) | Describe additional descriptors to be output in the `descriptors` output map. Map of maps. Keys are names of descriptors. Values are maps of the form `{ format = string, labels = list(string) }`. (Type is `any` so the map values can later be enhanced to provide additional options.) `format` is a Terraform format string to be passed to the `format()` function. `labels` is a list of labels, in order, to pass to `format()` function. Label values will be normalized before being passed to `format()` so they will be identical to how they appear in `id`. Default is `{}` (`descriptors` output will be empty). | `any` | `{}` | no |
| [enabled](#input\_enabled) | Set to false to prevent the module from creating any resources | `bool` | `null` | no |
| [environment](#input\_environment) | ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' | `string` | `null` | no |
| [excluded\_context\_filters](#input\_excluded\_context\_filters) | Context filters to exclude from stacks matching specific criteria of `var.context_filters`. | object(\{ namespaces = optional(list(string), []) environments = optional(list(string), []) tenants = optional(list(string), []) stages = optional(list(string), []) tags = optional(map(string), \{\}) \}) | `{}` | no |
| [id\_length\_limit](#input\_id\_length\_limit) | Limit `id` to this many characters (minimum 6). Set to `0` for unlimited length. Set to `null` to keep the existing setting, which defaults to `0`. Does not affect `id_full`. | `number` | `null` | no |
| [imports\_processing\_enabled](#input\_imports\_processing\_enabled) | Enable/disable processing stack imports | `bool` | `false` | no |
| [label\_key\_case](#input\_label\_key\_case) | Controls the letter case of the `tags` keys (label names) for tags generated by this module. Does not affect keys of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper`. Default value: `title`. | `string` | `null` | no |
| [label\_order](#input\_label\_order) | The order in which the labels (ID elements) appear in the `id`. Defaults to ["namespace", "environment", "stage", "name", "attributes"]. You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present. | `list(string)` | `null` | no |
| [label\_value\_case](#input\_label\_value\_case) | Controls the letter case of ID elements (labels) as included in `id`, set as tag values, and output by this module individually. Does not affect values of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper` and `none` (no transformation). Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs. Default value: `lower`. | `string` | `null` | no |
| [labels\_as\_tags](#input\_labels\_as\_tags) | Set of labels (ID elements) to include as tags in the `tags` output. Default is to include all labels. Tags with empty values will not be included in the `tags` output. Set to `[]` to suppress all generated tags. **Notes:** The value of the `name` tag, if included, will be the `id`, not the `name`. Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be changed in later chained modules. Attempts to change it will be silently ignored. | `set(string)` | `["default"]` | no |
| [name](#input\_name) | ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'. This is the only ID element not also included as a `tag`. The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input. | `string` | `null` | no |
| [namespace](#input\_namespace) | ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique | `string` | `null` | no |
| [regex\_replace\_chars](#input\_regex\_replace\_chars) | Terraform regular expression (regex) string. Characters matching the regex will be removed from the ID elements. If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | `string` | `null` | no |
| [stack\_config\_path\_template](#input\_stack\_config\_path\_template) | Stack config path template | `string` | `"stacks/%s.yaml"` | no |
| [stack\_deps\_processing\_enabled](#input\_stack\_deps\_processing\_enabled) | Boolean flag to enable/disable processing all stack dependencies in the provided stack | `bool` | `false` | no |
| [stage](#input\_stage) | ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' | `string` | `null` | no |
| [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`). Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
| [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
## Outputs
| Name | Description |
|------|-------------|
| [spacelift\_stacks](#output\_spacelift\_stacks) | Generated stacks |
| [stacks](#output\_stacks) | Generated stacks |
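The generated stacks are exposed as module outputs. Assuming the `stacks` output is a map keyed by stack name (an assumption about its shape), a downstream configuration might enumerate the generated stacks like this sketch:
```hcl
output "generated_stack_names" {
  # Illustrative: expose just the names of the generated Spacelift stacks
  value = keys(module.spacelift_stacks.stacks)
}
```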
---
## Spacelift
Explore modules designed to manage Spacelift configurations and workflows with Terraform. These modules enhance your continuous integration and delivery pipelines.
---
## label
# Module: `label`
## Deprecated
This module was an experimental fork and is now obsolete and will not be maintained.
Any projects using `terraform-terraform-label` are encouraged to switch to using
[terraform-null-label](https://github.com/cloudposse/terraform-null-label),
which is actively maintained and used by all current Cloud Posse Terraform modules.
This module was a fork of [terraform-null-label](https://github.com/cloudposse/terraform-null-label), made
at a time when that project was using the Terraform `null` provider (hence the "null" in the name), in order
to remove the `null` provider dependency. This was accomplished by removing outputs that required the `null`
provider.
With the features that became available in Terraform 0.12, the `terraform-null-label` project was able
to retain all of its features and also
[remove the `null` provider](https://github.com/cloudposse/terraform-null-label/commit/d6d24b80d687e76e029f39f444d0163b42c5d5e0),
removing any incentive to further develop `terraform-terraform-label`.
With the key distinguishing feature of `terraform-terraform-label` no longer being a distinguishing feature,
this module was no longer necessary, and all focus returned to maintaining and enhancing `terraform-null-label`,
which now far surpasses this module in functionality.
### Historical Description
Terraform module designed to generate consistent label names and tags for resources. Use `terraform-terraform-label` to implement a strict naming convention.
#### `terraform-terraform-label` is a fork of [terraform-null-label](https://github.com/cloudposse/terraform-null-label) which uses only the core Terraform provider.
A label follows this convention: `{namespace}-{stage}-{name}-{attributes}`. The delimiter (e.g. `-`) is interchangeable.
It's recommended to use one `terraform-terraform-label` module for every unique resource of a given resource type.
For example, if you have 10 instances, there should be 10 different labels.
However, if you have multiple different kinds of resources (e.g. instances, security groups, file systems, and elastic IPs), then they can all share the same label assuming they are logically related.
All [Cloud Posse modules](https://github.com/cloudposse?utf8=%E2%9C%93&q=terraform-&type=&language=) use
the related [terraform-null-label](https://github.com/cloudposse/terraform-null-label) module to ensure resources can be instantiated multiple times within an account and without conflict.
**NOTE:** The second `terraform` word in `terraform-terraform-label` refers to the primary Terraform provider used in this module.
## Usage
### Simple Example
Include this repository as a module in your existing terraform code:
```hcl
module "eg_prod_bastion_label" {
source = "cloudposse/label/terraform"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
namespace = "eg"
stage = "prod"
name = "bastion"
attributes = ["public"]
delimiter = "-"
tags = {
"BusinessUnit" = "XYZ",
"Snapshot" = "true"
}
}
```
This will create an `id` with the value of `eg-prod-bastion-public`.
Now reference the label when creating an instance (for example):
```hcl
resource "aws_instance" "eg_prod_bastion_public" {
instance_type = "t1.micro"
tags = module.eg_prod_bastion_label.tags
}
```
Or define a security group:
```hcl
resource "aws_security_group" "eg_prod_bastion_public" {
vpc_id = var.vpc_id
name = module.eg_prod_bastion_label.id
tags = module.eg_prod_bastion_label.tags
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
```
### Advanced Example
Here is a more complex example with two instances using two different labels. Note how efficiently the tags are defined for both the instance and the security group.
```hcl
module "eg_prod_bastion_abc_label" {
source = "cloudposse/label/terraform"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
namespace = "eg"
stage = "prod"
name = "bastion"
attributes = ["abc"]
delimiter = "-"
tags = {
"BusinessUnit" = "ABC"
}
}
resource "aws_security_group" "eg_prod_bastion_abc" {
name = module.eg_prod_bastion_abc_label.id
tags = module.eg_prod_bastion_abc_label.tags
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "eg_prod_bastion_abc" {
instance_type = "t1.micro"
tags = module.eg_prod_bastion_abc_label.tags
vpc_security_group_ids = [aws_security_group.eg_prod_bastion_abc.id]
}
module "eg_prod_bastion_xyz_label" {
source = "cloudposse/label/terraform"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
namespace = "eg"
stage = "prod"
name = "bastion"
attributes = ["xyz"]
delimiter = "-"
tags = {
"BusinessUnit" = "XYZ"
}
}
resource "aws_security_group" "eg_prod_bastion_xyz" {
name = module.eg_prod_bastion_xyz_label.id
tags = module.eg_prod_bastion_xyz_label.tags
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "eg_prod_bastion_xyz" {
instance_type = "t1.micro"
tags = module.eg_prod_bastion_xyz_label.tags
vpc_security_group_ids = [aws_security_group.eg_prod_bastion_xyz.id]
}
```
## Variables
### Required Variables
### Optional Variables
- `convert_case` (`bool`) optional
-
Convert fields to lower case
**Default value:** `true`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `attributes` (`list(string)`) optional
-
Additional attributes (e.g. `1`)
**Required:** No
**Default value:** `[ ]`
- `delimiter` (`string`) optional
-
Delimiter to be used between `namespace`, `stage`, `name` and `attributes`
**Required:** No
**Default value:** `"-"`
- `enabled` (`bool`) optional
-
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `true`
- `name` (`string`) optional
-
Solution name, e.g. `app` or `jenkins`
**Required:** No
**Default value:** `""`
- `namespace` (`string`) optional
-
Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'
**Required:** No
**Default value:** `""`
- `stage` (`string`) optional
-
Stage, e.g. 'prod', 'staging', 'dev'
**Required:** No
**Default value:** `""`
- `tags` (`map(string)`) optional
-
Additional tags (e.g. `map('BusinessUnit','XYZ')`)
**Required:** No
**Default value:** `{ }`
## Outputs
- `attributes`
-
Normalized attributes
- `delimiter`
-
Delimiter between `namespace`, `stage`, `name` and `attributes`
- `id`
-
Disambiguated ID
- `name`
-
Normalized name
- `namespace`
-
Normalized namespace
- `stage`
-
Normalized stage
- `tags`
-
Normalized Tag map
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
---
## Terraform
Access core Terraform modules that provide foundational resources and configurations. These modules form the basis of infrastructure management using Terraform.
---
## ssh-key-pair
# Module: `ssh-key-pair`
Terraform module for generating an SSH public key file.
## Usage
```hcl
module "ssh_key_pair" {
source = "cloudposse/ssh-key-pair/tls"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
namespace = "eg"
stage = "test"
name = "app"
ssh_public_key_path = "/secrets"
private_key_extension = ".pem"
public_key_extension = ".pub"
chmod_command = "chmod 600 %v"
}
```
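The outputs documented below can then be consumed by other resources. For example (a sketch, assuming the AWS provider is configured and that you want the key registered with EC2), the generated public key could feed an `aws_key_pair` resource:
```hcl
resource "aws_key_pair" "app" {
  # `key_name` and `public_key` are outputs of this module
  key_name   = module.ssh_key_pair.key_name
  public_key = module.ssh_key_pair.public_key
}
```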
## Variables
### Required Variables
- `ssh_public_key_path` (`string`) required
-
Path to SSH public key directory (e.g. `/secrets`)
### Optional Variables
- `chmod_command` (`string`) optional
-
Template of the command executed on the private key file
**Default value:** `"chmod 600 %v"`
- `private_key_extension` (`string`) optional
-
Private key extension
**Default value:** `""`
- `private_key_output_enabled` (`bool`) optional
-
Add the private key as a terraform output private_key
**Default value:** `false`
- `public_key_extension` (`string`) optional
-
Public key extension
**Default value:** `".pub"`
- `ssh_key_algorithm` (`string`) optional
-
SSH key algorithm
**Default value:** `"RSA"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `additional_tag_map` (`map(string)`) optional
-
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
- `attributes` (`list(string)`) optional
-
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
- `context` (`any`) optional
-
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
- `delimiter` (`string`) optional
-
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
- `descriptor_formats` (`any`) optional
-
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
- `enabled` (`bool`) optional
-
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
- `environment` (`string`) optional
-
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
- `id_length_limit` (`number`) optional
-
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
- `label_key_case` (`string`) optional
-
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
- `label_order` (`list(string)`) optional
-
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
- `label_value_case` (`string`) optional
-
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
- `labels_as_tags` (`set(string)`) optional
-
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
- `name` (`string`) optional
-
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
- `namespace` (`string`) optional
-
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
- `regex_replace_chars` (`string`) optional
-
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
- `stage` (`string`) optional
-
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
- `tags` (`map(string)`) optional
-
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
- `tenant` (`string`) optional
-
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
- `key_name`
-
Name of SSH key
- `private_key`
-
Content of the generated private key
- `public_key`
-
Content of the generated public key
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
- `local`, version: `>= 1.3`
- `null`, version: `>= 2.1`
- `tls`, version: `>= 2.0`
### Providers
- `local`, version: `>= 1.3`
- `null`, version: `>= 2.1`
- `tls`, version: `>= 2.0`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Resources
The following resources are used by this module:
- [`local_file.private_key_pem`](https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file) (resource)
- [`local_file.public_key_openssh`](https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file) (resource)
- [`null_resource.chmod`](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) (resource)
- [`tls_private_key.default`](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) (resource)
## Data Sources
No data sources are used by this module.
---
## TLS
Use our Terraform modules to manage TLS certificates and encryption settings. These modules ensure secure communication within your infrastructure.
---
## config
# Module: `config`
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps.
## Introduction
The module accepts paths to local and remote YAML configuration template files
and converts the templates into Terraform lists and maps for consumption in other Terraform modules.
The module can accept a map of parameters for interpolation within the YAML config templates.
The module also supports a top-level `import` attribute in map configuration templates, which will include the referenced files and perform a deep merge.
Up to 10 levels of import hierarchy are supported, and all imported maps are deep merged into a final configuration map.
For example, if you have a config file like this (e.g. `myconfig.yaml`):
```yaml
import:
- file1
- file2
```
Then, this module will deep merge `file1.yaml` and `file2.yaml` into `myconfig.yaml`.
__Note:__ Do not include the extensions (e.g. `.yaml`) in the imports.
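A minimal invocation that loads `myconfig.yaml` (and, through its `import` list, deep merges `file1.yaml` and `file2.yaml`) might look like this sketch; the base path is an assumption about where the files live:
```hcl
module "yaml_config" {
  source = "cloudposse/config/yaml"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  # Directory assumed to contain myconfig.yaml, file1.yaml, and file2.yaml
  map_config_local_base_path = "./config"
  map_config_paths           = ["myconfig.yaml"]

  context = module.this.context
}
```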
### Attributions
Big thanks to [Imperative Systems Inc.](https://github.com/Imperative-Systems-Inc)
for the excellent [deepmerge](https://github.com/Imperative-Systems-Inc/terraform-modules/tree/master/deepmerge) Terraform module
to perform a deep map merge of standard Terraform maps and objects.
## Usage
For a complete example, see [examples/complete](https://github.com/cloudposse/terraform-yaml-config/tree/main/examples/complete).
For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest)
(which tests and deploys the example on Datadog), see [test](https://github.com/cloudposse/terraform-yaml-config/tree/main/test).
For an example of using local config maps with `import` and deep merging into a final configuration map, see [examples/imports-local](https://github.com/cloudposse/terraform-yaml-config/tree/main/examples/imports-local).
For an example of using remote config maps with `import` and deep merging into a final configuration map, see [examples/imports-remote](https://github.com/cloudposse/terraform-yaml-config/tree/main/examples/imports-remote).
## Examples
### Example of local and remote maps and lists configurations with interpolation parameters
```hcl
module "yaml_config" {
source = "cloudposse/config/yaml"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
map_config_local_base_path = "./config"
map_config_paths = [
"map-configs/*.yaml",
"https://raw.githubusercontent.com/cloudposse/terraform-opsgenie-incident-management/master/examples/config/resources/services.yaml",
"https://raw.githubusercontent.com/cloudposse/terraform-opsgenie-incident-management/master/examples/config/resources/team_routing_rules.yaml"
]
list_config_local_base_path = "./config"
list_config_paths = [
"list-configs/*.yaml",
"https://raw.githubusercontent.com/cloudposse/terraform-aws-service-control-policies/master/examples/complete/policies/organization-policies.yaml"
]
parameters = {
param1 = "1"
param2 = "2"
}
context = module.this.context
}
```
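The rendered configurations are then available via the module outputs documented below; for instance, a downstream configuration might reference them like this (illustrative local names):
```hcl
locals {
  # Terraform maps produced from the map-type YAML configs
  service_configs = module.yaml_config.map_configs

  # Terraform lists produced from the list-type YAML configs
  policy_documents = module.yaml_config.list_configs
}
```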
### Example of local maps configurations with `import` and deep merging
In the example, we use two levels of imports,
and the module deep merges the local config files `imports-level-3.yaml`, `imports-level-2.yaml`, and `imports-level-1.yaml`
into a final config map.
See [examples/imports-local](https://github.com/cloudposse/terraform-yaml-config/tree/main/examples/imports-local) for more details.
```hcl
module "yaml_config" {
source = "cloudposse/config/yaml"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
map_config_local_base_path = "./config"
map_config_paths = [
"imports-level-1.yaml"
]
context = module.this.context
}
```
### Example of remote maps configurations with `import` and deep merging
In the example, we use two levels of imports,
and the module deep merges the remote config files `globals.yaml`, `ue2-globals.yaml`, and `ue2-prod.yaml`
into a final config map.
See [examples/imports-remote](https://github.com/cloudposse/terraform-yaml-config/tree/main/examples/imports-remote) for more details.
```hcl
module "yaml_config" {
source = "cloudposse/config/yaml"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
map_config_remote_base_path = "https://raw.githubusercontent.com/cloudposse/atmos/master/example/stacks"
map_config_paths = [
"https://raw.githubusercontent.com/cloudposse/atmos/master/example/stacks/ue2-prod.yaml"
]
context = module.this.context
}
```
## Variables
### Required Variables
### Optional Variables
- `append_list_enabled` (`bool`) optional
-
A boolean flag to enable/disable appending lists instead of overwriting them.
**Default value:** `false`
- `deep_copy_list_enabled` (`bool`) optional
-
A boolean flag to enable/disable merging of list elements one by one.
**Default value:** `false`
- `list_config_local_base_path` (`string`) optional
-
Base path to local YAML configuration files of list type
**Default value:** `""`
- `list_config_paths` (`list(string)`) optional
-
Paths to YAML configuration files of list type
**Default value:** `[ ]`
- `list_config_remote_base_path` (`string`) optional
-
Base path to remote YAML configuration files of list type
**Default value:** `""`
- `map_config_local_base_path` (`string`) optional
-
Base path to local YAML configuration files of map type
**Default value:** `""`
- `map_config_paths` (`list(string)`) optional
-
Paths to YAML configuration files of map type
**Default value:** `[ ]`
- `map_config_remote_base_path` (`string`) optional
-
Base path to remote YAML configuration files of map type
**Default value:** `""`
- `map_configs` (`any`) optional
-
List of existing configurations of map type. Deep-merging of the existing map configs takes precedence over the map configs loaded from YAML files
**Default value:** `[ ]`
- `parameters` (`map(string)`) optional
-
Map of parameters for interpolation within the YAML config templates
**Default value:** `{ }`
- `remote_config_selector` (`string`) optional
-
String to detect local vs. remote config paths
**Default value:** `"://"`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `additional_tag_map` (`map(string)`) optional
-
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
- `attributes` (`list(string)`) optional
-
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
- `context` (`any`) optional
-
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
- `delimiter` (`string`) optional
-
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
- `descriptor_formats` (`any`) optional
-
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
- `enabled` (`bool`) optional
-
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
- `environment` (`string`) optional
-
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
- `id_length_limit` (`number`) optional
-
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
- `label_key_case` (`string`) optional
-
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
- `label_order` (`list(string)`) optional
-
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
- `label_value_case` (`string`) optional
-
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
- `labels_as_tags` (`set(string)`) optional
-
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
- `name` (`string`) optional
-
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
- `namespace` (`string`) optional
-
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
- `regex_replace_chars` (`string`) optional
-
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
- `stage` (`string`) optional
-
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
- `tags` (`map(string)`) optional
-
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
- `tenant` (`string`) optional
-
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
- `all_imports_list`
-
List of all imported YAML configurations
- `all_imports_map`
-
Map of all imported YAML configurations
- `list_configs`
-
Terraform lists from YAML configurations
- `map_configs`
-
Terraform maps from YAML configurations
## Dependencies
### Requirements
- `terraform`, version: `>= 0.13.0`
- `http`, version: `>= 2.0`
- `local`, version: `>= 1.3`
- `template`, version: `>= 2.2`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`maps_deepmerge` | latest | `./modules/deepmerge` | n/a
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
`yaml_config_1` | latest | `./modules/yaml-config` | n/a
`yaml_config_10` | latest | `./modules/yaml-config` | n/a
`yaml_config_2` | latest | `./modules/yaml-config` | n/a
`yaml_config_3` | latest | `./modules/yaml-config` | n/a
`yaml_config_4` | latest | `./modules/yaml-config` | n/a
`yaml_config_5` | latest | `./modules/yaml-config` | n/a
`yaml_config_6` | latest | `./modules/yaml-config` | n/a
`yaml_config_7` | latest | `./modules/yaml-config` | n/a
`yaml_config_8` | latest | `./modules/yaml-config` | n/a
`yaml_config_9` | latest | `./modules/yaml-config` | n/a
---
## stack-config
# Module: `stack-config`
Terraform module that loads and processes an opinionated ["stack" configuration](#examples) from YAML sources
using the [`terraform-provider-utils`](https://github.com/cloudposse/terraform-provider-utils) Terraform provider.
It supports deep-merged variables, settings, ENV variables, backend config, remote state, and [Spacelift](https://spacelift.io/) stacks config outputs for Terraform and helmfile components.
## Introduction
The module is composed of the following sub-modules:
- [vars](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/vars) - accepts stack configuration and returns deep-merged variables for a Terraform or helmfile component.
- [settings](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/settings) - accepts stack configuration and returns deep-merged settings for a Terraform or helmfile component.
- [env](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/env) - accepts stack configuration and returns deep-merged ENV variables for a Terraform or helmfile component.
- [backend](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/backend) - accepts stack configuration and returns backend config for a Terraform component.
- [remote-state](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/remote-state) - accepts stack configuration and returns remote state outputs for a Terraform component.
The module supports `s3` and `remote` (Terraform Cloud) backends.
- [spacelift](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/spacelift) - accepts infrastructure stack configuration and transforms it into Spacelift stacks.
## Usage
For a complete example, see [examples/complete](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/complete).
For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest),
see [test](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/test).
For an example of how to configure remote state for Terraform components in YAML config files and then read the components' outputs from the remote state,
see [examples/remote-state](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/remote-state).
For an example of how to process `vars`, `settings`, `env` and `backend` configurations for all Terraform and helmfile components for a list of stacks,
see [examples/stacks](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/stacks).
## Examples
Here's an example of a stack configuration file:
```yaml
import:
  - ue2-globals
vars:
  stage: dev
terraform:
  vars: {}
helmfile:
  vars: {}
components:
  terraform:
    vpc:
      backend:
        s3:
          workspace_key_prefix: "vpc"
      vars:
        cidr_block: "10.102.0.0/18"
    eks:
      backend:
        s3:
          workspace_key_prefix: "eks"
      vars: {}
  helmfile:
    nginx-ingress:
      vars:
        installed: true
```
The `import` section refers to other stack configurations that are automatically deep merged.
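To build intuition for what "deep merged" means here, the following illustrative HCL sketch (with invented data) contrasts Terraform's shallow `merge()` with the deep-merge behavior the stack config processing provides:
```hcl
locals {
  # Invented data for illustration only.
  imported  = { vars = { stage = "dev", test_map = { a = "a", b = "b" } } }
  component = { vars = { test_map = { a = "a_override" } } }

  # Terraform's built-in merge() is shallow: local.component.vars replaces
  # local.imported.vars wholesale, so stage and test_map.b are lost.
  shallow = merge(local.imported, local.component)

  # Deep merging (what this module does) instead merges nested maps key by
  # key, yielding:
  #   { vars = { stage = "dev", test_map = { a = "a_override", b = "b" } } }
}
```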
### Complete example
This example loads the stack config `my-stack` (which in turn imports other YAML config dependencies)
and returns variables and backend config for the Terraform component `my-vpc` from the stack `my-stack`.
```hcl
module "vars" {
source = "cloudposse/stack-config/yaml//modules/vars"
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component_type = "terraform"
component = "my-vpc"
context = module.this.context
}
module "backend" {
source = "cloudposse/stack-config/yaml//modules/backend"
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component_type = "terraform"
component = "my-vpc"
context = module.this.context
}
module "settings" {
source = "cloudposse/stack-config/yaml//modules/settings"
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component_type = "terraform"
component = "my-vpc"
context = module.this.context
}
module "env" {
source = "cloudposse/stack-config/yaml//modules/env"
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component_type = "terraform"
component = "my-vpc"
context = module.this.context
}
```
The example returns the following deep-merged `vars`, `settings`, `env`, and `backend` configurations for the `my-vpc` Terraform component:
```hcl
backend = {
  "acl"                  = "bucket-owner-full-control"
  "bucket"               = "eg-ue2-root-tfstate"
  "dynamodb_table"       = "eg-ue2-root-tfstate-lock"
  "encrypt"              = true
  "key"                  = "terraform.tfstate"
  "region"               = "us-east-2"
  "role_arn"             = "arn:aws:iam::xxxxxxxxxxxx:role/eg-gbl-root-terraform"
  "workspace_key_prefix" = "vpc"
}
backend_type   = "s3"
base_component = "vpc"
env = {
  "ENV_TEST_1" = "test1_override"
  "ENV_TEST_2" = "test2_override"
  "ENV_TEST_3" = "test3"
  "ENV_TEST_4" = "test4"
}
settings = {
  "spacelift" = {
    "autodeploy" = true
    "branch"     = "test"
    "triggers" = [
      "1",
      "2",
    ]
    "workspace_enabled" = true
  }
  "version" = 1
}
vars = {
  "availability_zones" = [
    "us-east-2a",
    "us-east-2b",
    "us-east-2c",
  ]
  "cidr_block"          = "10.132.0.0/18"
  "environment"         = "ue2"
  "level"               = 3
  "namespace"           = "eg"
  "param"               = "param4"
  "region"              = "us-east-2"
  "stage"               = "prod"
  "subnet_type_tag_key" = "example/subnet/type"
  "test_map" = {
    "a" = "a_override_2"
    "b" = "b_override"
    "c" = [
      1,
      2,
      3,
    ]
    "map2" = {
      "atr1" = 1
      "atr2" = 2
      "atr3" = [
        "3a",
        "3b",
        "3c",
      ]
    }
  }
  "var_1" = "1_override"
  "var_2" = "2_override"
  "var_3" = "3a"
}
```
See [examples/complete](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/complete) for more details.
### Remote state example
This example accepts a stack config `my-stack` (which in turn imports other YAML config dependencies)
and returns remote state outputs from the `s3` backend for `my-vpc` and `eks` Terraform components.
__NOTE:__ The backend type (`s3`) and backend configuration for the components are defined in the stack YAML config files.
```hcl
module "remote_state_my_vpc" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component = "my-vpc"
}
module "remote_state_eks" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component = "eks"
}
```
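To consume the fetched outputs, reference the module's `outputs` map. A hedged sketch follows; `vpc_id` is a hypothetical output of the `my-vpc` component, and the exact names should be verified against the module's documented outputs:
```hcl
locals {
  # The remote state outputs of the `my-vpc` component.
  my_vpc_id = module.remote_state_my_vpc.outputs.vpc_id # hypothetical output name
}
```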
See [examples/remote-state](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/remote-state) for more details.
## Variables
### Required Variables
- `stack_config_local_path` (`string`) required
-
Path to local stack configs
- `stacks` (`list(string)`) required
-
A list of infrastructure stack names
### Optional Variables
- `component_deps_processing_enabled` (`bool`) optional
-
Boolean flag to enable/disable processing stack config dependencies for the components in the provided stack
**Default value:** `false`
- `stack_deps_processing_enabled` (`bool`) optional
-
Boolean flag to enable/disable processing all stack dependencies in the provided stack
**Default value:** `false`
### Context Variables
The following variables are defined in the `context.tf` file of this module and part of the [terraform-null-label](https://registry.terraform.io/modules/cloudposse/label/null) pattern.
- `additional_tag_map` (`map(string)`) optional
-
Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
**Required:** No
**Default value:** `{ }`
- `attributes` (`list(string)`) optional
-
ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element.
**Required:** No
**Default value:** `[ ]`
- `context` (`any`) optional
-
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
**Required:** No
**Default value:**
```hcl
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
```
- `delimiter` (`string`) optional
-
Delimiter to be used between ID elements.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
**Required:** No
**Default value:** `null`
- `descriptor_formats` (`any`) optional
-
Describe additional descriptors to be output in the `descriptors` output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
`\{
format = string
labels = list(string)
\}`
(Type is `any` so the map values can later be enhanced to provide additional options.)
`format` is a Terraform format string to be passed to the `format()` function.
`labels` is a list of labels, in order, to pass to `format()` function.
Label values will be normalized before being passed to `format()` so they will be
identical to how they appear in `id`.
Default is `{}` (`descriptors` output will be empty).
**Required:** No
**Default value:** `{ }`
- `enabled` (`bool`) optional
-
Set to false to prevent the module from creating any resources
**Required:** No
**Default value:** `null`
- `environment` (`string`) optional
-
ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'
**Required:** No
**Default value:** `null`
- `id_length_limit` (`number`) optional
-
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` to keep the existing setting, which defaults to `0`.
Does not affect `id_full`.
**Required:** No
**Default value:** `null`
- `label_key_case` (`string`) optional
-
Controls the letter case of the `tags` keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
**Required:** No
**Default value:** `null`
- `label_order` (`list(string)`) optional
-
The order in which the labels (ID elements) appear in the `id`.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
**Required:** No
**Default value:** `null`
- `label_value_case` (`string`) optional
-
Controls the letter case of ID elements (labels) as included in `id`,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the `tags` input.
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.
Default value: `lower`.
**Required:** No
**Default value:** `null`
- `labels_as_tags` (`set(string)`) optional
-
Set of labels (ID elements) to include as tags in the `tags` output.
Default is to include all labels.
Tags with empty values will not be included in the `tags` output.
Set to `[]` to suppress all generated tags.
**Notes:**
The value of the `name` tag, if included, will be the `id`, not the `name`.
Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
**Required:** No
**Default value:**
```hcl
[
"default"
]
```
- `name` (`string`) optional
-
ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a `tag`.
The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input.
**Required:** No
**Default value:** `null`
- `namespace` (`string`) optional
-
ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique
**Required:** No
**Default value:** `null`
- `regex_replace_chars` (`string`) optional
-
Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
**Required:** No
**Default value:** `null`
- `stage` (`string`) optional
-
ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'
**Required:** No
**Default value:** `null`
- `tags` (`map(string)`) optional
-
Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module.
**Required:** No
**Default value:** `{ }`
- `tenant` (`string`) optional
-
ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for
**Required:** No
**Default value:** `null`
## Outputs
- `config`
-
Stack configurations
## Dependencies
### Requirements
- `terraform`, version: `>= 0.14.0`
- `external`, version: `>= 2.0`
- `local`, version: `>= 1.3`
- `utils`, version: `>= 1.7.1`
### Providers
- `utils`, version: `>= 1.7.1`
### Modules
Name | Version | Source | Description
--- | --- | --- | ---
`this` | 0.25.0 | [`cloudposse/label/null`](https://registry.terraform.io/modules/cloudposse/label/null/0.25.0) | n/a
## Data Sources
The following data sources are used by this module:
- [`utils_stack_config_yaml.config`](https://registry.terraform.io/providers/cloudposse/utils/latest/docs/data-sources/stack_config_yaml) (data source)
---
## backend
Terraform module that accepts stack configuration and returns backend config for a component.
## Usage
The following example loads the stack config `my-stack` (which in turn imports other YAML config dependencies)
and returns the backend config for the component `my-vpc`.
```hcl
module "backend" {
source = "cloudposse/stack-config/yaml//modules/backend"
# version = "x.x.x"
stack = "my-stack"
component = "my-vpc"
context = module.this.context
}
```
The example returns the following `backend` configuration:
```hcl
backend_type = "s3"
backend = {
  "acl"                  = "bucket-owner-full-control"
  "bucket"               = "eg-ue2-root-tfstate"
  "dynamodb_table"       = "eg-ue2-root-tfstate-lock"
  "encrypt"              = true
  "key"                  = "terraform.tfstate"
  "region"               = "us-east-2"
  "role_arn"             = "arn:aws:iam::xxxxxxxxxxxx:role/eg-gbl-root-terraform"
  "workspace_key_prefix" = "vpc"
}
```
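One way to use the returned values (a sketch, not part of this module; it assumes the `backend` and `backend_type` outputs shown above) is to render them into a `backend.tf.json` file with the `local_file` resource:
```hcl
resource "local_file" "backend" {
  filename = "${path.module}/backend.tf.json"

  # Render the generated backend configuration as Terraform JSON syntax.
  content = jsonencode({
    terraform = {
      backend = {
        (module.backend.backend_type) = module.backend.backend
      }
    }
  })
}
```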
See [examples/complete](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/complete) for more details.
---
## env
Terraform module that accepts stack configuration and returns deep-merged ENV variables for a Terraform or helmfile component.
## Usage
The following example loads the stack config `my-stack` (which in turn imports other YAML config dependencies)
and returns ENV variables for the Terraform component `my-vpc`.
```hcl
module "env" {
  source = "cloudposse/stack-config/yaml//modules/env"
  # version = "x.x.x"

  stack_config_local_path = "./stacks"
  stack                   = "my-stack"
  component_type          = "terraform"
  component               = "my-vpc"

  context = module.this.context
}
```
See [examples/complete](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/complete) for more details.
---
## remote-state
Terraform module that accepts a component and a stack name and returns remote state outputs for the component.
The module supports all backends supported by Terraform and OpenTofu, plus the Atmos-specific `static` backend.
### Errors
:::note
If you experience an error from the `terraform_remote_state` data source,
this is most likely not an error in this module, but rather an error in the
`remote_state` configuration in the referenced stack. This module performs
no validation on the remote state configuration, and only modifies the configuration
for the `remote` backend (to set the workspace name) and,
_only when `var.privileged` is set to `true`_, the `s3` configuration (to remove
settings for assuming a role). If `var.privileged` is left at the default value of `false`
and you are not using the `remote` backend, then this module will not modify the backend
configuration in any way.
:::
### "Local" Backend
:::important
If the local backend is configured with a relative path, that path is resolved
relative to the current working directory. At lookup time, this is usually the
directory of the root module referencing the remote state; but when the state
was originally written, it was the directory of the target root module. If
those directories differ, the same relative path points to different locations,
and the lookup fails.
:::
For example, if your directory structure looks like this:
```text
project
├── components
│   ├── client
│   │   └── main.tf
│   └── complex
│       └── source
│           └── main.tf
└── local-state
    └── complex
        └── source
            └── terraform.tfstate
```
Terraform code in `project/components/complex/source` can create its local state
file (`terraform.tfstate`) in the `local-state/complex/source`
directory using `path = "../../../local-state/complex/source/terraform.tfstate"`.
However, Terraform code in `project/components/client` that references the same
local state using the same backend configuration will fail because the current
working directory is `project/components/client` and the relative path will not
resolve correctly.
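A hedged sketch of this failure mode: both root modules reference the same state file, but each must express the path relative to its own directory.
```hcl
# In project/components/complex/source/main.tf (where the state is written),
# the path resolves from that directory up to project/local-state/...:
terraform {
  backend "local" {
    path = "../../../local-state/complex/source/terraform.tfstate"
  }
}

# In project/components/client/main.tf (where the state is read), reusing the
# writer's path verbatim would resolve outside the project. The client needs
# a path relative to its own directory instead:
data "terraform_remote_state" "source" {
  backend = "local"
  config = {
    path = "../../local-state/complex/source/terraform.tfstate"
  }
}
```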
## Usage
The following example accepts a stack config `my-stack` (which in turn imports other YAML config dependencies)
and returns remote state outputs from the `s3` backend for `my-vpc` and `eks` Terraform components.
__NOTE:__ The backend type (`s3`) and backend configuration for the components are defined in the stack YAML config files.
```hcl
module "remote_state_my_vpc" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
stack = "my-stack"
component = "my-vpc"
}
module "remote_state_eks" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
stack = "my-stack"
component = "eks"
}
```
See [examples/remote-state](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/remote-state) for more details.
---
## settings
Terraform module that accepts stack configuration and returns deep-merged settings for a Terraform or helmfile component.
## Usage
The following example loads the stack config `my-stack` (which in turn imports other YAML config dependencies)
and returns settings for the Terraform component `my-vpc`.
```hcl
module "settings" {
  source = "cloudposse/stack-config/yaml//modules/settings"
  # version = "x.x.x"

  stack_config_local_path = "./stacks"
  stack                   = "my-stack"
  component_type          = "terraform"
  component               = "my-vpc"

  context = module.this.context
}
```
See [examples/complete](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/complete) for more details.
---
## spacelift
Terraform module that accepts infrastructure stack configurations and transforms them into Spacelift stacks.
## Usage
The following example loads the infrastructure YAML stack configs and returns Spacelift stack configurations:
```hcl
module "spacelift" {
source = "../../modules/spacelift"
stack_config_path_template = "stacks/%s.yaml"
component_deps_processing_enabled = true
context = module.this.context
}
```
See [examples/spacelift](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/spacelift) for more details.
---
## stack
Terraform module that constructs stack names.
If `var.stack` is specified, it will be returned as is.
If not specified, the output will be calculated from the provided `context`.
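A minimal usage sketch (the `//modules/stack` source path is assumed from the repository layout; check the module's documented inputs and outputs):
```hcl
module "stack" {
  source = "cloudposse/stack-config/yaml//modules/stack"
  # version = "x.x.x"

  # Returned as-is because it is set; omit it to derive the stack name
  # from the context instead.
  stack   = "my-stack"
  context = module.this.context
}
```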
---
## vars
Terraform module that accepts stack configuration and returns deep-merged variables for a Terraform or helmfile component.
## Usage
The following example loads the stack config `my-stack` (which in turn imports other YAML config dependencies)
and returns variables and backend config for the Terraform component `my-vpc`.
```hcl
module "vars" {
source = "cloudposse/stack-config/yaml//modules/vars"
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component_type = "terraform"
component = "my-vpc"
context = module.this.context
}
module "backend" {
source = "cloudposse/stack-config/yaml//modules/backend"
# version = "x.x.x"
stack_config_local_path = "./stacks"
stack = "my-stack"
component_type = "terraform"
component = "my-vpc"
context = module.this.context
}
```
The example returns the following `vars` and `backend` configurations for the `my-vpc` Terraform component in the `my-stack` stack:
```hcl
vars = {
  "availability_zones" = [
    "us-east-2a",
    "us-east-2b",
    "us-east-2c",
  ]
  "cidr_block"          = "10.132.0.0/18"
  "environment"         = "ue2"
  "level"               = 3
  "namespace"           = "eg"
  "param"               = "param4"
  "region"              = "us-east-2"
  "stage"               = "prod"
  "subnet_type_tag_key" = "example/subnet/type"
  "test_map" = {
    "a" = "a_override_2"
    "b" = "b_override"
    "c" = [
      1,
      2,
      3,
    ]
    "map2" = {
      "atr1" = 1
      "atr2" = 2
      "atr3" = [
        "3a",
        "3b",
        "3c",
      ]
    }
  }
  "var_1" = "1_override"
  "var_2" = "2_override"
  "var_3" = "3a"
}
backend_type = "s3"
backend = {
  "acl"                  = "bucket-owner-full-control"
  "bucket"               = "eg-ue2-root-tfstate"
  "dynamodb_table"       = "eg-ue2-root-tfstate-lock"
  "encrypt"              = true
  "key"                  = "terraform.tfstate"
  "region"               = "us-east-2"
  "role_arn"             = "arn:aws:iam::xxxxxxxxxxxx:role/eg-gbl-root-terraform"
  "workspace_key_prefix" = "vpc"
}
```
See [examples/complete](https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/complete) for more details.
---
## YAML
Utilize these Terraform modules to manage and generate YAML configurations. These modules help in organizing and maintaining YAML files within your infrastructure as code practices.
---
## Terraform Modules
This is a collection of reusable Terraform modules. In this library, you'll find real-world examples of how we've implemented them.
---
## Action Items
To get a head start on your infrastructure as code journey, we recommend completing the following action items while you wait on Cloud Posse to deliver your configurations. These steps will help you set up your environment and prepare for the provisioning process.
## Prepare a New AWS Organization (root account)
We recommend that you start with a new AWS Organization (e.g. a new payer account). As part of the provisioning process, you will be terraforming your entire organization, creating 12-plus accounts, and building everything from the ground up. You will be configuring SSO, fine-grained IAM roles, and more, all with Terraform. We recommend a net-new Organization, so you do not jeopardize any of your current production operations.
Create a new AWS root account and add the root credentials to 1Password.
## Create GitHub Repository
Create a new repository in your GitHub organization that you will use as your Infrastructure as Code repository.
## AWS IAM Identity Center (AWS SSO)
In order to connect your chosen IdP to AWS IAM Identity Center (AWS SSO), you will need to configure your provider and create a metadata file. Please follow the relevant linked guide and complete the steps for your Identity Provider.
- [GSuite](https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/)
- [Office 365](/layers/identity/aws-sso/#configure-your-identity-provider)
- [JumpCloud](https://jumpcloud.com/support/integrate-with-aws-iam-identity-center)
- [Other AWS supported IdPs: Azure AD, CyberArk, Okta, OneLogin, Ping Identity](https://docs.aws.amazon.com/singlesignon/latest/userguide/supported-idps.html)
- GSuite does not automatically sync Users and Groups with AWS Identity Center without additional configuration! If using GSuite as an IdP, consider deploying the [ssosync tool](https://github.com/awslabs/ssosync).
- The official AWS documentation for setting up JumpCloud with AWS IAM Identity Center is not accurate. Instead, please refer to the [JumpCloud official documentation](https://jumpcloud.com/support/integrate-with-aws-iam-identity-center)
## Configure AWS SAML (Optional)
If deploying AWS SAML as an alternative to AWS SSO, you will need a separate configuration and metadata file. Again, please refer to the relevant linked guide.
- [GSuite](https://aws.amazon.com/blogs/desktop-and-application-streaming/setting-up-g-suite-saml-2-0-federation-with-amazon-appstream-2-0/): Follow Steps 1 through 7. This document refers to Appstream, but the process will be the same for AWS.
- [Office 365](/layers/identity/tutorials/how-to-setup-saml-login-to-aws-from-office-365)
- [JumpCloud](https://support.jumpcloud.com/support/s/article/getting-started-applications-saml-sso2)
- [Okta](https://help.okta.com/en-us/Content/Topics/DeploymentGuides/AWS/aws-configure-identity-provider.htm)
## Purchase Domains (Optional)
If you plan to use the `core-dns` account to register domains, a credit card must be added directly to that individual account. When the account is ready, add one to the `core-dns` account following the [AWS documentation](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cc.html#Add-cc).
## Start Implementing the Reference Architecture
Now that you're armed with everything you need, you can start implementing the reference architecture.
We recommend starting with the [Foundation Layer](/layers/foundation) and working your way up from there.
## GitHub Actions
### Self Hosted Github Runners on EKS
If you are deploying the Actions Runner Controller solution for Self-Hosted Github Runners, please generate the required secrets following the
[GitHub Action Runner Controller setup docs](/layers/github-actions/eks-github-actions-controller/#requirements).
### Self Hosted Github Runners with Philips Labs (ECS)
If you have chosen ECS as a platform, we recommend deploying Philips Labs GitHub Action Runners. Please read through the [Philips Labs GitHub Action Runners Setup Requirements](/layers/github-actions/philips-labs-github-runners#requirements).
In particular, you will need a new GitHub App including a Private Key, an App ID, and an App Installation ID. We recommend that you store these secrets in 1Password.
### Atmos Component Updater Requirements
The Atmos component updater GitHub Action will automatically suggest pull requests in your new repository when new versions of Atmos components are available.
If you plan to leverage it, you will need to create and install a GitHub App and allow GitHub Actions to create and approve pull requests within your GitHub Organization. For more on the Atmos Component Updater, see [atmos.tools](https://atmos.tools/integrations/github-actions/component-updater).
### Create and install a GitHub App for Atmos
1. Create a new GitHub App
2. Name this new app whatever you prefer. For example, `Atmos Component Updater`.
3. List a Homepage URL of your choosing. This is required by GitHub, but you can use any URL. For example use our documentation page: `https://atmos.tools/integrations/github-actions/component-updater/`
4. (Optional) Add an icon for your new app (example provided below)
5. Assign only the following Repository permissions:
```diff
+ Contents: Read and write
+ Pull Requests: Read and write
+ Metadata: Read-only
```
6. Generate a new private key [following the GitHub documentation](https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/managing-private-keys-for-github-apps#generating-private-keys).
7. Save both the App ID and the new private key in 1Password
Feel free to download and use our Atmos icon with your GitHub App!

### Allow GitHub Actions to create and approve pull requests
1. Go to `https://github.com/organizations/YOUR_ORG/settings/actions`
2. Check "Allow GitHub Actions to create and approve pull requests"
### Create `atmos` GitHub Environment
We recommend creating a new GitHub environment for Atmos. With environments, the Atmos Component Updater workflow will be required to follow any branch protection rules before running or accessing the environment's secrets. Plus, GitHub natively organizes these Deployments separately in the GitHub UI.
1. Open "Settings" for your repository
1. Navigate to "Environments"
1. Select "New environment"
1. Name the new environment, "atmos".
1. In the drop-down next to "Deployment branches and tags", select "Protected branches only"
1. In "Environment secrets", create the two required secrets for App ID and App Private Key created above and in 1Password. This will be accessed from GitHub Actions with `secrets.ATMOS_APP_ID` and `secrets.ATMOS_PRIVATE_KEY` respectively.
## Optional Integrations
The reference architecture supports multiple integrations. Depending on your requirements, you may need a few subscriptions set up. Please subscribe only to the services you plan to use!
### Datadog
Sign up for Datadog following the [How to Sign Up for Datadog?](/layers/monitoring/datadog/tutorials/how-to-sign-up-for-datadog) documentation.
---
## Quickstart FAQ
### What is the difference between a Service Discovery Domain and a Vanity Domain?
This is an extremely common question. Please see [What is the difference between a Vanity and a Service Domain?](/layers/network/faq/#what-is-the-difference-between-a-vanity-and-a-service-domain)
### Do we have to use 1Password?
No, you can use whichever password manager you prefer. For Cloud Posse engagements, we use 1Password exclusively to share secrets.
### Do we have to create a new Organization?
Yes! We recommend registering for a new AWS Organization. You will be terraforming your entire organization, creating 12-plus accounts, and doing everything from the ground up. You'll be configuring SSO, fine-grained IAM roles, and more. We recommend a net-new Organization, so you do not jeopardize any of your current production operations.
### How many email addresses do we need to create?
Only one email with `+` addressing is required. This email will be used to create your AWS accounts. For example, `aws+%s@acme.com`.
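To illustrate, the `%s` is replaced with each account's name, so a single format string yields a unique address per account. A quick Terraform sketch (account names invented):
```hcl
locals {
  account_email_format = "aws+%s@acme.com"
  accounts             = ["root", "audit", "dns", "prod"] # invented account names

  # => { root = "aws+root@acme.com", audit = "aws+audit@acme.com", ... }
  account_emails = { for name in local.accounts : name => format(local.account_email_format, name) }
}
```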
### What is plus email addressing?
Plus email addressing, also known as plus addressing or subaddressing, is a feature offered by some email providers that allows users to create multiple variations of their email address by adding a "+" sign and a unique identifier after their username and before the "@" symbol.
For example, if the email address is "john.doe@example.com", a user can create variations such as "john.doe+newsletter@example.com" or "john.doe+work@example.com". Emails sent to these variations will still be delivered to the original email address, but the unique identifier can be used to filter or organize incoming emails.
### How can we customize our architecture?
Customizations are typically out of scope, but we can assess each on a case-by-case basis.
You will learn your environment and gain the confidence to make customizations on your own.
Often we can deploy an example of the customization, but it's up to you to implement the full deployment.
### What if we need more help?
Cloud Posse offers multiple levels of support designed to fit your needs and budget. For more information, see [Support](/support).
For everything else, we provide fanatical support via our Professional Services, exclusive to reference architecture customers. We can help with anything from architecture reviews and security audits to custom development and migrations.
Please [book a call](https://cloudposse.com/meet) to discuss your needs.
---
## Kick Off with Cloud Posse
The kickoff call for [Quickstarts](/intro/path) ensures you get a smooth start
with clear expectations. During the call, we will confirm your design
decisions. After the call, you'll be ready to start on some of the action items,
and you'll receive all the configurations customized to your requirements
in about 2-3 days.
- **Review Design Decisions** Confirm your requirements and answer any questions you may have
- **Cover Next Steps** Review what to expect after the call and what you need to get started
- **Introduce Support Options** in case you get stuck
## Preparing for the Kickoff Meeting
This document outlines what to expect from your first call with Cloud Posse. In order to make the most of this meeting, please read through this document and come prepared with questions. In particular, please review the following:
1. Decide who will lead the project in your organization
2. Ensure everyone who needs to be on the call is added to the invitation
3. Read through the [Design Decisions](#-review-design-decisions) and prepare questions and decisions
4. Review the list of [Actions Items](#action-items) following this call
---
## Kickoff Meeting Agenda
The following is the agenda for our kickoff call with you. We'll schedule this call as soon as you've purchased a [Quickstart](/intro/path) and submitted your [Design Decisions](#-review-design-decisions).
### Introductions
Here we will review who is on the call, what their roles are, and identify our technical point of contact at Cloud Posse. We will also review the working timezones of the teams.
### Project Overview
After your kickoff call with Cloud Posse and receiving the configurations customized to your design decisions, you will be ready to begin deploying your infrastructure, starting with the [foundation](/layers/foundation). The Reference Architecture is a collection of best practices for building secure, scalable, and highly available infrastructure on AWS, and it is constantly evolving as we learn from our customers and the community.
You will deploy your infrastructure in _layers_. These layers are designed to manage collections of deliverables and will be a mix of generated content from a private reference, vendored Terraform from open source libraries, and any customization for your Organization. Because we are delivering an entire infrastructure repository, these initial PRs will be massive; a complete infrastructure setup requires dozens of components, each with Terraform modules, configuration, account setup, and documentation. You are welcome to follow along, but we do not intend for your team to be required to review these massive PRs. Cloud Posse reviews them extensively internally to ensure that the final product works as intended.
Before you begin provisioning any layer, we recommend watching its accompanying video, so you'll be better equipped to understand the solution. These videos explain the problems we faced, the tradeoffs we considered, and, at a high level, how we solved them.
**Need a hand?** Our [Essential Support](/support/essential) provides weekly guidance—whether it's answering questions or troubleshooting issues. We're here to help!
### Introduce Shared Customer Workshops
Workshops are held twice weekly on Zoom ([Essential Support Customers Only](/support/essential)):
- Thursdays, 7:00-7:30A PT / 9:00-9:30A CT / 10:00-10:30A ET
- Wednesdays, 2:30-3:00P PT / 4:30-5:00P CT / 5:30-6:00P ET
This is a great opportunity to get your questions answered and to get help with your project.
### Sign up for Community Office Hours
> **When:** Wednesdays, 11:30a-12:30p PT/ 1:30p-2:30p CT/ 2:30p-3:30p ET
> **Where:** Zoom
> **Who:** Anyone (open to the public!)
This is a good way to keep up with the latest developments and trends in the DevOps community.
Sign up at [cloudposse.com/office-hours](https://cloudposse.com/office-hours/)
### Join our SweetOps Slack Community
If you are looking for a community of like-minded DevOps practitioners, we invite you to join our [SweetOps Slack](https://slack.sweetops.com/).
### Review Design Decisions
Review the foundational Design Decisions.
- [ ] [Decide on Terraform Version](/layers/project/design-decisions/decide-on-terraform-version)
- [ ] [Decide on Namespace Abbreviation](/layers/project/design-decisions/decide-on-namespace-abbreviation)
- [ ] [Decide on Infrastructure Repository Name](/layers/project/design-decisions/decide-on-infrastructure-repository-name)
- [ ] [Decide on Email Address Format for AWS Accounts](/layers/accounts/design-decisions/decide-on-email-address-format-for-aws-accounts)
- [ ] [Decide on IdP](/layers/identity/design-decisions/decide-on-idp)
- [ ] [Decide on IdP Integration Method](/layers/identity/design-decisions/decide-on-idp-integration)
- [ ] [Decide on Primary AWS Region and Secondary AWS Region](/layers/network/design-decisions/decide-on-primary-aws-region)
- [ ] [Decide on CIDR Allocation Strategy](/layers/network/design-decisions/decide-on-cidr-allocation)
- [ ] [Decide on Service Discovery Domain](/layers/network/design-decisions/decide-on-service-discovery-domain)
- [ ] [Decide on Vanity Domain](/layers/network/design-decisions/decide-on-vanity-branded-domain)
These are the design decisions you can customize as part of the Quickstart package. All other decisions are pre-made for you, but you’re welcome to review them. If you’d like to make additional changes, [let us know—we’re happy to provide a quote](https://cloudposse.com/meet).
## How to Succeed
Cloud Posse has noticed several patterns that lead to successful projects.
### Come to Calls Prepared
Review six-pagers and documentation before our calls. This will help you know what questions to ask. Coming unprepared leads to a lot of questions and back-and-forth, which slows down the pace of the material and leaves less time for new topics.
### Take Initiative
The most successful customers take initiative to make customizations to their Reference Architecture. This is a great way to make the Reference Architecture your own. It also helps to build a deeper understanding of the Reference Architecture and how it works.
### Cameras On
We recommend that all participants have their cameras on during our Zoom calls. This helps to build trust and rapport. It also helps to keep everyone engaged and focused. This also lets us gauge how everyone is understanding the material. If you are having trouble understanding something, please ask questions.
### Ask Questions
We encourage you to ask questions. We want to make sure that everyone understands the material. We also want to make sure that we are providing the right level of detail. Our meetings are intended to be interactive and encourage conversation. Please feel free to interject at any time if you have a question or a comment to add to the discussion.
### Participate in our Slack Community
We encourage you to participate in [our public Slack channels](https://cloudposse.com/slack). This is a great way to get help and to learn from others. We have a lot of customers who have been through the same process and can provide valuable insights. We also have a lot of Cloud Posse engineers who are available to help answer questions.
### Attend Weekly Office Hours
Our free weekly [Community Office Hours](#community-office-hours) are a great opportunity to ask questions and get help.
### Read our Documentation
You can always find how-to guides, design decisions, and other helpful pages at [docs.cloudposse.com](/)
### Take the Next Step
Don't wait! Keep the momentum going by taking the next step. If you have questions, ask them. If you need help, ask for it. We are here to help you succeed.
After our kickoff call, there are several action items for you to consider
based on your goals. Not every item may be relevant, but please review them
and take action on the ones that apply to you.
---
## Quickstart
This documentation will guide you through the end-to-end configuration of our reference architecture for your AWS organization. You can customize everything to suit your needs and implement it at your own pace. Many have completed the process in under a week, and we provide [tons of options for support](/support) if you get stuck.
## Get the Quickstart Configurations
All of this documentation refers to the prebuilt configurations we provide as part of the **Quickstart Package**, which is how we fund our open-source efforts. This is optional. You are welcome to follow along with the documentation and implement your configurations from scratch, but the Quickstart Package **will save you a lot of time**.
And guess what? It's just a one-time fee.
## Prepare Your Design Decisions
Design Decisions are how we customize the Quickstart Configurations to your needs. We'll send you a form to fill out with your requirements.
## Schedule your Kickoff Call with Cloud Posse
Once you've submitted your Design Decisions, we'll schedule a call to review them with you.
This is an opportunity to review them with Cloud Posse, and ask any questions before you get started.
## Start Building
After the call, you'll receive all the configurations customized to your requirements in about 2-3 days.
You'll use these configurations together with our documentation to get started with your first project.
## Stuck? Try Our Support
If you need assistance, we provide [multiple support options](/support) that fit your needs and budget, with direct access to Cloud Posse to help you out.
---
## Cloud Posse Documentation License
This material may only be distributed subject to the terms and conditions set forth in the *[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/)* or later with the restrictions noted below (the latest version of the license is presently available at [Creative Commons v4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)).
- **Attribution** You must attribute the work in the manner specified by the author or licensor.
- **Noncommercial** The licensor permits others to copy, distribute and transmit the work. In return, licensees may not use the work for commercial purposes — unless they get the licensor's permission.
- **Share Alike** The licensor permits others to distribute derivative works only under the same license or one compatible with the one that governs the licensor's work.
## Copyright
Copyright 2017-2025 © Cloud Posse, LLC.
## Distribution
Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from the copyright holder.
Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.
## Trademarks
All other trademarks referenced herein are the property of their respective owners.
:::important
This documentation is provided (and copyrighted) by Cloud Posse, LLC and is released via the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
The copyright holder has added the further requirement that Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.
:::
---
## Infrastructure as Code Library
Welcome to the Cloud Posse Library, your one-stop resource for all our open source infrastructure projects. Whether you're looking to deploy production-grade infrastructure, automate your workflows, or learn best practices, you'll find everything you need here. Our library is built on years of experience managing cloud infrastructure at scale, and we're excited to share these tools with you.
The Cloud Posse Library is a comprehensive collection of our open source projects designed to help you build, deploy, and manage cloud infrastructure. This library contains everything you need to implement Infrastructure as Code (IaC) best practices, including:
- **Terraform Components**: Production-grade, reusable "root" modules for AWS that provide complete solutions for common infrastructure patterns
- **Terraform Modules**: Reusable "child" modules that can be composed together to build custom infrastructure
- **GitHub Actions**: Automated workflows for continuous integration and delivery (CI/CD)
- **Resources**: Additional tools, scripts, and documentation to support your cloud infrastructure journey
All our projects are built with best practices in mind, including security, scalability, and maintainability. They are designed to be modular, composable, and follow the principle of least privilege.
---
## Adopted Architecture Decision Records
This is a collection of architectural decisions that have been adopted.
---
## Use API Gateway REST API vs HTTP API
:::info Needs Update!
The content in this ADR may be out-of-date and in need of an update. For questions, please reach out to Cloud Posse.
:::
## Status
**DECIDED**
## Problem
When using the [AWS API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html), users must choose between creating a REST API or an HTTP API.
## Context
AWS supports multiple types of API Gateways and there are tradeoffs between them. This document will help decide which flavor of API Gateway is suitable for your use case.
### Differences
| **Category/Feature** | **HTTP API** (formerly known as v2) | **REST API** (formerly known as v1) |
| ---------------------------------------------------- | ---------------------------------------------------------------- | ----------------------------------- |
| **Authorizers** | | |
| Cognito | Partial Support. Cognito can be used only as a JWT token issuer. | ✔ |
| Native OpenID Connect / OAuth 2.0 | ✔ | |
| **Integration** | | |
| Private integrations with Application Load Balancers | ✔ | |
| Private integrations with AWS Cloud Map | ✔ | |
| Mock | | ✔ |
| **API Management** | | |
| Usage plans (e.g. rate limiting) | | ✔ |
| API keys | | ✔ |
| TLS | ✔ (Does not support TLS 1.0) | ✔ |
| **Development** | | |
| API caching | | ✔ |
| Request body transformation | | ✔ |
| Request / response validation | | ✔ |
| Test invocation (e.g. test backend via AWS console) | | ✔ |
| Automatic deployments | ✔ | |
| Default stage | ✔ | |
| Default route | ✔ | |
| Custom Gateway Responses | | ✔ |
| Canary Deployments | | ✔ |
| **Security** | | |
| Certificates for backend authentication | | ✔ |
| AWS WAF | | ✔ |
| Resource Policies | | ✔ |
| **Deployment Options** | | |
| Regional | ✔ | ✔ |
| Edge-Optimized (Cloudfront) | | ✔ |
| Private | | ✔ |
| **Monitoring** | | |
| Access logs to Amazon Kinesis Data Firehose | | ✔ |
| Execution logs | | ✔ |
| AWS X-Ray | | ✔ |
## Decision
**DECIDED**: Use the REST API (formerly known as v1) because it checks all the boxes.
---
## Use Custom AWS Region Codes
## Problem
The AWS Public Cloud spans **25 geographic regions** around the world, with announced plans for 8 more AWS Regions in Australia, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates (UAE). Regions are a big factor in naming conventions for multi-region infrastructures, both for disambiguation and for resource residency. See [Decide on Regional Naming Scheme](/layers/project/design-decisions/decide-on-regional-naming-scheme) and [Decide on Namespace Abbreviation](/layers/project/design-decisions/decide-on-namespace-abbreviation) for context. Our [Terraform](/resources/legacy/fundamentals/terraform) is used to programmatically define consistent resource names with deterministic fields separated by a common delimiter (typically `-`), including a field for region (which we call `environment`). Since AWS region codes include a `-`, we do not want our region code to include one. Additionally, many AWS resource names are restricted to 32 or 64 characters, making it all the more important to conserve characters for disambiguation of resource names.
## Solution
Cloud Posse provides two naming conventions to address AWS regions: `fixed` and `short`. They are defined in the `terraform-aws-utils` module, which exposes mapping outputs to use when working in AWS. It provides compact alternative codes for Regions, Availability Zones, and Local Zones that are guaranteed to use only digits and lower case letters: no hyphens. Conversions to and from official codes and alternative codes are handled via lookup maps.
The `short` abbreviations are variable-length (generally 4-6 characters, though length limits are not guaranteed) and strictly algorithmically derived so that people can more easily interpret them.
The `fixed` abbreviations are always exactly 3 characters for regions and 4 characters for availability zones and local zones, but have some exceptional cases (China, Africa, Asia-Pacific South, US GovCloud) that have non-obvious abbreviations.
We currently support Local Zones but not Wavelength Zones. If we support Wavelength Zones in the future, it is likely that the fixed-length abbreviations for them will be non-intuitive.
The intention is that existing mapping will never change, and if new regions or zones are created that conflict with existing ones, they will be given non-standard mappings so as not to conflict.
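A hedged usage sketch of the lookup maps (output and key names are taken from the module's docs; verify them against the version you pin):
```hcl
module "utils" {
  source = "cloudposse/utils/aws"
  # version = "x.x.x"
}

locals {
  # "us-east-2" => "ue2" with the fixed scheme, "use2" with the short scheme
  environment_fixed = module.utils.region_az_alt_code_maps["to_fixed"]["us-east-2"]
  environment_short = module.utils.region_az_alt_code_maps["to_short"]["us-east-2"]
}
```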
## Region Codes
[https://github.com/cloudposse/terraform-aws-utils/blob/master/main.tf](https://github.com/cloudposse/terraform-aws-utils/blob/master/main.tf)
---
## Use Basic Provider Block for Root-level Components
**Date**: **19 Oct 2021**
:::info Needs Update!
The content in this ADR may be out-of-date and in need of an update. For questions, please reach out to Cloud Posse.
:::
## Status
**ACCEPTED**
## Context
We [Use Terraform Provider Block with compatibility for Role ARNs and Profiles](/resources/adrs/adopted/use-terraform-provider-block-with-compatibility-for-role-arns-an) in all components other than the root-level components. By _root-level_ we are referring to components that are provisioned in the top-level AWS account that we generally refer to as the `root` account.
The problem arises when working with the `root` account during a cold start, when there's no SSO, Federated IAM, or IAM roles provisioned, so using a `role_arn` or `profile` would not work. That's why we assume the administrator will use their current AWS session to provision these components, and why we do not define `role_arn` or `profile` in the `provider { ... }` block for components like [sso](/components/library/aws/identity-center/) or [account](/components/library/aws/account/).
## Decision
**DECIDED**: Use the following basic provider block in root components.
```hcl
provider "aws" {
  region = var.region
}
```
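For contrast, a hedged sketch of the kind of provider block used by non-root components (the variable name is illustrative; see the referenced ADR for the actual pattern):
```hcl
provider "aws" {
  region = var.region

  # Non-root components assume a role (or use a profile). Root-level
  # components omit this because no such roles exist during a cold start.
  assume_role {
    role_arn = var.terraform_role_arn # illustrative variable name
  }
}
```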
## Consequences
- Update any root-level components to use this block
## References
- [Use Terraform Provider Block with compatibility for Role ARNs and Profiles](/resources/adrs/adopted/use-terraform-provider-block-with-compatibility-for-role-arns-an)
---
## Use Environment Variables for Configuration (12 Factor)
**Date**: **14 Dec 2021**
:::info Needs Update!
The content in this ADR may be out-of-date and in need of an update. For questions, please reach out to Cloud Posse.
- This page is mostly correct but may not be entirely up-to-date.
:::
## Status
**ACCEPTED**
## Problem
We need a simple way to pass parameters to applications without hardcoding the settings. Multiple approaches exist with tradeoffs.
The 12 Factor pattern recommends using environment variables, but there are security implications everyone needs to be aware of.
## Context
## Considered Options
### **Option 1**: Use Environment Variables
#### Pros
- Portable configuration format supported by every mainstream language
- Easily understood (key/value) pairs
- Easily implemented
- Supported by Kubernetes, Docker, and ECS
- Compatible with SSM (via `chamber` and `external-secrets` operator), ASM (`external-secrets` operator) and HashiCorp Vault (via envconsul)
- The 12 Factor pattern recommends using environment variables [https://12factor.net/](https://12factor.net/)
#### Cons
- Environment variables are exposed via the `/proc` filesystem; any process on the system can read those settings
- Environment variables are harder to validate (e.g. typo an ENV, you won't get a warning in most applications, especially for optional settings)
- Environment variable sprawl: over time, you may end up with hundreds of ENVs, as some of our customers have. They have products that have been around for a decade or longer and gone through generations of engineers
- Environment variables are harder to update (e.g. what is responsible for updating them, such as CD?)
- If your app still consumes config files, but you are parameterizing it with ENVs, it's tedious to update both the ENVs and the config file templating every time you add one
- Environment variables are really only convenient for scalars. Serializing structures in YAML/JSON is ugly
- ECS task definitions are capped at 64K, meaning if you use a lot of ENVs (or long ENVs), you will hit this limit when you least expect it
- Kubernetes ConfigMaps are capped at 1MB, so if using ConfigMaps for ENVs, there’s still a practical limit.
- Legacy applications frequently do not support environment variables
### **Option 2**: Use Configuration Files
#### Pros
- Compatible with Kubernetes ConfigMaps, making them easy to mount into containers
[https://kubernetes.io/docs/concepts/storage/volumes/#configmap](https://kubernetes.io/docs/concepts/storage/volumes/#configmap)
- Compatible even with legacy applications that depend on configuration files
- Can use templates to generate configuration files from environment variables (e.g. not mutually exclusive)
- Easily deployed as part of CI/CD workflow
#### Cons
- There are a million configuration file formats, and no standardized way of defining them
- For ephemeral environments, configuration files need to be templatized, adding a layer of complexity
- Configuration files should be encrypted (e.g. see `sops-operator` for kubernetes)
## Decision
**DECIDED**: Use environment variables as standardized means of configuration
## Consequences
- Use `external-secrets` operator with Kubernetes to mount SSM/ASM parameters as environment variables
- Use ECS `envs` to pass environment variables to tasks in the service definition (a minimal sketch follows)
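A minimal sketch of the ECS consequence (image and values are placeholders):
```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "nginx:stable" # placeholder image
      essential = true

      # Environment variables passed to the task, per the decision above.
      environment = [
        { name = "ENV_TEST_1", value = "test1" }
      ]
    }
  ])
}
```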
## References
- See also [Use SSM over ASM for Infrastructure](/resources/adrs/adopted/use-ssm-over-asm-for-infrastructure)
---
## Use One OpsGenie Integration per Team
:::info Needs Update!
The content in this ADR may be out-of-date and in need of an update. For questions, please reach out to Cloud Posse.
:::
## Context
OpsGenie integrates with Datadog. On the OpsGenie platform, this creates an API key that, when used, sends events to that specific integration, e.g. a key that Datadog uses to send events to OpsGenie. Many integrations can be set up in OpsGenie, allowing Datadog to specify which integration to use. This is important because each OpsGenie integration can be configured to handle events differently: Datadog can send an event to `@OpsGenie-1` and have it routed differently in OpsGenie than if the message contained `@OpsGenie-2`. The catch with adding more integrations is that while they can be created in OpsGenie through Terraform, the generated API key must be added to Datadog manually.
### OpsGenie Integration Per Team (Decided)
Create an OpsGenie API integration of `type=datadog` per team, each mapping to a responding team, so that an `@opsgenie-<team>` handle can be inserted within a Datadog monitor message. This will allow us to tag resources with a team and then set the message to `@opsgenie-{{team.name}}`; depending on each team's incident rules, an incident will or will not be declared.
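A minimal sketch with the OpsGenie Terraform provider; the team and integration names are hypothetical, and the exact `type` string should be confirmed against the provider docs.
```
resource "opsgenie_team" "sre" {
  name        = "sre"
  description = "Site Reliability Engineering"
}

# One Datadog-type API integration per team. The generated API key
# still has to be added to Datadog manually (see the con below).
resource "opsgenie_api_integration" "datadog_sre" {
  name = "datadog-sre"
  type = "Datadog"

  responders {
    type = "team"
    id   = opsgenie_team.sre.id
  }
}
```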
#### Pros
- Simple logic
- **This follows Datadog’s OpsGenie guide** [https://docs.datadoghq.com/integrations/opsgenie/#create-acknowledge-and-close-opsgenie-alerts-from-datadog](https://docs.datadoghq.com/integrations/opsgenie/#create-acknowledge-and-close-opsgenie-alerts-from-datadog)
- We would have either no global alert policy or a generic one
#### Cons
- For each team, we’d have to ClickOps an OpsGenie integration in Datadog, because the Datadog API doesn’t support OpsGenie integrations
(however, this is only really a cold-start problem)
### One OpsGenie Integration with Global Routing Rules
We create a single OpsGenie API integration of `type=datadog` with a global alert policy per team, because an alert policy cannot templatize the responder based on an incoming key/value tag.
#### Pros
- A single generic API integration; it has to be ClickOps’ed in Datadog anyway, since there is no Datadog API for the OpsGenie integration
- Terraform support for global alert policy per team
#### Cons
- Many global alert policies, specifically 1 per team minimum
## Decision
We have decided to go with an integration per team. It follows Datadog’s recommended OpsGenie integration method, it provides a clean approach to team routing, and we are hopeful that the ClickOps portion can eventually be terraformed once the API is exposed.
---
## Use OpsGenie for Incident Management
Monitoring platforms like CloudWatch and Datadog historically provide very poor support for Incident Management. Incident Management is the art of ingesting, classifying, and escalating alerts to stakeholders based on rotations, teams, services, etc.
:::tip Latest!
The content in this ADR is up-to-date! For questions, please reach out to Cloud Posse
:::
## Context
There are quite a few incident management platforms available, with PagerDuty being the OG. Customers often ask why we selected OpsGenie over PagerDuty; this is our current rationale.
**TL;DR:** We support OpsGenie today and have a considerable investment in supporting it, but are open to implementing PagerDuty.
### OpsGenie (Decided)
[https://github.com/cloudposse/terraform-opsgenie-incident-management](https://github.com/cloudposse/terraform-opsgenie-incident-management)
#### Pros
- Most customers use Atlassian products, including Jira, Service Desk, and Confluence which are all tightly integrated with OpsGenie
- High feature parity with PagerDuty
- OpsGenie is by Atlassian and tightly integrated
- OpsGenie is less expensive than PagerDuty
- OpsGenie is tightly integrated with StatusPage
- Cloud Posse only has prior art for OpsGenie :smiley: (e.g. 20+ sprints executed on opsgenie, but none on pagerduty)
#### Cons
- Lacks some of the AI features now present in more modern Incident Management Platforms
### PagerDuty
Customers frequently ask if we have PagerDuty support. The short answer is not yet. The longer answer is, we’re open to supporting it, if someone sponsors the development. We support OpsGenie due to customer demand.
#### Pros
- Arguably the dominant platform for Incident Management
- Supports [Artificial Intelligence for IT operations (AIOps)](https://www.pagerduty.com/reference/learn/what-is-aiops/)
#### Cons
- More expensive than OpsGenie
### Datadog Incident Management
Datadog released its own [incident management platform at the tail end of 2020](https://www.datadoghq.com/blog/incident-response-with-datadog/). We’ve not had a chance to evaluate the platform, mostly because as of this writing, [terraform support is non-existent](https://registry.terraform.io/providers/DataDog/datadog/latest/docs). For this reason, we ruled it out.
### Alert Panda
Not Considered
### VictorOps
Not Considered
## Decision
- Use OpsGenie
## Consequences
- Customers who want to implement OpsGenie for Incident Management should [subscribe to the Standard or Enterprise plans.](https://www.atlassian.com/software/opsgenie/pricing)
---
## Use Spacelift for GitOps with Terraform
:::info
A public page is available at [https://cloudposse.com/faqs/why-do-you-recommend-spacelift/](https://cloudposse.com/faqs/why-do-you-recommend-spacelift/) which shares a lot of these points.
:::
Spacelift checks off all the boxes for managing extremely large environments with a lot of state management. Since Cloud Posse's focus is on deploying large-scale loosely coupled infrastructure components with Terraform, it's common to have several hundred terraform states under management.
Every successful business in existence uses accounting software to manage its finances and understand the health of its business. The sheer number of transactions makes it infeasible to reconcile the books by hand. The same is true of modern infrastructure. With hundreds of states managed programmatically with terraform, and modified constantly by different teams or individuals, the same kind of state reconciliation is required to know the health of its infrastructure. This need goes far beyond continuous delivery and few companies have solved it.
## **Major Benefits**
- **Reconciliation** helps you know what's deployed, what's failing, and what's queued.
- **Plan Approvals** ensure changes are released when you expect them
- **Policy-Driven Framework** based on OPA (open-source standard) is used to trigger runs and enforce permissions
- **Drift Detection** runs on a customizable schedule and surfaces inconsistencies between what’s deployed and what’s in Git on previously successful stacks
- **Terraform Graph Visualization** makes it easier to visualize the entire state across components
- **Audit Logs** of every change traced back to the commit and filterable by time
- **Affordable alternative** to other commercial offerings
- **Works with more than Terraform** (e.g. Pulumi)
- **Pull Request Previews** show what the proposed changes are before committing them
- **Decoupling of Deploy from Release** ensures we can merge to trunk and still control when those changes are propagated to environments
- **Ephemeral Environments** (Auto Deployment, Auto Destruction) enables us to bring up infrastructure with terraform and destroy it when it's no longer needed
- **Self-hosted Runners** ensure we're in full control over what is executed in our own VPC, with no public endpoints
## Concerns
### **What level of access do the Spacelift worker pools have?**
Spacelift Workers are deployed in your environment with the level of permission that we grant them via IAM instance profiles. When provisioning any infrastructure that requires modifying IAM, the minimum permission is administrative. Thus, workers are provisioned with administrative permissions in all accounts that we grant access to since the terraform we provision requires creating IAM roles and policies. Note, this is not a constraint of Spacelift; this is required regardless of the platform that performs the automation.
### **What happens if Spacelift as a product goes away?**
First off, while Spacelift might be a newer brand in the infrastructure space, it’s used by publicly traded companies, healthcare companies, banks, institutions, Fortune 500 companies, etc. So, Spacelift is not going away.
But just to entertain the hypothetical, let's consider what would happen. Since we manage all terraform states in S3, we have the “break glass” capability to leave the platform at any time and can always run terraform manually. Of course, we would lose all the benefits.
### **How tough would it be to move everything to a different platform?**
Fortunately, with Spacelift, we can still use S3 as our standard state backend. So if at any time we need to move off of the platform, it's easy. Of course, we'd give up all the benefits but the key here is we're not locked into it.
### **Why not just use Atlantis?**
We used to predominantly recommend Atlantis but stopped doing so a number of years ago. The project was more or less dormant for 2-3 years and only recently started accepting Pull Requests again. Atlantis was the first project to define a GitOps workflow for Terraform, but it’s been left in the dust by newer alternatives.
- With Atlantis, there is no regular reconciliation of which terraform states have or have not been applied, so we really have no idea of the _actual_ state of anything. With a recent customer, we helped migrate from Atlantis to Spacelift, and it took 2 months to reconcile all the infrastructure that had drifted.
- With Atlantis, there’s no drift detection, but with Spacelift, we detect it nightly (or as frequently as we want)
- With Atlantis, there's no way to manage dependencies of components, so that when one component changes, any other components that depend on it should be updated.
- With Atlantis, there’s no way to set up OPA policies to trigger runs. The OPA support in Atlantis is very basic.
- With Atlantis, [anyone who can run a plan can exfiltrate your root credentials](https://www.youtube.com/watch?v=H9KvPe09f5A). This has been [talked about by others](https://alex.kaskaso.li/post/terraform-plan-rce) and was recently [highlighted at the Defcon 2021 conference](https://www.youtube.com/watch?v=3ODhxYY9-9U).
- With Atlantis, there's no way to limit who can run terraform plan or apply. If you have access to the repo, you can run a terraform plan. If your plan is approved, you can run terraform apply. [Cloud Posse even tried to fix it](https://github.com/runatlantis/atlantis/issues/308) (and maintained our own fork for some time), but the discussion went nowhere and we moved on.
- With Atlantis, there's no way to restrict who has access to unlock workspaces via the web GUI. The only way is to install your own authentication proxy in front of it or restrict it in your load balancer.
- With Atlantis, you have to expose the webhook endpoint publicly to GitHub.
### **Why not use Terraform Cloud?**
[Terraform Cloud](https://www.terraform.io/cloud) is prohibitively expensive for most non-enterprise customers we work with, and possibly 10x the cost of Spacelift. Terraform Cloud for Teams doesn’t permit self-hosted runners and requires hardcoded IAM credentials in each workspace. That’s insane and we cannot recommend it. Terraform Cloud for Business (and higher) supports self-hosted runners that can leverage AWS IAM instance profiles, but the number of runners is a significant factor in the cost. When leveraging several hundred loosely-coupled terraform workspaces, there is a significant need for a lot of workers for short periods of time. Unfortunately, even if those workers are only online for a short period of time, you need to commit to paying for them for the full month on an annualized basis. Terraform Cloud also requires that you use their state backend, which means there’s no way to “break glass” and run terraform if they are down. If you want to migrate off of Terraform Cloud, you need to migrate the state of hundreds of workspaces out of the platform and into another state backend.
## References
- https://www.spacelift.io/case-studies/cloud-posse
- https://spacelift.io/case-studies
---
## Use SSM over ASM for Infrastructure
:::tip Latest!
The content in this ADR is up-to-date! For questions, please reach out to Cloud Posse
:::
We primarily provision static credentials randomly generated by Terraform using the database provider and then write them to SSM, encrypted with KMS.
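For illustration, a minimal sketch (hypothetical names and paths) of generating a credential and writing it to SSM as a KMS-encrypted `SecureString`:
```
resource "aws_kms_key" "ssm" {
  description = "CMK for SSM SecureString parameters"
}

resource "random_password" "db" {
  length  = 32
  special = false
}

resource "aws_ssm_parameter" "db_password" {
  name   = "/rds/acme/admin/db_password" # hypothetical path
  type   = "SecureString"
  value  = random_password.db.result
  key_id = aws_kms_key.ssm.key_id # encrypt with our CMK instead of the account default key
}
```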
Amazon Secrets Manager (**ASM**) was launched well after Amazon Systems Manager ([formerly AWS Simple Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html#service-naming-history), hence **SSM**) Parameter Store. Both fulfill similar use-cases, except that Parameter Store was created more generally as a key/value store for parameters that can also be encrypted with KMS/CMK keys. ASM, on the other hand, was purpose-built to manage secrets and has the concept of automatic rotation using Lambdas.
Security best practices call for secrets to be rotated (changed) on a regular basis so that if a secret is improperly obtained by an adversary, the length of time the adversary can use that secret is limited. However, if everyone sharing the secret has to agree on what the secret is all the time, changing the secret presents a synchronization problem: all parties need to be updated with the new secret at exactly the same time. Parameter store only solves part of this problem: by having a single source of truth accessed by all parties, changing the secret once in parameter store makes the new secret available at the same time to all, but there is no mechanism to inform the parties of the change or to synchronize their adoption of the new secret.
ASM was built to solve this synchronization problem. It allows you to store and retrieve several versions of a secret, with one being designated the “current” one and one being designated the “previous” one. To take advantage of this, servers (whatever is requiring the secret to be presented for authentication) must allow clients (whatever is presenting secrets as authentication) to present either the current or previous secret. When this is done, the synchronization problem is solved by executing the following steps in order:
1. Confirm all clients and servers are using the “current” secret
2. “Rotate” the secret by changing the label of the current secret from “current” to “previous” and creating a new “current” secret. AWS provides a Lambda function that implements this.
3. Update all servers to accept either secret. This allows old clients who only know about the now “previous” secret to continue to access the servers, while allowing new clients to pick up and use the new “current” secret. Servers should be designed to be able to be updated in this way while running, without causing a service interruption.
4. Have all clients pick up and start using the new “current” secret when it is convenient for them.
By maintaining 2 active secrets, clients can be designed to refresh their secrets when convenient, without significant time pressure. There is no need for special notification or synchronization features to be built into the clients. However, server-side support is critical: **If the servers do not support simultaneous use of 2 different secrets for the same purpose, there is no practical benefit to making the “previous” secret available.** One key reason to use ASM is that some Amazon services, such as RDS, have built-in integration with ASM to provide this simultaneous support.
After ASM was built, SSM was enhanced to provide similar capabilities, although without as streamlined an API or a Lambda to do secrets rotation. SSM can store up to 100 versions of a parameter, and the versions can be given symbolic tags, such as “current” and “previous”. So **if you are building your own servers, you can implement the same kind of secrets rotation strategy with SSM as with ASM.**
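For illustration, a minimal sketch of consuming labeled versions from Terraform, assuming `current` and `previous` labels have already been applied with the `LabelParameterVersion` API (labels are not managed by the `aws_ssm_parameter` resource, and this assumes the provider passes the `name:label` selector through to `GetParameter`):
```
# Read the version currently labeled "current" (hypothetical path)
data "aws_ssm_parameter" "db_password_current" {
  name = "/rds/acme/admin/db_password:current"
}

# Read the "previous" version to support overlapping validity during rotation
data "aws_ssm_parameter" "db_password_previous" {
  name = "/rds/acme/admin/db_password:previous"
}
```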
## ASM
### Pros
- Purpose-built to manage secrets
- Supports automatic rotation of secrets using Lambda functions
- AWS SDK support enables applications written to automatically pick up the new credentials
- Multiple Kubernetes operators exist to synchronize ASM secrets to Kubernetes `Secrets`
- Now supports 500K secrets per account [https://aws.amazon.com/about-aws/whats-new/2021/11/aws-secrets-manager-increases-secrets-limit-per-account/](https://aws.amazon.com/about-aws/whats-new/2021/11/aws-secrets-manager-increases-secrets-limit-per-account/)
- Supports cross-region replication of a secret
### Cons
- Only consumer applications that support dynamic credentials can take advantage of this functionality
- Most systems do not support the complex pattern of key rotation; namely, successive keys need some period of overlapping validity. Therefore, in practice it’s used mostly just for RDBMS systems.
- Still need to deploy the Lambda functions, which means all the CI/CD machinery to support lambdas (workflows, builds, integration tests, artifacts, deployments, etc)
- No way to aggregate all secrets with a prefix
- No built-in audit trail metadata (but it writes to CloudTrail like any other AWS API).
### Other
- 5,000 get requests per second limit: [https://aws.amazon.com/about-aws/whats-new/2020/11/aws-secrets-manager-supports-5000-requests-per-second-for-getsecretvalue-api-operation/](https://aws.amazon.com/about-aws/whats-new/2020/11/aws-secrets-manager-supports-5000-requests-per-second-for-getsecretvalue-api-operation/)
## SSM
### Pros
- Very simple to operate (true to its original name)
- Can easily aggregate all key/values with a given prefix (see the sketch after this list)
- Encrypted with KMS
- Multiple Kubernetes Operators exist to synchronize SSM Parameters to Kubernetes `Secrets`
- Built-in audit trail for every parameter in addition to CloudTrail
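For illustration, a minimal sketch (hypothetical prefix) of reading every parameter under a common path in one data source; the `recursive` argument assumes a recent AWS provider version:
```
data "aws_ssm_parameters_by_path" "app" {
  path            = "/platform/app/" # hypothetical prefix
  with_decryption = true
  recursive       = true
}

output "app_parameter_names" {
  value = data.aws_ssm_parameters_by_path.app.names
}
```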
### Cons
- No built-in concept of key rotation like ASM; however, see the “Cons” of ASM
- Rate limits are lower than for ASM, but the access pattern of SSM is different so it’s not a 1:1 comparison. See `GetParametersByPath` and `GetParameters`
[https://docs.aws.amazon.com/general/latest/gr/ssm.html](https://docs.aws.amazon.com/general/latest/gr/ssm.html)
[https://aws.amazon.com/about-aws/whats-new/2019/04/aws_systems_manager_now_supports_use_of_parameter_store_at_higher_api_throughput/](https://aws.amazon.com/about-aws/whats-new/2019/04/aws_systems_manager_now_supports_use_of_parameter_store_at_higher_api_throughput/)
### Other
- Historically, rate limits for SSM were very low; now they can be raised to 3,000 requests per second.
## Related
- [Use Environment Variables for Configuration (12 Factor)](/resources/adrs/adopted/use-environment-variables-for-configuration-12-factor)
- [REFARCH-210 - Decide Whether to Use RDS IAM Authentication](/layers/data/design-decisions/decide-whether-to-use-rds-iam-authentication/)
---
## Use Terraform Provider Block with compatibility for Role ARNs and Profiles
**Date**: **19 Oct 2021**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- Please read through the detailed explanation in [Access Control Evolution](/layers/identity/docs/aws-access-control-evolution).
:::
## Status
**ACCEPTED**
## Context
Cloud Posse has used 2 conventions for assuming roles within the terraform `provider { ... }` block.
### The `role_arn` Method
This was our original method of assuming roles within `terraform`.
```
provider "aws" {
# The AWS provider to use to make changes in the DNS primary account
alias = "primary"
region = var.region
assume_role {
role_arn = coalesce(var.import_role_arn, module.iam_roles.dns_terraform_role_arn)
}
}
```
We used this for years together with AWS Federated IAM with SAML and had no issues. Then we started supporting AWS SSO and ran into some issues because with AWS SSO the role names are non-deterministic. As a result, we switched to the `profile` method below.
### The `profile` Method
With the `profile` method, we offload the burden of determining the `role_arn` to an external script that generates `~/.aws/config` with profiles and role mappings. This allowed us to simultaneously support AWS Federated IAM alongside the AWS SSO method of authentication. The downside was that we had to use the _generator_ pattern to create `~/.aws/config`, which we generally like to avoid. We painfully upgraded all of our components to use this method since we didn’t see a path forward with the `role_arn` method at the time.
```
provider "aws" {
region = var.region
# `terraform import` will not use data from a data source, so on import we have to explicitly specify the profile
profile = coalesce(var.import_profile_name, module.iam_roles.terraform_profile_name)
}
```
### The Hybrid Method
We now support a hybrid method, having come full circle: we once again want to use `role_arn` everywhere so we do not need to generate the AWS config, but we also need to support customers that use the `profile` method. Fortunately, @Jeremy Grodberg found a convenient way to support both.
```
provider "aws" {
  region = var.region

  # When profiles are enabled, authenticate with a named profile from ~/.aws/config
  profile = module.iam_roles.profiles_enabled ? coalesce(var.import_profile_name, module.iam_roles.terraform_profile_name) : null

  # Otherwise, fall back to assuming the Terraform role by ARN
  dynamic "assume_role" {
    for_each = module.iam_roles.profiles_enabled ? [] : ["role"]
    content {
      role_arn = coalesce(var.import_role_arn, module.iam_roles.terraform_role_arn)
    }
  }
}
```
## Decision
**DECIDED**: Use the Hybrid Method to support both `profile` or `role_arn` for backward compatibility
Note: Until [Proposed: Use AWS Federated IAM over AWS SSO](/resources/adrs/proposed/proposed-use-aws-federated-iam-over-aws-sso) is decided otherwise, our recommendation for new projects is to use `role_arn`, but we continue to use the hybrid provider in public components, and therefore in client components.
## Consequences
- Update all components to use the Hybrid Method.
## References
- [Proposed: Use AWS Federated IAM over AWS SSO](/resources/adrs/proposed/proposed-use-aws-federated-iam-over-aws-sso)
---
## Use Terraform to Manage Helm Releases
**Date**: **14 Dec 2021**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- Cloud Posse recommends using the `helm-release` module to manage operators and other infrastructure setup requirements but does not recommend using Terraform for application deployment.
:::
## Status
**DRAFT**
## Problem
Provisioning helm releases with `helmfile` has worked for a long time, but has a number of shortcomings:
- No integration with Spacelift
- No turnkey workflow for github actions (continuous delivery)
- No way to provision IAM roles needed by services
To solve these problems, we switched to using the Terraform `helm` provider, but this introduced a new set of problems:
- Manual changes made to cluster resources were not detected as drift
- Changes made to helm template files also were not detected as drift
We want to be able to provision helm releases and all dependencies with terraform and GitOps while not compromising on drift detection.
### Related Issues
- [https://github.com/databus23/helm-diff/issues/176](https://github.com/databus23/helm-diff/issues/176)
- [https://github.com/hashicorp/terraform-provider-helm/pull/702](https://github.com/hashicorp/terraform-provider-helm/pull/702)
- [https://github.com/databus23/helm-diff/issues/176#issuecomment-572952711](https://github.com/databus23/helm-diff/issues/176#issuecomment-572952711)
- [https://github.com/databus23/helm-diff/pull/304](https://github.com/databus23/helm-diff/pull/304)
(this might fix the problem in helm-diff but it was not accepted)
## Context
| **Architecture** | **Detect Changes to Non-chart YAML Values?** | **Detect Changes to Local Chart Files?** | **Detect Manual Changes to Deployed Resources?** (e.g. `kubectl edit`) |
| -------------------------------------- | -------------------------------------------- | ---------------------------------------- | ------------------------------------------------------------------------- |
| `helm_release` with `manifest=true` | Yes | Yes | No |
| `helm_release` without `manifest=true` | Yes | No | No |
| `kubernetes_manifest` | Yes | No | No |
| `helmfile_release` | Yes | No | No |
### Testing Methodology
:::caution
Note, changing the port in a running service is not a good test as it fails even with `kubectl apply`
:::
#### Part 1: `echo-server`
- Modify any value from any of the local template files (YAML files within `echo-server/charts/echo-server/`). Then, check that that change is detected by terraform.
- Modify any value in `default.auto.tfvars`. Then, check that that change is detected by terraform.
- Modify any deployed resource via `kubectl edit` and observe that that change is not detected by Terraform.
#### Part 2: `cert-manager` (using the `cert-manager` component from the `>v0.185.1` releases of `cloudposse/terraform-aws-components`)
- With `letsencrypt_enabled: true` and `cert_manager_issuer_selfsigned_enabled: false`, modify any value in `cert-manager-issuer/templates/letsencrypt-issuer.yaml`. Then, check that that change is detected by terraform.
- With `letsencrypt_enabled: true` and `cert_manager_issuer_selfsigned_enabled: false`, modify any value in either `cert-manager-issuer/templates/selfsigning-certificate.yaml` or `cert-manager-issuer/templates/selfsigning-issuer.yaml`. Then, check that the change is not detected by Terraform, because these files will not be rendered in the deployed helm component (due to the if statements at the top of them).
- Modify any value in `default.auto.tfvars`. Then, check that that change is detected by terraform.
- Modify any deployed resource via `kubectl edit` and observe that that change is not detected by Terraform.
## Considered Options
### Option 1: Helm Provider
`manifest=true`
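For reference, `manifest` is enabled as an experiment on the `helm` provider itself; it stores the rendered manifest in state so that plans can diff changes to chart files and values. A minimal sketch, assuming kubeconfig-based cluster auth:
```
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumption: kubeconfig-based auth
  }

  # Store the rendered chart manifest in state so `terraform plan`
  # can show a full diff of template and value changes
  experiments {
    manifest = true
  }
}
```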
### Option 2: Helm Provider with Kubernetes Provider
### Option 3: Helmfile Provider
### Option 4: ArgoCD Provider (experimental)
### Option 5: Helm Provider with Git Provider and ArgoCD (experimental)
### Option 6: Stop using Helm? 🙂
## Decision
**DECIDED**: Use the Terraform `helm` provider with the `manifest=true` flag.
## Consequences
- We have now solved all the problems that motivated this design choice, and we have not sacrificed any drift detection relative to using helmfiles for deployment.
## References
- [https://github.com/cloudposse/terraform-aws-components/pull/381](https://github.com/cloudposse/terraform-aws-components/pull/381)
---
## Use Vendoring in Atmos
**Date**: **21 Mar 2022**
:::tip Latest!
The content in this ADR is up-to-date! For questions, please reach out to Cloud Posse
:::
## Status
**DECIDED**
## Problem
We need a way to centralize cloudposse components for reuse across all customers. We have `cloudposse/terraform-aws-components`, but we do not use it as a source of truth. As a result, maintaining our vast library of components is challenging.
We need some way to discover components to avoid duplication of effort. Additionally, we need some way to easily create new components (e.g. from a template).
Also related to [Proposed: Use Mixins to DRY-up Components](/resources/adrs/proposed/proposed-use-mixins-to-dry-up-components) and [Proposed: Use Atmos Registry](/resources/adrs/proposed/proposed-use-atmos-registry)
## Context
## Considered Options
### Option 1: New configuration spec
In the component directory, place a file like this to specify attribution.
#### Components
```
# component.yaml — proposed name, up for debate
# This configuration is deep-merged with the upstream component's configuration (component.yaml).
source: # this stanza is omitted in the component.yaml of the upstream component
type: git
uri: github.com/cloudposse/terraform-aws-components.git//modules/argocd
version: 1.2.3
```
Similarly, in the component’s upstream repository (e.g. `cloudposse/terraform-aws-components`), we will distribute a file like this in each component.
```
# source:
# type: git
# uri: github.com/cloudposse/terraform-aws-components.git//modules/argocd
# version: 1.2.3
```
#### Mixins
The file can also define any mixins that are to be downloaded and generated as part of this component. See also [Proposed: Use Mixins to DRY-up Components](/resources/adrs/proposed/proposed-use-mixins-to-dry-up-components) for a use-case.
```
# component.yaml — proposed name, up for debate
mixins:
context.tf:
source: github.com/cloudposse/null-label.git//exports/context.tf # also supports local paths
version: 0.25.0
filename: context.tf
```
Multiple mixins can be defined and parameterized. The parameterization will be based on Go templates.
```
mixins:
infra-state:
source: github.com/cloudposse/terraform-aws-components.git//mixins/infra-state.mixin.tf
version: 1.2.3
filename: mixin.infra-state.tf # we should probably move to the prefix convention as a default
parameters: # anything that needs to be interpolated when the file is created
monorepo_uri: git::ssh://git@github.com/ACME/infrastructure.git?ref=0.1.0
sops:
source: github.com/cloudposse/terraform-aws-components.git//mixins/sops.mixin.tf
version: 1.2.3
filename: mixin.sops.tf
```
These could also be defined in the component’s upstream repository (e.g. `cloudposse/terraform-aws-components`)
```
# source:
# type: git
# uri: github.com/cloudposse/terraform-aws-components.git//modules/argocd
# version: 1.2.3
mixins:
context.tf:
source: github.com/cloudposse/null-label.git//exports/context.tf # also supports local paths
version: 0.25.0
filename: context.tf
```
A state file/manifest (not to be confused with a Terraform state file) needs to be created whenever `atmos` pulls down the mixins and components. This manifest keeps track of which files were vendored and their checksums, so that we can distinguish vendored files from files that already existed locally, detect when vendored files have been modified, and know when to update the mixins.
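To make this concrete, a hypothetical sketch of such a manifest; the file name, schema, and values are illustrative only, not a final spec:
```
# vendor.state.yaml (hypothetical name and schema)
component: argocd
source:
  uri: github.com/cloudposse/terraform-aws-components.git//modules/argocd
  version: 1.2.3
files:
  - path: main.tf
    checksum: sha256:<checksum of vendored file>
  - path: context.tf
    checksum: sha256:<checksum of generated mixin>
```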
### Option 2: Extend the Stack Configuration
This was the direction we thought we would take, but there’s a fundamental flaw with this approach that makes it complicated to implement: inheritance. In configuration, if multiple stacks were to define different sources of the component, which component is used? We would need to download all versions of them and couldn’t have the neat & clean layout in `components/terraform`. Thus we prefer Option 1.
The inheritance issue can be mitigated by using the `metadata` section of the component configuration which is excluded from the inheritance chain.
### Option 3: Git Submodules
- Complicated syntax
- Doesn’t support adding files to the downstream tree (e.g. `backend.tf.json`)
### Option 4: Git Subtrees
- Complicated syntax
- Would be compatible with additional files added to the tree and local modifications
### Option 5: Vendir
[https://github.com/vmware-tanzu/carvel-vendir](https://github.com/vmware-tanzu/carvel-vendir)
Vendir was our original inclination. It’s a powerful tool, but supports too many features we don’t need and it would be hard to add what we want, since we’re not in control of the project.
[https://github.com/vmware-tanzu/carvel-vendir/issues/29](https://github.com/vmware-tanzu/carvel-vendir/issues/29)
[https://github.com/vmware-tanzu/carvel-vendir/pull/64](https://github.com/vmware-tanzu/carvel-vendir/pull/64)
## Decision
**DECIDED**: Use Option 1 - a new configuration specification
## Consequences
- Update atmos to add support for Option 1.
## References
-
---
## Architectural Design Records (ADRs)
import DocCardList from '@theme/DocCardList'
These are the records of why Cloud Posse made various decisions in the design of the Reference Architecture. These decisions may differ from what your organization has decided due to going through the "Design Decisions" process.
Send requests for additional documentation in [GitHub Discussions](https://github.com/orgs/cloudposse/discussions).
## Records
---
## Deprecated Architecture Decision Records
import Intro from '@site/src/components/Intro';
import DocCardList from '@theme/DocCardList';
These are a collection of architectural design records that we no longer subscribe to or have abandoned for various reasons.
---
## Use Confluence for Customer Documentation
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- We are no longer using Confluence. Now all documentation is included with [docs.cloudposse.com](https://docs.cloudposse.com/) and formatted with Docusaurus.
:::
## Context
Cloud Posse maintains a significant amount of documentation sprawled across our public open source repos, customer repos, and platforms (e.g. LucidChart, Google Docs, GitHub, YouTube). We struggle to write and keep the documentation up to date, communicate changes to it, and disseminate it. More than any other company, we integrate dozens and dozens of tools and technologies. The landscape is also continuously changing beneath our feet, forcing us to make changes and adapt.
## Options Considered
### Confluence (Decided)
We debated for a long time whether or not to use Confluence, and for a long time were very much against the product because it felt clunky/slow and lacked the features of more modern documentation systems. Over the past couple of years, after significant investment by Atlassian, the product has become much faster and less clunky, and has added essential features such as commenting on phrases and live editing.
#### Pros
- Extensions for most things we want (PlantUML, embeds, iframes, LucidCharts, etc)
- Tightly integrates into Jira, our Workflow Management platform and where we keep our reference architecture
- Updates to documentation can be worked on in a draft mode before publishing
- Accessible to all customers immediately (more scalable)
- Most of our customers use Atlassian products (Jira, Confluence, Service Desk)
- APIs exist for updating documentation and pulling it out
- We can archive pages without deleting them
- Zapier automation enables us to take action when things change
- Native [workflow automation coming soon](https://www.youtube.com/watch?v=N8WCeJldvFs)
- We’re already paying for the additional seats
- We can embed markdown content from anywhere, including our other repos using [Atlassian Marketplace Markdown Macro for Confluence](https://marketplace.atlassian.com/apps/1211438/markdown-macro-for-confluence?hosting=cloud&tab=overview)
([see example](/components/library/aws/account-settings/))
- Fully managed solution we do not need to host
#### Cons
- Cost per seat is not insignificant
- Does not natively export content to Markdown or GitHub (however, tools exist to facilitate this and we may write one eventually specifically for our reference architecture)
- Versions of documentation do not necessarily relate to a customer’s version of the software. For customers who are sensitive to this, we encourage vendoring relevant portions of documentation.
- Confluence does not have a native desktop client for Mac or PC
- No support for redlining or “suggest” like functionality that the author can simply accept; the workaround is to leave a comment on the section with the suggested changes
### Open Source All Documentation
We strongly considered open-sourcing all of our documentation, which would provide anyone the ability to implement and use our reference architecture. Our plan is instead to eventually open-source portions of our documentation, but not to let that hold us up from aggregating it in one place. We decided against this option for the reasons below.
#### Pros
- One place to update the documentation for all of our customers and community
- Radically increase the adoption of our process and methodology
- Receive contributions via GitHub Pull Requests
- Use our existing documentation infrastructure (hugo)
#### Cons
- No way to easily wall off portions of our documentation based on our current documentation infrastructure with Hugo and S3.
- We already receive an overwhelming amount of contributions and requests from our community. If our documentation were all public, it would increase our support burden while decreasing the time we focus on customers
- Difficult to share more sensitive or private information with our customers (contact information, architectural diagrams). We want to be as forthcoming as possible and make it as easy as possible to prioritize the support our customers need, over worrying about what we can publish publicly.
### Use GitHub with Markdown
We started delivering documentation to all customers via Markdown in GitHub. The problem is that the way the documentation ends up organized leaves a lot to be desired.
#### Pros
- Easily vendored into customer repositories
- Use GitHub Pull Request / Code Review process
- Technology neutral solution
- Distribute documentation easily and securely with Private GitHub Pages
#### Cons
- Private GitHub Pages are only supported by GitHub Enterprise, which most of our customers do not use
- Hosting a private documentation portal (e.g. Hugo) is even more opinionated since most customers already host documentation in some system. Plus it would require some form of authentication.
- Using markdown by itself is very limiting, and incorporating screenshots, images, and diagrams is very tedious, since exported images quickly go out of date.
- Opening Pull Requests is arguably much slower and a larger barrier to contribution
### Vendor All Documentation
We have always wanted to help customers by providing documentation in their systems, but trying to serve our customers this way has not scaled with our rate of growth and the number of integrations we support. This is similar to [Use GitHub with Markdown](#)
#### Pros
- Customers have a version of the documentation that closely matches exactly what is deployed
- Infrastructure code and documentation are alongside each other
- Customers control changes to documentation
#### Cons
- No practical way to syndicate documentation and changes across customers
- Customers miss out on corrections, updates, and improvements
- Cost to customers is significantly greater
### Notion
Notion is a very polarizing system. Many Notion users came from some other system like Confluence, Evernote, or Quip. We felt massive FOMO not jumping ship to Notion, but decided against it for the reasons below.
#### Pros
- It supports some very nice ways to provide documentation; it’s somewhere in between Evernote and Airtable
- Integrates with systems like Jira
- Nice cross-platform Desktop application
#### Cons
- It’s another vendor and would require additional cost-per-seat to share
- Until recently, it provided no API whatsoever.
- Very few of our customers use Notion compared to the alternatives
- @Erik Osterman cannot stand all the cheesy UTF-8 emojis sprinkled everywhere on every page. 🚀 😵 💩
## Decision
- Use our `REFARCH` space in Confluence to aggregate, share, and disseminate documentation
## Consequences
- Share `REFARCH` space with all customers
---
## Use Folder Structure for Compliance Components
**Date**: **21 Mar 2022**
:::warning Rejected!
The proposal in this ADR was rejected! For questions, please reach out to Cloud Posse.
- We have since refactored our Compliance components. For more see, [Foundational Benchmark Compliance](/layers/security-and-compliance/).
:::
## Status
**DECIDED**
## Problem
- Too many files clutter the stack config folders
- Files are very terse and seldom edited
- So many files that code generation of YAML is the only practical way of managing them
## Context
## Considered Options
### Option 1: Use Virtual Components
Define one file for each default region (17). Having multiple files is convenient for GitOps and detecting what files changed in order to trigger CI.
```
# stacks/catalog/compliance/ue1.yaml
components:
terraform:
compliance-ue1:
component: compliance
vars:
region: us-east-1
environment: ue1
```
The root component should follow the same pattern, with one file for each default region.
```
# stacks/catalog/compliance/root/ue1.yaml
components:
terraform:
compliance-root-ue1:
component: compliance-root
vars:
region: us-east-1
environment: ue1
```
:::caution
There’s one downside with this naming convention: the final stack names (e.g. in Spacelift) will look like `acme-ue1-root-compliance-ue1`, with the `ue1` repeated in the `name` because it is pulled from the `component`. This will be fixed in a future release.
:::
Define a baseline that imports all 17 default regions
```
# stacks/catalog/compliance/baseline.yaml
imports:
- catalog/compliance/ue1
- catalog/compliance/ue2
...
```
Repeat for the “root” baseline, which should import all 17 default regions
```
# stacks/catalog/compliance/root/baseline.yaml
imports:
- catalog/compliance/root/ue1
- catalog/compliance/root/ue2
...
```
Then in each account-level stack configuration, import the compliance baseline.
Here are some examples:
```
# stacks/globals.yaml
imports:
- catalog/compliance/baseline
```
```
# stacks/plat/prod.yaml
imports:
- globals
```
```
# stacks/plat/staging.yaml
imports:
- globals
```
```
# stacks/core/security.yaml
imports:
- globals
```
```
# stacks/core/dns.yaml
imports:
- globals
```
The root account is the only exception, which would look like this:
```
# stacks/core/root.yaml
imports:
- globals
- compliance/root/baseline
```
### Option 2: Use YAML Separators
```
# mock stack config
stages:
- a
- b
- c
components:
terraform:
compliance:
vars:
region: us-east-1
environment: ue1
---
# mock stack config
stages:
- a
- b
- c
components:
terraform:
compliance:
vars:
environment: ue2
region: us-east-1
---
# mock stack config
stages:
- a
- b
- c
components:
terraform:
compliance:
vars:
region: us-west-1
environment: uw1
```
### Option 3: Current Solution
```
###
### stack: stacks/mdev/euw3/mdw3-audit.yaml
### chain: stacks/mdev/euw3/mdw3-audit.yaml
###
```
```
import:
- mdev/mdev-globals
- euw3/euw3-audit
```
```
###
### stack: stacks/mdev/mdev-globals.yaml
### chain: stacks/mdev/euw3/mdw3-audit.yaml > stacks/mdev/mdev-globals.yaml
###
import:
- globals
vars:
tenant: mdev
terraform:
backend:
s3:
bucket: "vygr-mdev-use2-root-tfstate"
dynamodb_table: "vygr-mdev-use2-root-tfstate-lock"
role_arn: "arn:aws:iam::807952753552:role/vygr-mdev-gbl-root-terraform"
remote_state_backend:
s3:
bucket: "vygr-mdev-use2-root-tfstate"
dynamodb_table: "vygr-mdev-use2-root-tfstate-lock"
role_arn: "arn:aws:iam::807952753552:role/vygr-mdev-gbl-root-terraform"
settings:
spacelift:
worker_pool_name: vygr-mdev-use2-auto-spacelift-worker-pool
```
```
###
### stack: stacks/globals.yaml
### chain: stacks/mdev/euw3/mdw3-audit.yaml > stacks/mdev/mdev-globals.yaml > stacks/globals.yaml
###
vars:
namespace: vygr
required_tags:
- Team
- Service
tags:
# We set the default team here, this means everything will be tagged Team:sre unless otherwise specified.
# This is used because it is the default alerted team.
Team: sre
terraform:
vars:
label_order: ["namespace", "tenant", "environment", "stage", "name", "attributes"]
descriptor_formats:
stack:
format: "%v-%v-%v"
labels: ["tenant", "environment", "stage"]
# This is needed for the transit-gateway component
account_name:
format: "%v"
labels: ["stage"]
backend_type: s3 # s3, remote, vault, etc.
backend:
s3:
encrypt: true
key: "terraform.tfstate"
acl: "bucket-owner-full-control"
region: "us-east-2"
remote_state_backend_type: s3 # s3, remote, vault, etc.
remote_state_backend:
s3:
encrypt: true
key: "terraform.tfstate"
acl: "bucket-owner-full-control"
region: "us-east-2"
```
```
###
### stack: stacks/euw3/euw3-audit.yaml
### chain: stacks/mdev/euw3/mdw3-audit.yaml > stacks/euw3/euw3-audit.yaml
###
import:
- euw3/euw3-globals
vars:
stage: audit
terraform:
vars: {}
helmfile:
vars: {}
components:
terraform:
compliance:
settings:
spacelift:
workspace_enabled: false
aws-inspector:
settings:
spacelift:
workspace_enabled: false
```
```
###
### stack: stacks/euw3/euw3-globals.yaml
### chain: stacks/mdev/euw3/mdw3-audit.yaml > stacks/euw3/euw3-audit.yaml > stacks/euw3/euw3-globals.yaml
###
import:
- catalog/compliance/compliance
# - catalog/aws-inspector
# @TODO aws-inspector is not yet supported in Paris, it is likely this will change in the future
# https://docs.aws.amazon.com/inspector/v1/userguide/inspector_rules-arns.html
vars:
region: eu-west-3
environment: euw3
components:
terraform:
vpc:
vars:
availability_zones:
- "eu-west-3a"
- "eu-west-3b"
- "eu-west-3c"
eks/eks:
vars:
availability_zones:
- "eu-west-3a"
- "eu-west-3b"
- "eu-west-3c"
```
```
###
### stack: stacks/catalog/compliance/compliance.yaml
### chain: stacks/mdev/euw3/mdw3-audit.yaml > stacks/euw3/euw3-audit.yaml > stacks/euw3/euw3-globals.yaml > stacks/catalog/compliance/compliance.yaml
###
components:
terraform:
compliance:
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
tags:
Team: sre
Service: compliance
config_bucket_env: use2
config_bucket_stage: audit
config_rules_paths:
- https://raw.githubusercontent.com/cloudposse/terraform-aws-config/0.14.1/catalog/account.yaml
...
central_logging_account: audit
central_resource_collector_account: security
cloudtrail_bucket_stage: audit
cloudtrail_bucket_env: use2
create_iam_role: true
global_resource_collector_region: us-east-2
guardduty_admin_delegated: true
securityhub_admin_delegated: true
securityhub_create_sns_topic: true
securityhub_enabled_standards:
- ruleset/cis-aws-foundations-benchmark/v/1.2.0
securityhub_opsgenie_sns_topic_subscription_enabled: false # TODO, enable this once /opsgenie/opsgenie_securityhub_uri SSM param is set
securityhub_opsgenie_integration_uri_ssm_account: corp
securityhub_opsgenie_integration_uri_ssm_region: us-east-2
default_vpc_deletion_enabled: true
az_abbreviation_type: short
```
## Decision
**DECIDED**: Use **Option 1** with virtual components
## Consequences
- Refactor configurations to be more DRY
- Update the stack naming convention to avoid duplication of the environment in stack names
## References
- [Compliance Setup](/layers/security-and-compliance/)
---
## Use IPAM for IP Address Management and Allocation
**Date**: **29 Apr 2022**
:::warning Rejected!
The proposal in this ADR was rejected! For questions, please reach out to Cloud Posse.
- Too expensive without significant customer interest or value.
:::
## Status
**DRAFT**
## Problem
## Context
It’s not required to create subpools; subpools are used when you need a logical grouping (e.g. management of the subnets of a VPC).
Large enterprises want to do as much route aggregation as possible.
### Today
Today, without IPAM, for existing clients, we manage “pools” in terraform using straight subnet math:
- Pool: One supernet for the AWS organization
- Pool: Per account
- Pool: Per region VPC (with typically only one VPC per account, per region)
- Per availability zone
- public
- private
### Future
We propose managing pools in a similar manner, but introducing a pool for the OU.
- One supernet for the AWS organization
- Per OU
- Per region
- Per VPC (with typically only one VPC per account, per region) - final pool
- Per availability zone - all AZ subnets are siblings of each other, and children of the VPC
- public
- private
The more pools we create, the harder it is to leverage route aggregation.
### Use-case: Grant VPN Access in Zscaler to all non-production networks
### Use-case: Production VPC has reached 90% capacity in us-east-1 and need to add IPs
### Use-case: New production account added and needs VPCs in 2 regions
Proposal 1
```
components:
terraform:
# manage the organization's IPAM
ipam:
vars:
organization_admin_account: network
organization_pool_cidr: 10.0.0.0/8
operating_regions:
- name: ue1
cidr: 10.0.0.0/12
- name: ec1
cidr: 10.16.0.0/12
- name: ap1
cidr: 10.32.0.0/12
pools:
- name: ue1-phi-data
cidr_range: 10.0.0.0/13
parent: ue1
- name: ue1-non-phi-data
cidr_range: 10.8.0.0/13
parent: ue1
- name: ec1-phi-data
cidr_range: 10.16.0.0/13
parent: ec1
- name: ec1-non-phi-data
cidr_range: 10.24.0.0/13
parent: ec1
- name: ap1-phi-data
cidr_range: 10.32.0.0/13
parent: ap1
- name: ap1-non-phi-data
cidr_range: 10.40.0.0/13
parent: ap1
```
## Considered Options
### Option 1:
### Option 2:
### Option 3:
## Decision
**DECIDED**:
## Consequences
-
## References
-
---
## Jumpstart Design Records
import Intro from '@site/src/components/Intro';
import DocCardList from '@theme/DocCardList';
These are the design records for the Jumpstart architecture. They are helpful for understanding some of the more low-level aspects of our Jumpstart implementation.
---
## Proposed: Atmos Workflows v2
**Date**: **26 Jan 2022**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- The proposal has already been adopted, and this ADR needs to be updated to reflect the final decision. Cloud Posse recommends using workflows generally with [Atmos Workflows](https://atmos.tools/core-concepts/workflows/) and specifically with the Reference Architecture.
:::
## Status
**DRAFT**
## Problem
In the original `variant` version of `atmos`, we had the concept of workflows. These were a simple set of steps that could be executed in order to bring up an environment or execute some kind of operation. When we ported `atmos` to Golang, we didn’t carry over this functionality because it was seldom used as implemented. Updating a workflow with all its steps was cumbersome. If a workflow failed, there was no way to restart at the last failed step. And defining a workflow to both build and destroy an environment required defining two workflows (e.g. `create-env` and `destroy-env`).
## Context
## Considered Alternatives
- Use `make` (or other task runner) to call `atmos`
- Use shell scripts
## Other examples
- See `astro` by Uber (abandoned) [https://github.com/uber/astro](https://github.com/uber/astro)
- Atlantis workflows: [https://www.runatlantis.io/docs/custom-workflows.html#use-cases](https://www.runatlantis.io/docs/custom-workflows.html#use-cases)
- Terragrunt dependencies [https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/](https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/)
## Considered Options
### Option 1: Maintain the exact same interface
```
import: []
vars: {}
terraform:
vars: {}
helmfile:
vars: {}
components:
terraform:
fetch-location:
vars: {}
fetch-weather:
vars: {}
output-results:
vars: {}
print_users_weather_enabled: true
helmfile: {}
workflows:
deploy-all:
description: Deploy terraform projects in order
steps:
- job: terraform deploy fetch-location
- job: terraform deploy fetch-weather
- job: terraform deploy output-results
```
### Option 2: Workflows with parameters
```
import: []
vars: {}
terraform:
vars: {}
helmfile:
vars: {}
components:
terraform:
fetch-location:
vars: {}
fetch-weather:
vars: {}
output-results:
vars: {}
print_users_weather_enabled: true
helmfile: {}
workflows:
deploy-all:
description: Deploy terraform projects in order
steps:
- subcommand: terraform apply fetch-location
vars:
enabled: true
- subcommand: terraform apply fetch-weather
- subcommand: terraform apply output-results
destroy-all:
description: Destroy terraform projects in order
steps:
- subcommand: terraform apply output-results
vars:
enabled: false
- subcommand: terraform apply fetch-weather
vars:
enabled: false
- subcommand: terraform apply fetch-location
vars:
enabled: false
```
### Option 3: Support native dependencies between components and a `--reverse` flag
First, we add an official `depends-on` field to our stack configuration.
In this configuration `echo` → `vpc` → `eks` → `external-dns` → `cert-manager`
```
components:
terraform:
echo:
metadata:
type: abstract
hooks:
before:
- echo "Hello world!"
vpc:
component: vpc
depends-on: echo
eks:
component: eks
depends-on: vpc
external-dns:
depends-on: eks
hooks:
before-deploy:
- sleep 100
cert-manager:
depends-on: external-dns
alb-controller:
depends-on: eks
```
Provision the eks component’s workflow (and everything that depends on `eks`)
```
atmos workflow terraform apply eks
```
Decommission the eks component
```
atmos workflow terraform apply eks --reverse
```
Provision everything that depends on `eks`, which will deploy `external-dns` and then `cert-manager`
```
atmos workflow terraform apply --depends-on external-dns
```
### Option 4: Leverage Existing Go-based Task Runner Framework
Use something like `gotask` to add rich support into atmos stack configurations, without reinventing the wheel.
gotask: [https://taskfile.dev/#/](https://taskfile.dev/#/)
variant: [https://github.com/mumoshu/variant](https://github.com/mumoshu/variant)
mage: [https://github.com/magefile/mage](https://github.com/magefile/mage)
```
# Gotask example
tasks:
deploy-all:
cmds:
- echo 'Hello World from Task!'
- atmos terraform apply eks
- sleep 100
- atmos terraform apply external-dns
silent: true
```
## Decision
**DECIDED**:
## Consequences
-
## References
-
---
## Proposed: Distribution Method for GitHub Actions
**Date**: **22 Jun 2022**
:::warning Rejected!
The proposal in this ADR was rejected! For questions, please reach out to Cloud Posse.
- After this proposal was created, GitHub announced support for reusable workflows.
:::
## Status
**DRAFT**
## Problem
We need a reliable way to distribute GitHub Actions workflows to repos and keep these workflows up to date with any changes made to the GitHub Actions themselves. An example of such a workflow might be [https://github.com/cloudposse/github-action-ci-terraform/blob/main/.github/workflows/ci-terraform.yml](https://github.com/cloudposse/github-action-ci-terraform/blob/main/.github/workflows/ci-terraform.yml).
Here are 6 factors to keep in mind when designing a solution:
1. Customers (not just Cloud Posse) need to be able to use this solution to both initialize actions in their repos and update their actions later on.
2. We need our solution to be able to update all of our repos easily.
3. Not all repos need the same workflows; some repos might need all of our GitHub Actions workflows, but others might only need a subset. Our solution should distribute workflow files accordingly. (E.g., some actions might be specific to Terraform projects, and non-Terraform repos don’t need these.)
4. Our solution needs to include a (possibly, one-time) strategy for pushing out actions en masse to our Cloud Posse repos (e.g., git-xargs).
5. As a rule, too many PRs can be noisy, so ideally our solution will minimize the number of PRs needed to keep things up to date.
6. We want our solution to auto-merge PRs if the tests pass.
## Context
I, Dylan, have been writing a bunch of CI/CD-related GitHub Actions ([https://github.com/cloudposse/github-action-ci-terraform](https://github.com/cloudposse/github-action-ci-terraform), [https://github.com/cloudposse/github-action-auto-format](https://github.com/cloudposse/github-action-auto-format), [https://github.com/cloudposse/github-action-auto-release](https://github.com/cloudposse/github-action-auto-release), [https://github.com/cloudposse/github-action-validate-codeowners](https://github.com/cloudposse/github-action-validate-codeowners)), and other people, like @Igor Rodionov, have been working on GitHub Actions, too. We need a reliable distribution and maintenance strategy in order for them to be usable. As it stands, the easiest way to add them to a repo is for someone to manually copy a workflow file from each action repo into the repo of interest. For maintenance, the state of the art is manually checking whether there have been updates to each GitHub Action repo. Needless to say, these strategies could be improved upon.
## Considered Options
### Option 1: Pull distribution (w/ or w/o centralized workflow file repo)
The key here is creating a GitHub Action whose whole purpose is to distribute and keep current the workflows of all other GitHub Actions in all repos: `github-action-distributor`. (Very similar actions already exist, e.g., [https://github.com/marketplace/actions/repo-file-sync-action](https://github.com/marketplace/actions/repo-file-sync-action) and [https://github.com/marketplace/actions/push-a-file-to-another-repository](https://github.com/marketplace/actions/push-a-file-to-another-repository).) In this proposal, the `github-action-distributor` would copy all GitHub Actions workflows directly from their home repos (e.g., [https://github.com/cloudposse/github-action-validate-codeowners/blob/main/.github/workflows/validate-codeowners.yml](https://github.com/cloudposse/github-action-validate-codeowners/blob/main/.github/workflows/validate-codeowners.yml)) to their destination repos (e.g., [https://github.com/cloudposse/terraform-example-module](https://github.com/cloudposse/terraform-example-module)).
- **One-time, internal org-level distribution strategy:**
- Use a `git-xargs` command to distribute a GitHub Actions workflow for a `github-action-distributor`. The purpose of the `github-action-distributor` is to propagate all the appropriate GitHub Actions workflows to a repo and keep them up to date using a cron job (see the sketch after this list).
- **Customer repo-level distribution strategy:**
- Customers can manually (or using a tool like `git-xargs`, I suppose) distribute the `github-action-distributor` workflow to each repository they would like to add GitHub Actions workflows to.
- **Internal and customer repo-level update strategy:**
- Whenever a new version of a GitHub Actions workflow is released, a change can be made to the `github-action-distributor` (using `renovate.json` ideally, or manually) to distribute the new version of that workflow from then on, and this change will be reflected in all downstream repos the next time their `github-action-distributor` cron job runs. Alternatively, we can pin the version of the `github-action-distributor` action used in a given repo, so that the versions of all GitHub Actions workflows in that repo are known and stable.
There are two **variants** of this option, depending on whether we copy the workflow files for the GitHub Actions into their own repo:
1. We could have action maintainers manually copy the workflow file(s) for the action(s) they maintain to a centralized repo (e.g., `cloudposse/actions`) and have the `distributor` action pull whatever is in that repo into the end-user repos. In order to update what workflows are being distributed, someone would just copy new workflows to the centralized workflow repo.
2. We could have the `distributor` action pull workflows from their home repos (e.g., pulling `.github/workflows/auto-format.yml` from `cloudposse/github-action-auto-format`). In order to update what workflows are being distributed, someone would update the version tags that would be hardcoded into the `distributor` action.
**NB:** It should be possible to implement the `github-action-distributor` as just a piece of functionality within a larger, existing GitHub Action. For example, the `github` [target](https://github.com/cloudposse/github-action-auto-format/blob/main/scripts/github/format.sh) inside the `auto-format` action essentially fulfills this role right now by copying the desired workflows from the `cloudposse/.github` repo.
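For illustration, a minimal sketch of what the workflow dropped into each end-user repo might look like. The `cloudposse/github-action-distributor` action does not exist yet; its name, inputs, and schedule here are assumptions of this proposal.
```
# Hypothetical end-user workflow for the proposed distributor action.
# The action name and schedule are illustrative only.
name: distribute-workflows
on:
  schedule:
    - cron: "0 5 * * 1" # refresh vendored workflow files weekly
jobs:
  distribute:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Sync pinned workflow files from their home repos
        uses: cloudposse/github-action-distributor@v1 # proposed, not yet published
```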
### Option 2: Push distribution (w/ or w/o centralized workflow file repo)
Similar to option 1 above, there would be a GitHub Action, `github-action-distributor`, whose purpose is to distribute GitHub Action workflow files to end-user repos. Also similar to Option 1, this action would be compatible with centralized and decentralized workflow organizational strategies (see “**variants**” above).
In this proposal, though, the `distributor` would behave differently. Instead of being added to each end-user repository (e.g., `cloudposse/terraform-aws-components`, `cloudposse/build-harness`, `cloudposse/atmos`, etc.), it would only run in one repository (the centralized workflow file repo, in the case where we use that strategy), or in a small number of repositories (each of the `github-action-*` repositories, if we decline to use a centralized workflow file repo). Whenever a workflow file is updated inside a repo that has the `distributor` action added, that updated workflow will be pushed out to either all `cloudposse/*` repos, or a logical subset of them, depending on the specific action. The net result is that PRs would be opened in the end-user repos and automatically merged, all by the `distributor` action.
This option comes with the advantage, relative to Option 1, of being much simpler for Cloud Posse to bootstrap, since the manual distribution of the `distributor` workflow file is limited to fewer than ~10 repos, few enough to easily be done by hand. However, there would be no bootstrapping process for third parties; they would need to find their own methods, likely by implementing something like Option 1 for themselves.
### Option 3: Using internal GitHub functionality
It looks like there may be a way to distribute sample actions to an org’s repos via the `[org]/.github` repo, but this functionality is not well-documented, if it exists at all, and even then it would probably require opening a number of PRs on each repo to bootstrap. (If there is more interest in this, I (Dylan) can look into it further.)
One point worth noting is that this approach would lead to the same workflows being distributed to all (or nearly all) repos in the `cloudposse` GitHub org. This means that all actions/workflows need to detect as early as possible whether they’re going to do anything useful on a given repo (e.g., running `terraform fmt` would be completely unnecessary in a non-Terraform repo) and exit ASAP, so as not to tie up GitHub runners unnecessarily.
## Decision
**DECIDED**:
## Consequences
-
## References
-
-
---
## Proposed: Spacelift Admin Stack Architecture
**Date**: **19 Oct 2021**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- Cloud Posse refactored our Spacelift implementation with [PR #717](https://github.com/cloudposse/terraform-aws-components/pull/717) and has since updated the documentation included with the components. [More on Spacelift](/components/library/aws/spacelift/).
:::
## Status
**DRAFT**
## Problem
1. Human error - Stages that should not have Spacelift access can accidentally have Spacelift enabled for their stacks. This is an easy mistake to make.
2. High blast radius - Spacelift admin stacks have a significant blast radius, since a single admin stack controls all of the infrastructure in an entire organization.
3. Admin stack errors - The Spacelift admin stack may error out in the middle of an apply because one of the stacks is currently in use, preventing the remaining stacks from being modified. This requires rerunning the admin stack.
4. High-priority fixes can be queued - If we need to ship a high-priority fix, and the change is in `globals.yaml` or a similar import that we want deployed to prod first, there is no way to prioritize it; it has to wait for all the queued stacks to finish. There is no sense of priority.
5. The same policies and configuration are associated with every Spacelift stack.
6. Not all stacks can be shown in Spacelift.
## Context
## Considered Options
### Option 1: Single admin stack
Pros:
- Consistency
Cons:
- Significant blast radius (problem 2)
- Forces a single worker pool (problem 4)
- The same policies and configuration are associated with every Spacelift stack (problem 5)
### Option 2: Multi admin stack
Segment on `-`
We can use [var.context_filters](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation#input_context_filters) on the spacelift component to explicitly say that we only want to capture spacelift stacks for stages like `auto`, `corp`, `dev`, `staging`, etc.
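As a sketch of the idea, the stack config for an `auto`-only admin stack might look like the following; the exact shape of `context_filters` should be verified against the module's inputs, and the component name here is illustrative.
```
# Hypothetical admin stack that only captures `auto` stacks
components:
  terraform:
    spacelift-auto:
      vars:
        context_filters:
          stage:
            - auto
```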
Pros:
- Using context_filters, we can capture specific stages for specific admin stacks. e.g. admin stack for `auto` can use a filter for only `auto` stacks (solves problem 1)
- Limited blast radius (solves problem 2)
- Reduces admin stack issues (reduces problem 3)
- Option to use multiple worker pools, so for high-priority items you can have a separate worker pool for prod vs. dev (solves problem 4)
- Allows policy and configuration on a per stage basis (solves problem 5)
Consequences:
- To make admin stack creations easier, we would need to codify the spacelift admin stack and reuse the [stack submodule](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation/tree/master/modules/stack)
### Option 3: Read-write only policies
Pros:
- Consistency
Cons:
- Have to ensure that security, audit, root, identity do not have spacelift stacks enabled (problem 6)
### Option 4: Read-write policies for some and Read for others
Option 2 (multiple admin stacks) could be a prerequisite for this; however, with some Terraform magic, we could add different policies for different accounts.
Pros:
- All stacks can be shown in spacelift (solves problem 6)
- Allows read-write policies for dev, qa, auto, corp, etc
- Allows read for security, audit, root, identity
Cons:
- If we went with a single admin stack, this would require changes to the Cloud Posse Spacelift module to change policies based on some input.
### Option 5: Single worker pool
Pros:
- Single worker pool so costs don’t have to be managed across multiple ASGs
Cons:
- High priority changes may be queued behind other changes (problem 4)
### Option 6: Multiple worker pools
This is currently solved for one of our customers by using multiple admin stacks.
Option 2 (multiple admin stacks) could be a prerequisite for this; however, with some Terraform magic, we could add different worker pools for different accounts.
Pros:
- High priority changes can be delivered faster if changes need to go into prod first. Prod can have its own worker pool and non-prod can all share a separate pool.
Cons:
- Costs could get out of control with too many worker pools; more workers means higher cost.
- If we went with a single admin stack, this would require changes to the Cloud Posse Spacelift module to change worker pools based on some input.
### Option 7: Combination - Multi admin stacks, read/write policies for some and read policies for others, and multiple worker pools
We could do option 2, have multiple admin stacks to solve problems 1 to 5.
We could do option 4, have read/write policies for some and read-only for others, to solve problem 6.
We could do option 6, multiple worker pools. One worker pool for prod (min 1, max 10) and one worker pool (min 1, max 10) for all others. This solves problem 4.
## Decision
**DECIDED**:
Loose decision for cplive
- Organize admin stacks around teams because
- overall we’re adopting a strategy where components are organized around teams where it makes sense e.g. opsgenie-team, datadog, and soon iam
- Teams are an easy construct for people to grok
- (optional) single spacelift worker pool
- ideally each admin stack associated with a dedicated worker pool
- worker pools can more easily be granted more narrowly scoped iam roles e.g. security stack mapped to security team with a security worker pool
- Spacelift is introducing spaces (end of July 2022), which map to teams, which map to worker pools
## Consequences
-
-
-
## References
-
---
## Proposed: Use Atmos Registry
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- Cloud Posse uses the [terraform-aws-components](https://github.com/cloudposse/terraform-aws-components) repository with vendoring similarly to how a registry might behave. For more on vendoring, see [atmos vendor pull](https://atmos.tools/cli/commands/vendor/pull).
:::
## Problem
We need a way to centralize cloudposse components for reuse across all customers. We have cloudposse/terraform-aws-components, but we do not use it as a source of truth. As a result, maintaining our vast library of components is challenging.
We need some way to discover components to avoid duplication of effort. Providing a registry is a common characteristic among successful languages and tools (e.g. `python` has PyPI, `perl` has CPAN, `ruby` has RubyGems, `docker` has DockerHub, `terraform` has the Terraform Registry).
Additionally, we need some way to easily create new components (e.g. from a template).
## Solution
Implement a GitHub-based “registry” for components, stacks, and mixins. Use a generator pattern (e.g. like cookiecutter), but make it natively built into atmos. (E.g. see [https://github.com/tmrts/boilr](https://github.com/tmrts/boilr) for inspiration, but anything we do should be a re-implementation with a very nice UI).
### TODO: inconsistencies
- Mixins (YAML vs Terraform)
- Registry usage (search, generating components)
### TODO: explanations
- why even care about templating?
- why add components to `--stack`
### Use-case #1: Pull down existing components
Imagine a command like this...
```
atmos component generate terraform/aurora-postgres \
--stack uw2-dev \
--source cloudposse/terraform-aws-components//modules/aurora-postgres \
--version 1.2.3
```
1. It will download the component
2. It will prompt the user for any information needed, providing sane defaults. It will save the user’s answers, so subsequent generations persist state.
3. It will add a component configuration to the `uw2-dev` stack if none is found
4. User commits to VCS
```
atmos component generate terraform/eks \
--source cloudposse/terraform-aws-components//modules/eks \
--version 1.2.3
atmos stacks generate stacks/catalog/eks \
--source cloudposse/terraform-aws-stacks//catalog/eks-pci \
--version 1.2.3
```
### Use-case #2: Initialize a new component from the component template for AWS
This will create a new component from some boilerplate template and add it to the `uw2-dev` stack.
```
atmos component generate terraform/my-new-component \
--stack uw2-dev \
--source cloudposse/terraform-aws-component-template
```
### Use-case #3: Mixins
This will create a `context.tf` in `components/terraform/my-new-component`:
```
atmos mixins generate components/terraform/my-new-component/context.tf \
--source cloudposse/terraform-aws-components/mixins/context.tf
```
### Use-case #4: List & Search for Components, Stacks, and Mixins
The `--filter` argument is used to filter the results.
Search for all EKS components.
```
atmos component registry list --filter eks
```
Search for all EKS stacks
```
atmos stack registry list --filter eks
```
Search for all mixins
```
atmos mixin registry list --filter context
```
### Use-case #5: Add registries
```
atmos component registry add cloudposse/terraform-aws-components
```
Add our reference architecture registry
```
atmos stack registry add cloudposse/refarch
```
The `atmos.yaml` contains:
```
components:
terraform:
registries:
- cloudposse/terraform-aws-components
stacks:
registries:
- cloudposse/refarch
```
### Use-case #6: Configuration
```
import:
- uw2-globals
vars:
stage: testplatform
terraform:
vars: {}
helmfile:
vars:
account_number: "199589633144"
components:
terraform:
# this will download all components into `aws-component/0.141.0`
# it's abstract because: atmos terraform apply aws-component doesn't make sense
"cloudposse/terarform-aws-components/0.141.0":
metadata:
type: abstract
source: https://github.com/cloudposse/terraform-aws-components//modules
version: 0.141.0
# this will run aurora-postgres from `aws-component/0.141.0/aurora-postgres`
aurora-postgres:
component: "cloudposse/terraform-aws-components/0.141.0/aurora-postgres"
mixins: # calling this mixins is confusing
# this will upgrade the context.tf
- file: context.tf
source: https://github.com/cloudposse/terraform-aws-components/mixins/context.tf
version: 1.2.3
vars:
# https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html
instance_type: db.r4.large
cluster_size: 1
cluster_name: main
database_name: main
    # this will run eks using its own metadata source below
eks:
component: "eks"
mixins:
metadata:
type: real
source: https://github.com/gruntwork/terraform-aws-components//modules/eks
version: 1.0
vars:
# https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html
instance_type: db.r4.large
cluster_size: 1
cluster_name: main
database_name: main
helmfile:
cert-manager:
metadata:
type: real
source: https://github.com/cloudposse/helmfiles/cert-manager/
version: 1.0
vars:
..
```
### Use-case #7: easy upgrades
```
atmos component upgrade terraform/aws-component --version latest
```
1. This will update `version: 0.141.0` to the latest (e.g. `0.142.0`)
2. Then pull down the `latest` version of the component (in this case, the entire component library) and write it to `components/terraform/cloudposse/terraform-aws-components/0.142.0`
3. User commits to VCS
Optionally, the old version can be purged:
```
# --update: update any derived component versions to use 0.142.0
# --purge: delete previous versions
# --commit: idea: git commit these changes (what about branch, pr, etc)
atmos component upgrade terraform/cloudposse/terraform-aws-components \
--version latest \
--update \
--purge \
--commit
```
### Use-case #8: diverge from cloudposse component
```
cp -a components/terraform/cloudposse/terraform-aws-components/aurora-postgres \
components/terraform/aurora-postgres
```
### Use-case #9: diff
```
atmos component upgrade --diff --use-defaults
```
### Use-case #10: refarch
```
atmos generate cloudposse/refarch/multi-account-eks-pci
```
1. pull down the refarch for a multi-account PCI compliance refarch
2. It will prompt the user for all the inputs
3. It will generate all the configs and components
4. User commits to VCS
---
## Proposed: Use AWS Federated IAM over AWS SSO
**Date**: **19 Oct 2021**
:::warning Rejected!
The proposal in this ADR was rejected! For questions, please reach out to Cloud Posse.
- Customers overwhelmingly prefer AWS SSO. We continue to use both AWS Federated IAM with the `aws-saml` component and use AWS SSO with the `aws-sso` component. However, customers typically use AWS SSO themselves and grant Cloud Posse access by AWS Federated IAM.
:::
## Status
**IN PROGRESS** @Jeremy Grodberg working on this one.
## Context
:::info
AWS Federated IAM and AWS SSO can coexist and are not mutually exclusive.
:::
### AWS SSO
#### Pros
- Native support via the AWS cli (e.g. `aws sso login` command)
- It is nice that you can define a permission set once and deploy it to all the accounts (but we can do that with Terraform about as easily)
#### Cons
- Must have a profile to log into and use a permission set
- Cannot set IAM permissions boundary
- Cannot attach customer-managed IAM policies
- Cannot set `SourceIdentity`
- Cannot use as a Principal in IAM policies because they are transient (**a particular problem with EKS access**)
- Cannot have more than one IdP
- In our use cases, the IdP is still a SAML app requiring a SAML connector
### AWS Federated IAM with SAML
```
# Note that you cannot update the aws-sso component if you are logged in via aws-sso.
# Use the SuperAdmin credentials mentioned in docs/cold-start.md (as of now, stored in 1password)
aws-saml-sso:
settings:
spacelift:
workspace_enabled: false
vars:
idps:
cloudposse:
acme:
roles:
terraform-prod:
policy: ..
terraform-nonprod: ..
account_assignments:
artifacts:
grants:
- cloudposse:
- terraform-nonprod
- acme
- terraform-nonprod
- terraform-prod
dev:
direct_idp:
- cloudposse:
- terraform-nonprod
grants:
- acme
- terraform-nonprod
- terraform-prod
```
#### Pros
- Full control over the implementation
- No problems working with EKS and terraform
#### Cons
- GSuite does not support mapping group attributes to SAML attributes (But they don't really solve it for AWS SSO either, and if you can script the GSuite API you can achieve the same effect.)
- Our current implementation with [iam-primary-roles](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated/iam-primary-roles) and [iam-delegated-roles](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated/iam-delegated-roles) is outdated and should be updated to use the interface we developed for AWS [sso](/components/library/aws/identity-center/).
### When to use AWS SSO?
AWS SSO is ideally suited for business users of AWS that interact with the AWS Web Console. It does work well with the `aws` CLI, but not together with EKS.
### When to use AWS Federated IAM with SAML?
AWS Federated IAM is ideally suited for organizations that need to use multiple Identity Providers (IdPs). It’s also better suited for managing the IAM RBAC mapping with EKS, due to limitations in how AWS manages the `aws-auth` `ConfigMap`, which has no AWS API. [https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html)
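For reference, the mapping in question lives in the `aws-auth` `ConfigMap` inside the cluster, which must be edited directly since no AWS API manages it (the account ID and role below are placeholders):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/acme-terraform-prod # placeholder
      username: terraform-prod
      groups:
        - system:masters
```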
## Decision
**DECIDED**:
## Consequences
-
## References
-
---
## Proposed: Use Defaults for Components
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- The proposal has already been adopted, and this ADR needs to be updated to reflect the final decision.
:::
## Status
**DRAFT**
## Problem
There are many places to set configuration, with various levels of precedence:
- As defaults to `variable { ... }` blocks in `.tf` files
- As `defaults.auto.tfvars` (or a similar `.tfvars` file)
- As configuration in `stacks/` or catalogs like `stacks/catalog/...`
- As environment variables (e.g. `TF_VAR_...`)
- As arguments to `terraform plan` or `terraform apply` (e.g. `terraform plan -var foo=bar`)
Developers are confused about where to set variables without a consistent, opinionated convention on how to do it.
Running `atmos describe`, it’s easy to see the deep-merged configuration after all imports have been consumed; however, it doesn’t show what defaults are set in `variable { ... }` blocks or `.tfvars` files.
```
# Ideal outcome (not yet supported)
vars:
enabled: true # default.auto.tfvars
nodes: 10 # stacks/catalog/eks/defaults.yaml
min_size: 3 # stacks/uw2-prod.yaml
name: eks # components/terraform/variables.tf
```
## Context
In Terraform 0.11, regular `*.tf` files were [loaded in alphabetical order](https://www.terraform.io/docs/configuration-0-11/load.html), and then override files were applied.
When invoking any command that loads the Terraform configuration, Terraform loads all configuration files within the directory specified in alphabetical order. Override files are the exception, as they're loaded after all non-override files, in alphabetical order.
In the newer Terraform 0.12, the load order of `*.tf` files is [no longer specified](https://www.terraform.io/docs/configuration/index.html#configuration-ordering). Behind the scenes (in both versions), Terraform reads all of the files in a directory and then determines a resource order that makes sense ignoring the order the files were actually read.
Terraform automatically processes resources in the correct order based on relationships defined between them in configuration, and so you can organize resources into source files in whatever way makes sense for your infrastructure.
In TF 0.11, the `auto.tfvars` files were loaded in alphabetical order.
In TF 0.12 and newer, they are documented as loading in an unspecified order; in practice the order depends on the file system, so it may happen to be alphabetical but cannot be relied upon.
**NOTE:** This is a convincing reason to define ALL config in YAML files and not to have `default.auto.tfvars` at all, especially not to have many `auto.tfvars` files in the same folder with different names and conflicting settings inside them.
In this case, the order of operations is NOT defined, and it could succeed in one place (`atmos`) but fail in another (Spacelift).
Thus our current recommendation is to remove all `*.auto.tfvars` files (e.g. `default.auto.tfvars` and `variables-helm.auto.tfvars`) and put all of the component's configuration into the YAML stack config. This not only solves the issue described above, but also allows seeing ALL variables for the component when executing the `atmos describe component argo-workflows --stack mgmt-uw2-sandbox` command (if some variables are in `auto.tfvars` files, the command will not see them).
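For illustration, moving values out of `default.auto.tfvars` into the stack config might look like this (the component and variable names are hypothetical):
```
# Before: components/terraform/argo-workflows/default.auto.tfvars
#   enabled       = true
#   chart_version = "1.2.3"
#
# After: the same values live in the YAML stack config
components:
  terraform:
    argo-workflows:
      vars:
        enabled: true
        chart_version: "1.2.3"
```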
## Considered Options
### Option 1: Only use Stacks and Catalogs
Avoid all use of `.tfvars` and just use stacks and catalogs. Avoid any defaults in `variable { ... }` blocks. Default `enabled` to `false` in `default.auto.tfvars`.
### Option 2: Use Stacks, Catalogs and `.tfvars`
Place the majority of defaults in `.tfvars` files with sane values, and only create archetypes in catalogs. Default archetypes to be `enabled`.
## Decision
- Do not put defaults in `defaults.auto.tfvars`
- **Exception**: The `spacelift` component which requires it
- **Exception**: Helm releases should have chart name, repo and version in `default.auto.tfvars`
- Add defaults to `catalog/$component/baseline.yaml` where `$component` usually refers to a component in `components/terraform/$component`
- These would generally be of `metadata.type=abstract`
- Every component needs an `enabled` flag, which gets passed to all modules and also toggles any resources in the component itself
- The `enabled` flag should be set to `true` in the `catalog/$component/baseline.yaml`
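A minimal sketch of such a baseline, assuming a hypothetical `eks` component at `components/terraform/eks`:
```
# stacks/catalog/eks/baseline.yaml (hypothetical)
components:
  terraform:
    eks:
      metadata:
        type: abstract # baselines are imported, never deployed directly
      vars:
        enabled: true
```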
### Consequences
- Provide a way to generate the baseline configurations from the `.tfvars` and/or the `variables { ... }`
- Provide a way to visualize where all the imports happen (almost like a `git blame`)
### References
- [https://stackoverflow.com/questions/59515702/multiple-tf-files-in-a-folder](https://stackoverflow.com/questions/59515702/multiple-tf-files-in-a-folder)
- [https://www.terraform.io/language/configuration-0-11/load#load-order-and-semantics](https://www.terraform.io/language/configuration-0-11/load#load-order-and-semantics)
---
## Proposed: Use GitHub Actions with Atmos
**Date**: **14 Apr 2022**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- The proposal has already been adopted, and this ADR needs to be updated to reflect the final decision.
:::
## Status
**DRAFT**
## Problem
Smaller, bootstrappy startups don’t have the budget for Spacelift. There are also things we can do in atmos (e.g. workflows) that are easier to implement in conventional ways than with the Rego-based approach in Spacelift.
## Context
## Considered Options
- Update atmos to natively support `git diff` to determine what changed in a branch HEAD commit relative to any other BASE commit, for the purpose of strategic plan/apply of YAML stack configurations
- For our GitHub Action, we’ll want to detect the last successful commit applied against the default branch. This can be accomplished by querying the GitHub API.
- Provision a private, restricted S3 bucket to store Terraform planfiles (or use DynamoDB? to facilitate locking, discovery of the latest planfile, and invalidation of old planfiles)
- Use lifecycle rules to expunge old plans
- Institute branch protections with `CODEOWNERS`
- Implement support for manual deployment approvals
[https://docs.github.com/en/enterprise-cloud@latest/actions/managing-workflow-runs/reviewing-deployments](https://docs.github.com/en/enterprise-cloud@latest/actions/managing-workflow-runs/reviewing-deployments)
- Implement environment protection rules
[https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/targeting-different-environments/using-environments-for-deployment#environment-protection-rules](https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/targeting-different-environments/using-environments-for-deployment#environment-protection-rules)
- Create private, shared GitHub Actions workflows for:
- `atmos terraform plan`
- Develop a github action that will use a service account role for the runner (or OIDC) to run a terraform plan
- Store the planfile in S3 (or dynamo)
- Comment on the PR with a pretty depiction of what will happen for every affected stack
- Upon merge to main, trigger a GitHub `deployment` with approval step
See [https://github.com/suzuki-shunsuke/tfcmt](https://github.com/suzuki-shunsuke/tfcmt) for inspiration
- Include support for `terraform plan -destroy` for deleted stacks or components
- `atmos terraform apply`
- Upon approval, trigger an “apply” by pulling down the corresponding “planfile” artifact from S3; there may be more than one planfile; abort if there is no planfile
- Run atmos terraform apply on the planfiles in the appropriate order
- Discard planfiles upon completion
- Workflows for complex, coordinated sequences of operations (e.g. to bring up a full stack, one component at a time)
- Conflict resolution
- Locking strategy for components / planfiles. How do we determine the latest planfile?
- Implement a GitHub Action that, when used together with branch protections, prevents merging of pull requests if other unconfirmed changes are pending deployment and affect the same stacks.
- This is the most complicated part of the solution. A “one cancels all” type of strategy will probably need to be implemented. We have to ensure planfiles are applied in sequential order, and any planfile needs to be invalidated (or replanned) if upstream commits are made affecting the same stacks.
- Drift Detection
- Implement cron-based automatic replans of all infrastructure under management (see the sketch after this list).
[https://docs.github.com/en/enterprise-cloud@latest/actions/using-workflows/workflow-syntax-for-github-actions#onschedule](https://docs.github.com/en/enterprise-cloud@latest/actions/using-workflows/workflow-syntax-for-github-actions#onschedule)
- Trigger webhook callbacks when drift is detected (e.g. escalate to Datadog)
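A sketch of the cron-based replan idea, assuming self-hosted runners are already configured; the schedule, component, and stack names are illustrative:
```
# Hypothetical drift-detection workflow; a real implementation would
# iterate over all stacks and escalate (e.g. to Datadog) on pending changes
name: drift-detection
on:
  schedule:
    - cron: "0 6 * * *" # replan daily at 06:00 UTC
jobs:
  replan:
    runs-on: self-hosted-runner
    steps:
      - uses: actions/checkout@v2
      - name: Replan a stack and inspect for drift
        run: atmos terraform plan eks --stack uw2-dev
```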
### Risks
- GitHub Actions does not provide a great dashboard overview of workflow runs. Mitigated by something like [https://github.com/chriskinsman/github-action-dashboard](https://github.com/chriskinsman/github-action-dashboard)
## Mocking GitHub Action Workflows
Checks UI. Each job is a component. Each step is an environment.
```
name: "plan"
on:
pull_request:
types: [opened, synchronize, reopened]
paths:
- stacks/*
- components/*
jobs:
atmos-plan:
runs-on: self-hosted-runner
steps:
- name: "Checkout source code at current commit"
uses: actions/checkout@v2
- name: Atmos do everything
        run: atmos plan do-everything
- name:
id: prepare
env:
LATEST_TAG_OS: 'alpine'
BASE_OS: ${{matrix.os}}
run: |
```
## Decision
**DECIDED**:
## Consequences
-
## References
- [https://blog.symops.com/2022/04/14/terraform-pipeline-with-github-actions-and-github-oidc-for-aws/](https://blog.symops.com/2022/04/14/terraform-pipeline-with-github-actions-and-github-oidc-for-aws/)
---
## Proposed: Use Global Filename Convention
**Date**: **29 Apr 2022**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- The proposal has already been adopted, and this ADR needs to be updated to reflect the final decision. We decided on changing the convention to `_defaults`.
:::
## Status
**DRAFT**
## Problem
- There are a lot of `globals` files scattered across many folders and subfolders. The meaning of the globals file is the same everywhere, but the context in which it is used is by convention based on its location in the filesystem.
- For some people working in an IDE (e.g. JetBrains, VSCode), it’s easy to get confused about which globals file is being edited.
- For some people working on the command line, if the prompt doesn’t show the full hierarchy, knowing which globals file is being edited is also not clear. Including the full path in the prompt could get too long.
## Context
## Considered Options
### Option 1: `globals.yaml` (what we have today)
:heavy_minus_sign: easily confused with stacks; globals are intended to only be imported
### Option 2: `_globals.yaml`
:heavy_plus_sign: disambiguate easily between globals (or imports) from stacks using a `_` prefix convention
:heavy_minus_sign: doesn’t solve the disambiguation between identically named files in different folders
### Option 3: `$subpath1-$subpath2-globals.yaml` (e.g. `eg-prod-globals.yaml`) - or something like it
:heavy_minus_sign: adds to the naming convention overhead. Moving files around requires renaming files.
### Option 4: Find alternatives for IDE
:heavy_plus_sign: Enable sorting files first over folders, and using `_globals` will place them at the top
:heavy_plus_sign: Enable the path depth
:heavy_plus_sign: Rainbow plugin
:heavy_plus_sign: Ship a `.vimrc`, `.vscode`, `.emacs`, etc. file as a baseline
## Decision
**DECIDED**:
## Consequences
-
## References
- JetBrains has a setting to order files before folders
---
## Proposed: Use ISO-8601 Date Index for ADRs
**Date**: **19 Oct 2021**
:::warning Rejected!
The proposal in this ADR was rejected! For questions, please reach out to Cloud Posse.
:::
## Status
**PROPOSED**
## Context
Using an auto-incrementing index for ADRs is the conventional way of indexing them. The problem is that when we have multiple open PRs, the indexes frequently conflict, forcing team members to update their PRs.
Using ISO-8601 dates will accomplish the same purpose of an incrementing ID, but avoid needing to renumber all ADRs based on PR merge order.
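For example, under this scheme an ADR proposed on 19 Oct 2021 would be indexed as `2021-10-19` and might live in a file named `2021-10-19-use-iso-8601-date-index-for-adrs.md` (the filename is illustrative).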
### Pros
- Auto incrementing based on date
- No conflicts, assuming we consider the (date, summary) pair to be the ID.
### Cons
- We can no longer refer to ADRs by a single number (E.g. `ADR 0007`). Maybe we refer to the corresponding `REFARCH` ticket instead.
## Decision
**DECIDED**: Use ISO-8601 dates for ADR index
## Consequences
- Write all new ADRs using date format.
- Recommend updating existing ADRs with date
- Regenerate the table of contents
## References
- [Architectural Design Records (ADRs)](/resources/adrs)
---
## Proposed: Use Mixins to DRY-up Components
**Date**: **11 Mar 2022**
:::warning Rejected!
The proposal in this ADR was rejected! For questions, please reach out to Cloud Posse.
- Cloud Posse does use mixins, but generally they are avoided. Instead, we recommend [using the override pattern](/learn/component-development#how-can-terraform-modules-or-resources-be-added-to-a-component). This ADR should be updated to reflect the latest decision.
:::
## Status
**DRAFT**
## Problem
Many Terraform components are not [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) from component to component because they contain the same boilerplate for achieving similar functions: configuring variables or the Helm provider for an EKS component, creating a SopsSecret for an EKS component, etc.
## Considered Options
A Terraform mixin (inspired by the [concept of the same name in OOP languages such as Python and Ruby](https://en.wikipedia.org/wiki/Mixin)) is a Terraform configuration file that can be dropped into a root-level module, i.e. a component, in order to add additional functionality.
Mixins are meant to encourage code reuse, leading to simpler components with less code repetition from component to component.
### Proposed Mixins
#### Mixin: `infra-state.mixin.tf`
Code: [https://github.com/cloudposse/terraform-aws-components/blob/6dc766d848306d6ce3ddb1a86bc26822b30ce56f/mixins/infra-state.mixin.tf](https://github.com/cloudposse/terraform-aws-components/blob/6dc766d848306d6ce3ddb1a86bc26822b30ce56f/mixins/infra-state.mixin.tf)
This mixin is meant to be placed in a Terraform configuration outside the organization's infrastructure monorepo in order to:
1. Instantiate an AWS Provider using roles managed by the infrastructure monorepo. This is required because Cloud Posse's `providers.tf` pattern requires an invocation of the `account-map` component’s `iam-roles` submodule, which is not present in a repository outside of the infrastructure monorepo.
2. Retrieve outputs from a component in the infrastructure monorepo. This is required because Cloud Posse’s `remote-state` module expects a `stacks` directory, which will not be present in other repositories, so the monorepo must be cloned via a `monorepo` module instantiation.
Because the source attribute in the `monorepo` and `remote-state` modules cannot be interpolated and refers to a monorepo in a given organization, the following dummy placeholders have been put in place upstream and need to be replaced accordingly when "dropped into" a Terraform configuration:
1. Infrastructure monorepo: `github.com/ACME/infrastructure`
2. Infrastructure monorepo ref: `0.1.0`
#### Mixin: `introspection.mixin.tf`
Code: [https://github.com/cloudposse/terraform-aws-components/blob/6dc766d848306d6ce3ddb1a86bc26822b30ce56f/mixins/introspection.mixin.tf](https://github.com/cloudposse/terraform-aws-components/blob/6dc766d848306d6ce3ddb1a86bc26822b30ce56f/mixins/introspection.mixin.tf)
This mixin is meant to be added to Terraform components in order to append a `Component` tag to all resources in the configuration, specifying which component the resources belong to.
It's important to note that all modules and resources within the component then need to use `module.introspection.context` and `module.introspection.tags`, respectively, rather than `module.this.context` and `module.this.tags`.
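A short sketch of the convention; the resource and module below are illustrative:
```
# Inside a component that includes introspection.mixin.tf
module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  context = module.introspection.context # not module.this.context
}

resource "aws_ssm_parameter" "example" {
  name  = "/example/setting"
  type  = "String"
  value = "example"
  tags  = module.introspection.tags # includes the Component tag
}
```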
#### Mixin: `sops.mixin.tf`
Code: [https://github.com/cloudposse/terraform-aws-components/blob/6dc766d848306d6ce3ddb1a86bc26822b30ce56f/mixins/sops.mixin.tf](https://github.com/cloudposse/terraform-aws-components/blob/6dc766d848306d6ce3ddb1a86bc26822b30ce56f/mixins/sops.mixin.tf)
This mixin is meant to be added to Terraform EKS components which are used in a cluster where sops-secrets-operator (see: [https://github.com/isindir/sops-secrets-operator](https://github.com/isindir/sops-secrets-operator) ) is deployed. It will then allow for SOPS-encrypted SopsSecret CRD manifests (such as `example.sops.yaml`) placed in a `resources/` directory to be deployed to the cluster alongside the EKS component.
This mixin assumes that the EKS component in question follows the same pattern as `alb-controller`, `cert-manager`, `external-dns`, etc. That is, that it has the following characteristics:
1. Has a `var.kubernetes_namespace` variable.
2. Does not already instantiate a Kubernetes provider (only the Helm provider is necessary, typically, for EKS components).
#### Mixin: `helm.mixin.tf`
Code: TODO
This mixin is meant to be added to Terraform EKS components and performs the following functions:
1. It provides consistent boilerplate for Helm charts, i.e. all of the Terraform variables required to configure a Helm chart and its version.
2. It instantiates the Helm provider and the Kubernetes provider, and all of the variables to override it, including toggling of the Helm Provider’s experimental manifest feature.
This mixin does _not_ instantiate the `helm-release` module itself. Rather, it encapsulates all of the boilerplate required to do so. The reason is that the module instantiation is unique to each component, which has its own intuitive interface for setting up policies for the IRSA role, etc.
This mixin also assumes that EKS components will contain some values in `defaults.auto.tfvars` which do not frequently change but can still be overridden by YAML stack configs, such as the chart repository and chart version. The benefit of this is that tools such as [renovatebot](https://github.com/renovatebot) can automatically increment the Helm chart version if these values are within `defaults.auto.tfvars` rather than the YAML stack config. Additionally, the variables within `helm.mixin.tf` need defaults for these values, but these defaults should not exist within the variable declaration blocks themselves, since they are unique per component, and the end user of the component should not always have to provide a YAML stack config with values for variables that do not frequently change from user to user.
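For example, the `defaults.auto.tfvars` for a hypothetical `alb-controller` component might contain: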
```
name = "alb-controller"
chart = "aws-load-balancer-controller"
chart_repository = "https://aws.github.io/eks-charts"
chart_version = "1.4.0"
kubernetes_namespace = "kube-system"
resources = {
limits = {
cpu = "200m"
memory = "256Mi"
},
requests = {
cpu = "100m"
memory = "128Mi"
}
}
```
### Additional Considerations
#### Versioning
See [Use Vendoring in Atmos](/resources/adrs/adopted/use-vendoring-in-atmos)
#### Mixin Best Practices
- Whenever a Terraform mixin contains a Terraform Provider, it must set an alias for it. Otherwise, mixins will conflict with each other.
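A minimal sketch of this rule; the mixin and alias names are illustrative:
```
# In a hypothetical foo.mixin.tf: alias the provider so it cannot
# collide with a kubernetes provider declared elsewhere in the component
provider "kubernetes" {
  alias = "foo_mixin"
  host  = var.kubernetes_host # connection details omitted for brevity
}

# Resources in the mixin must select the aliased provider explicitly
resource "kubernetes_namespace" "foo" {
  provider = kubernetes.foo_mixin
  metadata {
    name = var.kubernetes_namespace
  }
}
```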
#### Unit Testing Mixins
- `terraform-aws-components` will contain both the mixins and components using them. The component configuration schema allows for referencing mixins using relative paths. Thus, the component can reference the mixin in the same repository. This provides an integration test for both the components and the mixins they use, ensuring both are functioning.
## Decision
**DECIDED**:
## Consequences
- TODO: Waiting on Decision
## References
- [https://github.com/cloudposse/terraform-aws-components/pull/385](https://github.com/cloudposse/terraform-aws-components/pull/385)
---
## Proposed: Use More Flexible Resource Labels
**Date**: **19 Apr 2022**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- No pushback from the team. Overall, we know we need to support arbitrary label fields and don't like how we use environment to represent region. Note, this suggestion also matches (is consistent with) our filesystem organization: `///`. [Internal discussion reference](https://github.com/cloudposse/knowledge-base/discussions/120)
- Decision is to adopt: `---`
:::
## Status
**PROPOSAL**
## Problem
Currently, we use a fixed set of labels, dictated by the [terraform-null-label](https://github.com/cloudposse/terraform-null-label/) module, for labeling everything provisioned by IaC. This set of labels is also treated specially by `atmos` and includes labeling IAM roles and both `atmos` and Spacelift “stacks”.
1. **The choice of label names has proven to be confusing and unpopular.**
2. The set of labels is fixed. When we added “tenant” as a possible label it was a major undertaking to upgrade `terraform-null-label` to handle it.
3. Because the label names are fixed, and `atmos` does not have access to the outputs of `terraform-null-label` (because `atmos` is written in `go` and not Terraform), adding or changing label names requires code changes to both `terraform-null-label` and `atmos`
4. The use of `name` as a label name is a particular problem as it conflicts with AWS' usage of the tag key “Name” as the UI display name of a resource.
5. We have come to rely on `atmos` as a tool, and it needs to parse labels to determine the Atmos “stack” name, the Terraform backend configuration, the Terraform workspace name, the EKS cluster name, and possibly other resources. However, `atmos` is written in `go` and cannot use `terraform-null-label`, which is a Terraform module, to generate these items. Nevertheless, we want some of them to be available in Terraform so that components can access configuration data generated by other Terraform components.
6. We have some components, such as Kubernetes deployments, that have additional configuration labels/variants, such as `color` for blue/green deployments or `ip` for IPv4/IPv6 variants. We would like to be able to flexibly use or not use these additional labels to distinguish deployments where applicable, without requiring them for other components (e.g. `cloudtrail_bucket`) where they are not needed. Currently we are doing this by manually altering the component names to include the variant labels, but this practice is not DRY and eliminates many of the advantages `atmos` gives us through importing configurations, since all configurations are, in the end, tied to a component name. The proper Atmos model is to have a single component name with variable Terraform workspaces selected by variable labels.
## Context
Early on, Cloud Posse decided that consistent labeling was important and implemented a mechanism for it in the form of `terraform-null-label`. (`terraform-null-label`, or `null-label` for short, was first released in 2017.) At the time it was first released, Terraform itself was in the early stages of development and lacked many essential features, so the capabilities of the module were limited. In particular, there was no way to iterate over lists or maps. This imposed a practical requirement that inputs to `null-label` be known in advance (hardcoded).
The original set of labels was:
- `namespace`
- `stage`
- `name`
Over time, we added
- `environment`
- `tenant`
...to get to the current set of 5 labels. (`null-label` also accepts a list of `attributes` and a map of `tags`, which are outside the scope of this ADR.)
Unfortunately, except for the `tenant`, there are issues with all of these label names.
- `namespace` collides with Kubernetes' use of “namespace” as a mechanism for isolating groups of resources within a single cluster, and we have had problems due to the `$NAMESPACE` shell variable being set to indicate our version of “namespace” while being interpreted by some tools as Kubernetes' version.
- `environment` is not bad, but a lot of people use it differently than we do. We use it as a region code (an abbreviation for a particular AWS Region), while most people use it to indicate a functional role or AWS account, such as “production” or “staging”.
- `stage` is a bit confusing, and in the end more generic than we allow. We use it the way many people use “environment”, but because we typically have a 1-to-1 mapping of `stage` to AWS Account, our code frequently assumes that “stage” is the same as “account”. This breaks, however, in multi-tenant environments where tenants have multiple accounts, such as `tenant-dev`, `tenant-stage` and `tenant-production`.
- `name` is a problem in that AWS reserves that for the tag key whose value is displayed in the web UI. For all our other labels, we add a tag with the (capitalized) label name as tag key and the (normalized) label value as the tag value. We make an exception for “Name”, setting that value to the `id` (the fully formed identifier combining all the labels), not the value of the `name` label, which confuses everyone.
- Atmos separately has (in `atmos.yaml`) configuration for `helm_aws_profile_pattern`, EKS `cluster_name_pattern`, and Stack `name_pattern`, along with separate configuration for Component name (directory) and Terraform workspace name. Currently these are either completely hard coded (Component name) or are configured using a template based on the above listed special label names, which works completely separately from `null-label` and must be kept in sync.
Now (April 2022), Terraform version 1.1 has several features that enable us to use an arbitrary set of label names. On the drawing board (but for no earlier than Terraform version 1.3) is also an additional feature we would like, allowing input objects to have optional attributes. This suggests we can create a new `null-label` version with 1.1 features and again enhance it after optional attributes have been released. [https://github.com/hashicorp/terraform/pull/31154](https://github.com/hashicorp/terraform/pull/31154)
## Considered Options
### Option 1:
#### Null Label
Going forward, I suggest Cloud Posse use different label names in its engagements:
- `company` instead of `namespace`, to provide a global prefix that makes the final ID unique despite our reuse of all the other label values
- `region_code` or `reg` instead of `environment` to indicate the abbreviated AWS Region
- `tenant` can remain, or be changed to `ou` for organizational unit.
- `env` instead of `stage`, to indicate the function of the environment, such as “development”, “sandbox”, or “production”. In environments where `env` always equals `account`, we would specify only one and have the other be a generated label (see below). Which one to specify should be based on a survey of clients' preferences.
- `account` instead of `stage`, to indicate the name of the AWS account. `account` would never be specified directly; it would generally be generated as either `env` or `tenant-env`.
- `component_name` instead of `name` (and to avoid overloading `name` used by AWS and `component` which has special meaning to `atmos`).
- Possibly an additional label component, such as `net` or `ip`, that can be used to allow us to create IPv4 and IPv6 versions of components like EKS clusters or ALBs in the same account and region and still distinguish them. This label component would ideally have an optional attribute that removes the delimiter before it, so if `name` is `eks` and `ip` is `6`, we can get a name like `{namespace}-{tenant}-{environment}-{stage}-eks6-cluster` instead of `{namespace}-{tenant}-{environment}-{stage}-eks-6-cluster`.
To facilitate this, I suggest an overhaul of `terraform-null-label`. We can use the existing `label_order` input to take an arbitrary list of label names. We can deprecate the existing hard-coded label names in favor of a new input, a `map(string)` where the keys are label names and the values are label values. (This is exactly like the `tags` input, except that tags are not altered, while labels are.) The new input could be called `label_input`, allowing us to have an output named `labels` with the normalized label values and a separate output named `label_input` which preserves the input untransformed. Or it could be called `labels`, where either we do not care about the output `labels` being different from the input, or we are satisfied that `module.this.labels` is normalized while `module.this.context.labels` gets you back exactly what was input, as is currently the case with the special label names (e.g. `module.this.stage` vs `module.this.context.stage`).
Additionally, we deprecate the existing `descriptor_formats` input and `descriptors` output in favor of a `label_generator` input which adds labels to the `labels` output. This would allow us to have an `account` output that by default is the same as the `env` or `stage` output (and, for that matter, allow us to preserve the `namespace`, `environment`, and `name` outputs even though we have stopped using them as inputs), and also handle the case where `account` is a composite of 2 labels like `tenant-dev`.
#### Future Possibilities
**Once** [**Terraform supports optional object members**](https://github.com/hashicorp/terraform/issues/19898#issuecomment-1101853833), I would propose `label_generator` be a `map(object)` that has:
- key is name of label to generate
- `labels = list(string)` list of labels to construct the generated label from, in order
- `delimiter = optional(string)` the delimiter to use when joining the labels, defaults to label `delimiter`
- `value_case = optional(string)` the case formatting of the label values, one of `lower`, `title`, `upper` or `none` (no transformation), defaults to `label_value_case`
- `regex_remove_chars = optional(string)` regex specifying characters to remove from the value, defaults to top level `regex_replace_chars` (which I would deprecate and replace with `regex_remove_chars` since we do not provide the capability to replace the characters and no one has asked for that).
- `length_limit = optional(number)` the limit on the length of the value, or 0 for unlimited, defaults to 0.
- `truncation_mode = optional(string)` one of "beginning", "middle", or "end". Where to place the hash that substitutes for the extra characters in the label. Allows you to decide to truncate `foo-bar-baz` as `foo-bar-` (the only mode we allow today), `-bar-baz`, or `foo--baz`. I would also add `id_truncation_mode` to the top-level and default `truncation_mode` to whatever `id_truncation_mode` is set to. Unfortunately, `id_truncation_mode` would need to default to `end` for backward compatibility, but I think `middle` is the better default.
```
locals {
# Create a default format map so it can be reused, optionally with changes applied.
# This is in part to deal with the Terraform requirement that all values of a map
# must have the exact same type.
default_format = {
delimiter = "-"
value_case = "lower"
regex_remove_chars = "/[^a-zA-Z0-9-]/"
length_limit = 64
truncation_mode = "middle"
}
}
# Advanced example, more like what we would probably use
module "this" {
source = "cloudposse/label/null"
label_order = [ "org", "ou", "reg", "env", "component"]
label_format = local.default_format
label_generator = {
# This is how we would generate the "id" output if it were not hardcoded for backward compatibility
id = merge(local.default_format, {
labels = [ "org", "ou", "reg", "env", "component"]
})
# Generate an output named "account" of the form "${ou}_${env}"
account = merge(local.default_format, {
# Specify the value inputs and the order
labels = ["ou", "env"]
# Change the delimiter to "_" instead of "-"
delimiter = "_"
# By default, we remove underscores, so we need to alter the list of characters to remove
regex_remove_chars = "/[^a-zA-Z0-9-_]/"
})
}
# In practice, the "values" input would be generated by Atmos
# For example, in stacks/orgs/cplive/_defaults.yaml
# vars:
# label_values:
# org: cplive
label_values = merge ({component = var.component_name} , {
org = "cplive",
ou = "plat",
reg = "ue1"
})
}
locals {
id = module.this.id
org = module.this.labels["org"]
account_name = module.this.labels["account"]
}
```
```
# Simpler example
module "this" {
source = "cloudposse/label/null"
label_order = [ "org", "ou", "reg", "env", "component"]
label_format = local.default_format
label_generator = {
account = {
labels = ["ou", "env"]
delimiter = "_"
regex_remove_chars = "/[^a-zA-Z0-9-_]/"
}
}
label_values = {
org = "cplive",
ou = "plat",
reg = "ue1"
}
}
```
```
# Simplest example
module "this" {
source = "cloudposse/label/null"
label_order = [ "org", "ou", "reg", "env", "component"]
format = local.default_format
values = {
org = "cplive",
ou = "plat",
reg = "ue1"
}
}
```
```
# In stacks/orgs/cplive/_defaults.yaml using current labels
# (Compare to https://github.com/cloudposse/infra-live/blob/8754dc3d1e938c31387bc704ef361fc476fe28e5/stacks/orgs/cplive/_defaults.yaml#L9-L28 )
vars:
label_values:
namespace: cplive
label_order:
- namespace
- tenant
- environment
- stage
- name
- attributes
label_format: &default_label_format
delimiter: "-"
value_case: "lower"
regex_remove_chars: "/[^a-zA-Z0-9-]/"
length_limit: 64
truncation_mode: "middle"
label_generator:
account_name:
<<: *default_label_format
labels:
- tenant
- stage
stack:
<<: *default_label_format
labels:
- tenant
- environment
- stage
# In stacks/orgs/cplive/core/_defaults.yaml
vars:
label_values:
tenant: cplive
# et cetera
```
For now (April 2022), with no ETA on that feature, I would limit `label_generator` to a `map(list(string))`:
- key is name of label to generate
- `labels = list(string)` list of labels to construct the generated label from, in order
The generated label will be the normalized values of the labels named in the list, in that order, joined by the same `delimiter` used for the `id`.
Likewise, we would deprecate the named outputs (and `descriptors`) in favor of a `labels` output, which is a map of label names to normalized label values. So instead of `module.this.stage` we would reference `module.this.labels["stage"]`.
#### Atmos Changes
We need to update atmos to support a flexible set of labels.
##### Atmos option 1
Instead of specifying a template for each configuration value, such as `cluster_name_pattern`, Atmos could configure a `labels` output to use as `cluster_name_pattern` (e.g. `cluster`) and then both `atmos` and `terraform` will have access to exactly the same information in the same way (e.g. `module.this.labels["cluster"]`).
##### Atmos option 2
Right now, there are the top level `namespace`, `stage`, `name`, `tenant`, `environment` labels.
We could put these now under a new section in the stacks or in `atmos.yaml`:
```
terraform:
backend:
backend_pattern: {foo}-{bar}-{baz}
labels:
- foo
- bar
- baz
```
For compatibility with `null-label`, `atmos` should populate the labels based on the fully merged `vars` section of the stack configuration, supporting both the old variables as it does now and the new `label_input` (or whatever we call it) map.
### Option 2:
```
module "this" {
label = "camelcase(id)-lowercase(name)-uppercase(company)" # camelcaseHyphenFoobarFormat(....)
context = var.context
}
```
### Option 3:
We predefine a named set of formats and allow additional custom formats to be defined
```
# Simpler example
module "this" {
source = "cloudposse/label/null"
label_order = [ "org", "ou", "reg", "env", "component"]
label_format = "kebab"
label_generator = {
account = {
labels = ["ou", "env"]
format = "snake"
}
}
label_values = {
org = "cplive",
ou = "plat",
reg = "ue1"
}
}
```
## Decision
**DECIDED**:
## Consequences
-
## References
-
---
## Proposed: Use Multiple Terraform State Bucket Backends
**Date**: **25 Mar 2022**
:::info Needs Update!
The content in this ADR may be out-of-date and needing an update. For questions, please reach out to Cloud Posse
- The proposal remains in draft and needs context updated.
:::
## Status
**DRAFT**
## Problem
- Terraform state backend has sensitive information e.g. RDS master credentials
- Using multiple state backends would alleviate some of these concerns, but introduce new problems with how to manage access to the bucket as well as how remote-state lookups know where to find the state
## Context
## Considered Options
### Option 1: Use AWS SSO with Standalone IAM Roles
Create a `terraform-prod` PermissionSet and a `terraform-non-prod` PermissionSet.
#### Pros
#### Cons
- We create permission sets, not roles. Permission sets can only be created in the `aws-sso` component; standalone components cannot create permission sets.
- Introducing more permission sets pollutes the global namespace with roles that are only really relevant in a couple of accounts
- Delegation of PermissionSets cannot be given to other components
### Option 2: Use Federated IAM with SAML
### Option 3:
## Decision
**DECIDED**:
## Consequences
-
## References
-
---
## Proposed: Use Private and Public Hosted Zones
**Date**: **11 Feb 2022**
:::warning Rejected!
The proposal in this ADR was rejected! For questions, please reach out to Cloud Posse.
- Context for rejection is needed. Overall, this proposal adds complexity and cost by requiring private zones, and there is a lack of customer demand.
:::
## Status
**DRAFT**
## Problem
There is confusion regarding service discovery and vanity domains. Historically, service discovery domains are hosted on private DNS zones, yet Cloud Posse typically advocates using public DNS zones for everything, including internal load balancers. Using public hosted zones leaks information about the cloud architecture, which is why some advocate strictly using private zones.
## Context
## Considered Options
### Option 1: Public zones only
#### Pros
- This is what we’re currently doing and this has been tried and tested
- ACM certs can be used without PCA
- LetsEncrypt can be an issuer for cert-manager (when using `ingress-nginx`)
#### Cons
- Route53 hostnames might be leaked (**although services won’t be accessible**).
- Security through Obscurity.
### Option 2: Private zones only
This is an “old school” security best practice, which in principle is great but in practice rather limiting.
#### Pros
- Route53 hostnames cannot be leaked, making it more difficult for adversaries to map out the infrastructure for targeted attacks
#### Cons
- We’ve recently added support for this in a customer engagement. There could be “underwater stones” (credit to @Igor Rodionov for this term).
- **IMPORTANT** Services cannot be exposed in any way to external third-party integrations (e.g. webhook callbacks from Twilio, GitHub, etc.)
- Requires cross association of VPCs with private hosted zones
- **IMPORTANT** Can only be associated with exactly one VPC
- Is this true? I’ve been able to associate a private zone with multiple VPCs. @RB (Ronak Bhatia)
- Requires a VPN solution like AWS Client VPN Endpoint in order to resolve any names
- Requires either a public hosted zone or a Private CA for ACM certificate verification:
- A public hosted zone for ACM verification would require a split-view DNS setup
- Alternatively, a Private CA can sign certificates for ACM and `cert-manager`
- At least 2 Private CAs are recommended (one for prod, one for non-prod). Each private CA is $400/mo. [https://aws.amazon.com/certificate-manager/pricing/](https://aws.amazon.com/certificate-manager/pricing/)
- Troubleshooting DNS lookups would require connecting via SSM to an instance in a VPC associated with the private hosted zone to check that DNS resolution works as expected
### Option 3: Hybrid of public and private zones
This has all the pros of Option 1 and Option 2 and mitigates some of the Cons.
#### Pros
- Best of both worlds
- Public for vanity domains
- Private for service discovery domains
- Both public and private can co-exist
#### Cons
- Still requires either a public hosted zone for certificate verification or the costly Private CAs at $400/mo each (x2).
---
## Proposal: Use Stack Filesystem Layout That Follows AWS Organization Conventions
**Date**: **27 May 2022**
:::info Needs Update!
The content in this ADR may be out of date and need an update. For questions, please reach out to Cloud Posse.
- The proposal has already been adopted, and this ADR needs to be updated to reflect the final decision.
:::
## Status
**DRAFT**
## Decision
Use Option 4: Use an organization directory.
## Problem
We have stacks defined all over the place. It’s not clear what is a top-level stack and what is imported. It’s not clear where to define something and where a service is deployed. There are too many ways to do things and we haven’t standardized how we organize configurations across customers.
## Context
## Considered Options
### Option 1: Current
- `stacks/catalog`
- `stacks/`
- `stacks/`
```
✗ tree -L 2 stacks
stacks
├── catalog
│ ├── account-map.yaml
│ ├── ...
│ └── waf.yaml
├── core
│ ├── gbl
│ ├── globals.yaml
│ ├── ue1
│ └── ue2
├── gbl
│ ├── artifacts.yaml
│ ...
│ └── staging.yaml
├── globals.yaml
├── plat
│ ├── gbl
│ ├── globals.yaml
│ └── ue2
├── ue1
│ └── globals.yaml
└── ue2
├── audit.yaml
...
└── staging.yaml
```
### Option 2: Put region within catalog
- `stacks/catalog/`
- `stacks/`
```
✗ tree -L 3 stacks --dirsfirst
stacks
├── catalog
│ ├── argocd
│ │ └── repo
...
│ ├── gbl
│ │ ├── artifacts.yaml
...
│ │ └── staging.yaml
│ ├── s3-bucket
│ ├── ue1
│ │ └── globals.yaml
│ ├── ue2
│ │ ├── audit.yaml
│ │ ├── auto.yaml
│ │ ├── corp.yaml
│ │ ├── dev.yaml
│ │ ├── globals.yaml
│ │ ├── marketplace.yaml
│ │ ├── network.yaml
│ │ ├── prod.yaml
│ │ ├── root.yaml
│ │ ├── sandbox.yaml
│ │ └── staging.yaml
│ ├── account-map.yaml
...
│ └── waf.yaml
├── core
│ ├── gbl
│ │ ├── artifacts.yaml
...
│ │ └── security.yaml
│ ├── ue1
│ │ ├── globals.yaml
│ │ └── public.yaml
│ ├── ue2
│ │ ├── audit.yaml
...
│ │ └── root.yaml
│ └── globals.yaml
├── plat
│ ├── gbl
│ │ ├── dev.yaml
...
│ │ └── staging.yaml
│ ├── ue2
│ │ ├── dev.yaml
...
│ │ └── staging.yaml
│ └── globals.yaml
└── globals.yaml
```
### Option 3: Put root level stacks in order tenant/account/region instead of tenant/region/account
- `stacks/catalog`
- `stacks/mixins/`
- `stacks/`
```
✗ tree -L 3 stacks --dirsfirst
stacks
├── catalog
│ ├── argocd
...
│ └── waf.yaml
├── mixins
│ ├── gbl
│ │ ├── artifacts.yaml
...
│ │ └── staging.yaml
│ ├── ue1
│ │ └── globals.yaml
│ └── ue2
│ ├── audit.yaml
...
│ └── staging.yaml
├── plat
│ ├── gbl
│ │ ├── dev.yaml
...
│ │ └── staging.yaml
│ ├── ue2
│ │ ├── dev.yaml
...
│ │ └── staging.yaml
│ └── globals.yaml
├── tenants
│ ├── core
│ │ ├── gbl
│ │ ├── ue1
│ │ ├── ue2
│ │ └── globals.yaml
│ └── plat
│ ├── gbl
│ ├── ue2
│ └── globals.yaml
└── globals.yaml
```
### Option 4: Use an organization directory
Use a filesystem hierarchy that mirrors the AWS hierarchy: Organization → OU → Account → Region → Resources
- `stacks/catalog/`
- e.g. `eks/cluster.yaml`
- `stacks/mixins/`
- e.g. `regions/us-east-1.yaml`
- `stacks/orgs/<namespace>/<ou>/<account>/<region>.yaml`

| Placeholder | Description                         |
| ----------- | ----------------------------------- |
| `namespace` | The namespace for the organization  |
| `ou`        | Typically the tenant name           |
| `account`   | The stage within the tenant         |
| `region`    | The canonical AWS region            |
Use the fully spelled out canonical region name (e.g. `us-east-1`)
Use `global-region.yaml` for resources that are not tied to any particular region (e.g. Route53).
Use `_defaults.yaml` for any other default settings.
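For illustration, a hypothetical layout using the `cplive` namespace from the examples above (account and region names are examples only):
```
stacks/orgs
└── cplive
    ├── core
    │   ├── _defaults.yaml
    │   └── auto
    │       ├── _defaults.yaml
    │       ├── global-region.yaml
    │       └── us-east-1.yaml
    └── plat
        ├── _defaults.yaml
        └── dev
            ├── _defaults.yaml
            ├── global-region.yaml
            └── us-east-1.yaml
```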
## Decision
**DECIDED**:
## Consequences
-
## References
-
---
## Proposed: Use Strict Provider Pinning in Components
**Date**: **11 Feb 2022**
:::info Needs Update!
The content in this ADR may be out of date and need an update. For questions, please reach out to Cloud Posse.
:::
## Status
**DRAFT**
## Problem
New major provider versions can break planning and applying of terraform components/modules.
[https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.0.0](https://github.com/hashicorp/terraform-provider-aws/releases/tag/v4.0.0)
While we wait to upgrade broken modules to newer provider versions, we should come up with a solution that won’t break customer workflows.
Currently, we set this in both modules and components for most customers:
```hcl
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.0"
}
}
```
## Considered Options
### Option 1: Pinning Providers in Components to Major, Minor Versions
- The easiest way for Cloud Posse to distribute components with versions that have been tested in a particular configuration (e.g. as opposed to using `.terraform.lock.hcl`)
- Pin providers to major versions in downstream components for clients
```hcl
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
```
### Option 2: Pinning Providers in Upstream Modules
:::info
Cloud Posse needs to do lower-bound pinning for the Terraform core version and the AWS provider in our open source modules on `github.com/cloudposse`. We learned this lesson the hard way. The essence of the problem is that Terraform computes the intersection of all supported provider version constraints and takes the highest version allowed. If the sets are disjoint, it errors:
```
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints ~> 3.0, ~> 2.0
```
If we pin a module to `~> 3.0` it presents several problems when version 4.0 comes out:
- We cannot tell (without removing the version constraint) whether or not the module works with 4.0 as-is or needs modification.
- Anyone wanting to use version 4.0 cannot do so with this module or any module that uses it, even if it would otherwise work.
- This includes other Cloud Posse modules: if we want to upgrade another module to work with 4.0, we cannot do that if it uses any other modules pinned to `~> 3.0`
The net result is that as soon as 4.0 comes out, in practice we need to remove the `~> 3.0` pin anyway, just to see if the module needs modification. More often than not it does not, so the pin has only created extra work for no real benefit. (If the code still works, the pin did nothing but break it. If the code is broken at 4.0, the best we can say is that the pin makes it a little easier to see why previously working code is now broken; either way it is broken, so that is not great consolation.)
Therefore, by convention, Cloud Posse today only does lower-bound pinning in our open source modules until all the modules are updated. We only bump the lower bound when the code takes advantage of a new feature that requires it.
:::
- Pin providers to major versions in upstream modules
```hcl
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
```
### Option 3: Pinning Providers in Components to Exact Versions
- This can also be done by committing the `.terraform.lock.hcl` file (instead of ignoring it in `.gitignore`) and using a GitHub Action to periodically update it; a representative lock file entry is shown after the block below
- This can be done using `required_providers` and something like Renovate bot to update it:
```hcl
required_providers {
aws = {
source = "hashicorp/aws"
version = "= 3.70"
}
}
```
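If you commit the lock file instead, it records the exact versions Terraform selected. A representative entry looks like this (the file is generated by `terraform init`; the hash is truncated here for brevity):
```hcl
# .terraform.lock.hcl -- generated file, not edited by hand
provider "registry.terraform.io/hashicorp/aws" {
  version     = "3.70.0"
  constraints = "= 3.70"
  hashes = [
    "h1:hash-truncated-for-brevity",
  ]
}
```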
## Decision
- Use Option 1
## Consequences
- Update `providers.tf` in all components to follow this convention
- Some process should still update the pins when new provider versions are released.
## Related Documentation
- [Components](/components)
- [How to Keep Everything Up to Date](/learn/maintenance/upgrades/how-to-keep-everything-up-to-date)
---
## Proposed Architecture Decision Records
---
## Architecture Diagrams
import Intro from '@site/src/components/Intro';
We provide a number of boilerplate architecture diagrams. Think of them as templates that can be copied and used throughout your organization. Reach out to Cloud Posse PMs if you’d like a copy of any one of them.
## Available Diagrams
Don’t see the diagram you need? Open a [GitHub Discussion](https://github.com/orgs/cloudposse/discussions) to raise the request!
## 4 Layers of Infrastructure
The 4 Layers of Infrastructure depict the various layers and lifecycles associated with provisioning infrastructure from the bottom up. Each layer introduces new tools and builds upon the previous layers. The SDLC of each layer is independent of the other layers, and each layer must exist before the subsequent layers can be provisioned. As we approach the top of the stack, the layers change more frequently; the lower down we go, the more seldom layers change and the more challenging they are to modify in place.
## 8 Layers of Security
The 8 Layers of Security depict security in depth. Cloud Posse has Terraform support for provisioning the most essential security-oriented products, mostly AWS managed services like AWS SecurityHub or AWS WAF.
## Big Picture
The Big Picture helps paint the story of how there are dozens of services in play. Where possible, we opt for fully managed services by AWS or best-of-breed SaaS alternatives. We reserve the platform (EKS or ECS) for running and operating your applications, which is your competitive advantage.
## Security Escalation Architecture
Our approach to Security Escalations has everything flow through SecurityHub, then to Amazon SNS, and on to OpsGenie for Incident Management.
---
## Alerting
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Learn how to set up an effective alerting system that notifies the right people when issues arise. This involves configuring OpsGenie for incident management and integrating it with Datadog for monitoring, ensuring alerts are properly escalated and managed to prevent alert fatigue and system failures.
Alerting notifies users about changes in a system’s state, typically via emails or text messages sent to specific individuals or groups.
Alerting goes hand in hand with [monitoring](/layers/monitoring). When a system is monitored properly, only the right people are notified when something goes wrong.
:::important
This Quick Start assumes you have read through [the Monitoring Quick Start](/layers/monitoring)
:::
## The Problem
When a system is not monitored properly, it's easy to get overwhelmed with alerts. This leads to alert fatigue. Alert fatigue is a real problem that can lead to a system being ignored and eventually failing.
Furthermore, a system must be in place for alerts to be escalated to the right team and people. When a single system goes down, there are often cascading effects on other services.
## Our Solution
Our solution begins with [OpsGenie](https://www.opsgenie.com/). OpsGenie is a modern incident management platform for operating always-on services, empowering Dev and Ops teams to plan for service disruptions and stay in control during incidents.
We integrate [OpsGenie with Datadog](https://support.atlassian.com/opsgenie/docs/integrate-opsgenie-with-datadog/) to pair it with our monitoring solution.
### Implementation
We have a single component that can be instantiated for every team. This component is called [`opsgenie-team`](/components/library/aws/opsgenie-team/) and it handles the surrounding work of setting up a team in OpsGenie.
To get started with this component, you'll need an OpsGenie API key, which you can get from the [OpsGenie API page](https://support.atlassian.com/opsgenie/docs/api-key-management/).
Follow the component README to get started. This will create a catalog entry, which should be configured with your company's defaults for every team. Then create an instance for each team in your organization.
### How it works
Our monitors have a globally configurable variable called `alert_tags`, which should be set to include `@opsgenie-{{team.name}}`, such as:
```yaml
alert_tags: ["@opsgenie-{{team.name}}"]
```
This creates a message on the Datadog monitor that uses the team tag to send the alert to the correct team in OpsGenie. When an event or alert is triggered, the `team` tag on the **data** is used to dynamically send the alert to the corresponding team in OpsGenie. An important distinction: the tag is not fetched from the monitor, but from the data sent to the monitor.
Using the data's `team` tag is beneficial because monitors can be configured to send alerts to different teams. For example, you can have a single monitor for pods crash-looping on EKS; if each deployment is properly labeled with a `team:foo` or `team:bar` tag, the alert will be sent to the correct team.
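For example, here is a minimal sketch of such a deployment label; it assumes the Datadog agent is configured to map pod labels to Datadog tags:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api   # illustrative workload name
  labels:
    team: foo          # forwarded with the event, routing to @opsgenie-foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
        team: foo      # the pod-level label is what the agent actually tags
    spec:
      containers:
        - name: payments-api
          image: example/payments-api:latest
```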
### Service Level Indicators (SLIs) and Service Level Objectives (SLOs)
SLIs and SLOs are ways to measure the reliability and quality of a service.
Sometimes a business has contractual obligations tied to its SLOs. For example, a business may be contractually obligated to 99.9% uptime, meaning the service must be available 99.9% of the time (a budget of roughly 43 minutes of downtime per 30-day month).
Datadog supports SLOs, which can be based on a set of metrics or monitors. For example, you can have a monitor that checks whether a service is up or down and use it as an SLO.
You can then use SLO monitors to report incidents in OpsGenie. When you create a team, you can specify the priority level that is considered an **Incident**; when an alert matches that priority level, an Incident is created in OpsGenie. An [**Incident**](https://support.atlassian.com/opsgenie/docs/what-is-an-incident/) is a specialized alert used to track the progress of an issue, which can have cascading effects on other services.
## References
:::tip
[This article](/resources/deprecated/alerting/opsgenie) goes more in-depth on some of the above topics.
:::
- [OpsGenie](https://www.opsgenie.com/)
- [How to Sign Up for OpsGenie?](/resources/deprecated/alerting/opsgenie/how-to-sign-up-for-opsgenie)
- [How to Create New Teams in OpsGenie](/resources/deprecated/alerting/opsgenie/how-to-create-new-teams-in-opsgenie)
- [How to Add Users to a Team in OpsGenie](/resources/deprecated/alerting/opsgenie/how-to-add-users-to-a-team-in-opsgenie)
- [How to Onboard a New Service with Datadog and OpsGenie](/resources/deprecated/alerting/opsgenie/how-to-onboard-a-new-service-with-datadog-and-opsgenie)
- [Component `opsgenie-team`](/components/library/aws/opsgenie-team/)
- [Datadog: How to Pass Tags Along to Datadog](/layers/monitoring/datadog/tutorials/how-to-pass-tags-along-to-datadog)
## FAQ
### How do I set up SSO with OpsGenie?
There are [official docs on how to configure SSO/SAML](https://support.atlassian.com/opsgenie/docs/configure-saml-based-sso/), which should suffice for AWS Identity Center. AWS also has [docs on adding SAML applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/saasapps.html), which include an official configuration for OpsGenie. If you don't plan on using AWS Identity Center, there are [docs on configuring SSO with other identity providers](https://support.atlassian.com/opsgenie/docs/configure-sso-for-opsgenie/).
---
## Decide on Default Schedules
## Context and Problem Statement
By default, an OpsGenie team comes with its own schedule. Sometimes, however, we want different schedules for different timezones: a team spread across the world would otherwise have to manually keep track of the schedule to make sure individuals are only on call during particular hours.
## Considered Options
### Option 1 - Use one default Schedule (Recommended)
:::tip
Our Recommendation is to use Option 1 because....
:::
#### Pros
- A single pane of glass for who's on call
#### Cons
- Ensuring people in different timezones are on call at the right times is a manual process
### Option 2 - Many Schedules to follow the sun
#### Pros
- Routes alerts to particular schedules by default, based on timezone.
#### Cons
- Slightly more complex setup
## References
- Links to any research, ADRs or related Jiras
---
## Decide on Incident Ruleset
## Context and Problem Statement
We need to decide the rules that make an alert an incident. This ruleset could be based on the alert's priority level, message, or tags.
OpsGenie can escalate an alert into an incident, which marks the alert as more severe and needing more attention than a standard alert. See [How to Implement Incident Management with OpsGenie](/resources/deprecated/alerting/opsgenie/#terminology) for more details on what an **Incident** is.
:::info
Picking a standard here provides a clear understanding of when an alert should become an incident; ideally this is not customized by each team.
:::
## Considered Options
### Option 1 - Priority Level Based (P1 & P2) (Recommended)
:::tip
Recommended because it maps 1:1 with Datadog severity and provides a clear understanding
:::
#### Pros
- Priority is a first-class field in Datadog and Opsgenie
- Directly maps to Datadog severity level in monitors.
- P1 & P2 are considered Critical and High priority, allowing slight variation in the level of incidents.
- Dynamic based on the Monitoring Platform (e.g. Datadog can say if this alert happens 5x in 1 min, escalate priority)
### Option 2 - Priority Level Based (Other)
This could be only **P1** or any range.
#### Pros
- Directly maps to Datadog severity level in monitors.
- Dynamic based on the Monitoring Platform (e.g. Datadog can say if this alert happens 5x in 1 min, escalate priority)
### Option 3 - Tag Based
A tag-based approach would mean any monitor that sends an alert with the tag `incident:true` becomes an incident.
#### Pros
- Dynamic based on the Monitoring Platform (e.g. Datadog can say if this alert happens 5x in 1 min, escalate priority)
#### Cons
- Incidents can now be defined in more than one way
- An extra field must be passed
- Puts the definition of an incident on the monitoring platform.
## References
- [How to Implement Incident Management with OpsGenie](/resources/deprecated/alerting/opsgenie/)
- [How to Implement SRE with Datadog](/layers/monitoring/datadog)
---
## Decide on Teams for Escalations
## Problem
Teams need to be notified of incidents tied to services that affect them.
## Solution
Come up with a table of services and the teams or business units responsible for them. Services are associated with incidents, and incidents are escalated to teams.
## Other Considerations
- Should the teams map to products, services or business units?
- Should we map the teams to existing teams in IdP or directly associate users to teams in OpsGenie?
#### Here’s how we think about teams:
:::note
Members can also be handled by the IdP integration, but the teams still need to be defined in OpsGenie.
:::
```yaml
teams:
- name: cloudplatform
description: "Cloud Platform Team"
members:
- username: user@ourcompany.com
role: admin
- username: user@ourcompany.com
role: admin
- name: security
description: "Security Team"
members:
- username: user@ourcompany.com
role: admin
- username: user@ourcompany.com
role: admin
- name: compliance-engineering
description: "Compliance Engineering Team"
members:
- username: user@ourcompany.com
role: admin
- username: user@ourcompany.com
role: admin
```
---
## Design Decisions
import DocCardList from "@theme/DocCardList";
import Intro from "@site/src/components/Intro";
Review the key design decisions to determine how you'll implement incident
management, escalations, and alerting.
---
## How to Add Users to a Team in OpsGenie
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Adding users to a team in OpsGenie ensures that the right team members are notified when alerts occur. This guide will help you update your team's YAML stack configuration to include new members, specify their roles, and integrate the changes seamlessly using the `opsgenie-team` component. Whether you're managing a Site Reliability Engineering (SRE) team or any other team, this process ensures efficient alert handling and response.
## Problem
We often need to change who on a team responds to particular alerts.
### Prerequisites
This guide assumes you are using the [opsgenie-team](/components/library/aws/opsgenie-team/) component with `ignore_team_members` set to `false`.
## Solution
:::tip
**TL;DR**
In your team’s YAML stack configuration, add users to the `members` array block.
:::
Example Configuration:
```yaml
members:
- user: erik@cloudposse.com
role: admin
- user: ben@cloudposse.com
```
```yaml
components:
terraform:
opsgenie-team-sre:
component: opsgenie-team
settings:
spacelift:
workspace_enabled: true
vars:
enabled: true
name: sre
description: "SRE team"
members:
- user: erik@cloudposse.com
role: admin
- user: ben@cloudposse.com
```
---
## How to Create Escalation Rules in OpsGenie
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
Creating escalation rules in OpsGenie allows you to control the actions taken when an alert isn’t acknowledged. This guide will walk you through configuring escalation rules in the stack configuration of the `opsgenie-team` component, ensuring timely response and proper alert handling. By defining escalation conditions and actions, you can efficiently manage alert escalations within your teams.
## Problem
You want to control what to do when an alert isn’t acknowledged.
## Solution
:::tip
**TL;DR** This is controlled by escalation rules in the stack configuration of the `opsgenie-team`.
:::
An Escalation resource for a team is directly exposed via a map. Have a look at [escalations in terraform for exact variable names](https://registry.terraform.io/providers/opsgenie/opsgenie/latest/docs/reference/escalation) and [How do escalations work in opsgenie](https://support.atlassian.com/opsgenie/docs/how-do-escalations-work-in-opsgenie/) to determine how you want to configure your escalations.
An example is below:
```yaml
components:
terraform:
opsgenie-team-my-team:
component: opsgenie-team
...
escalations:
my-team_escalate_to_sre:
enabled: true
description: "Escalate to 'sre' team if 'my-team' team does not acknowledge"
rule:
condition: if-not-acked
notify_type: all
delay: 5
recipients:
- type: team
team_name: sre
repeat:
wait_interval: 10
count: 2
reset_recipient_states: false
close_alert_after_all: false
```
---
## How to Create New Teams in OpsGenie
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';
import TaskList from '@site/src/components/TaskList';
As your company grows, you'll need a streamlined way to configure new teams with alerting on specific resources. This guide demonstrates how to use the `opsgenie-team` component to create a new team in OpsGenie. By tagging resources appropriately, you can ensure that alerts are directed to the right team through Datadog.
## Problem
As a company grows, so does its number of teams. We need an easy way to configure a new team with alerting on particular resources.
## Solution
The [opsgenie-team](/components/library/aws/opsgenie-team/) component can be used as a virtual component to create a new team.
:::tip
**TL;DR**
Create a new opsgenie-team component instance. Then tag resources with the `team` tag to start sending alerts through Datadog.
:::
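For example, a sketch of tagging a component's resources in its stack configuration (the component name is illustrative; `tags` is the standard null-label input):
```yaml
components:
  terraform:
    eks/cluster:
      vars:
        tags:
          team: sre # Datadog forwards this tag, routing alerts to @opsgenie-sre
```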
### Prerequisites