Using Geodesic with Terraform

Learn how to use a Geodesic Module to manage resources using Terraform

Prerequisites

Make sure you have created a Geodesic Module before continuing with these steps.

Important

Before provisioning any terraform resources, it’s essential to provision a Terraform state backend (aka tfstate backend). A terraform state backend consists of an S3 bucket and a DynamoDB lock table.
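
For reference, the project files below declare an empty backend "s3" {} block; the actual backend settings are supplied from the environment at init time (by the init-terraform script). Once the bucket and lock table exist, the effective configuration is roughly equivalent to the following sketch, where the bucket, key, and table names are illustrative only:

terraform {
  backend "s3" {
    bucket         = "example-staging-terraform-state"      # S3 bucket that stores the state files
    key            = "terraform.tfstate"                     # object key for this project's state
    region         = "us-west-2"                             # region of the state bucket
    dynamodb_table = "example-staging-terraform-state-lock"  # DynamoDB table used for state locking
    encrypt        = true                                    # encrypt state at rest
  }
}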

Provisioning a Terraform State Backend

To create the terraform state bucket and lock table, follow these steps:

Configure Environment Variables

Update your geodesic module’s Dockerfile with the following environment variables:

Example

ENV TF_VAR_tfstate_namespace=example
ENV TF_VAR_tfstate_stage=staging
ENV TF_VAR_tfstate_region=us-west-2
ENV TF_BUCKET_REGION=us-west-2

Replace these values with ones that suit your specific project.

Rebuild the Module

Rebuild the module

sh-3.2$ make build

Add tfstate-bucket backing service

Create a file at ./conf/tfstate-backend/main.tf with the following content:

./conf/tfstate-backend/main.tf

terraform {
  required_version = ">= 0.11.2"
  backend "s3" {}
}

variable "aws_assume_role_arn" {}

variable "tfstate_namespace" {}

variable "tfstate_stage" {}

variable "tfstate_region" {}

provider "aws" {
  assume_role {
    role_arn = "${var.aws_assume_role_arn}"
  }
}

module "tfstate_backend" {
  source    = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.1.0"
  namespace = "${var.tfstate_namespace}"
  stage     = "${var.tfstate_stage}"
  region    = "${var.tfstate_region}"
}

output "tfstate_backend_s3_bucket_domain_name" {
  value = "${module.tfstate_backend.s3_bucket_domain_name}"
}

output "tfstate_backend_s3_bucket_id" {
  value = "${module.tfstate_backend.s3_bucket_id}"
}

output "tfstate_backend_s3_bucket_arn" {
  value = "${module.tfstate_backend.s3_bucket_arn}"
}

output "tfstate_backend_dynamodb_table_name" {
  value = "${module.tfstate_backend.dynamodb_table_name}"
}

output "tfstate_backend_dynamodb_table_id" {
  value = "${module.tfstate_backend.dynamodb_table_id}"
}

output "tfstate_backend_dynamodb_table_arn" {
  value = "${module.tfstate_backend.dynamodb_table_arn}"
}

Start the Geodesic Shell

Run the Geodesic Module shell.

> $CLUSTER_NAME

Run the Geodesic Shell

sh-3.2$ staging.example.com
# Mounting /home/goruha into container
# Starting new staging.example.com session from cloudposse/staging.example.com:dev
# Exposing port 41179
* Started EC2 metadata service at http://169.254.169.254/latest

         _              _                                              _
     ___| |_ __ _  __ _(_)_ __   __ _    _____  ____ _ _ __ ___  _ __ | | ___
    / __| __/ _` |/ _` | | '_ \ / _` |  / _ \ \/ / _` | '_ ` _ \| '_ \| |/ _ \
    \__ \ || (_| | (_| | | | | | (_| | |  __/>  < (_| | | | | | | |_) | |  __/
    |___/\__\__,_|\__, |_|_| |_|\__, |  \___/_/\_\__,_|_| |_| |_| .__/|_|\___|
                  |___/         |___/                           |_|


IMPORTANT:
* Your $HOME directory has been mounted to `/localhost`
* Use `aws-vault` to manage your sessions
* Run `assume-role` to start a session


-> Run 'assume-role' to login to AWS
 ⧉  staging example
❌   (none) ~ ➤

Log into AWS

Assume role by running

assume-role

Assume role

❌   (none) conf ➤  assume-role
Enter passphrase to unlock /conf/.awsvault/keys/:
Enter token for arn:aws:iam::xxxxxxx:mfa/goruha: 781874
* Assumed role arn:aws:iam::xxxxxxx:role/OrganizationAccountAccessRole
-> Run 'init-terraform' to use this project
 ⧉  staging example
✅   (example-staging-admin) conf ➤

Save terraform state locally

Using vim, comment out the backend "s3" {} line in ./conf/tfstate-backend/main.tf so that the first terraform run stores its state locally (the S3 bucket it points to does not exist yet):

#  backend "s3" {}

Example

⧉  staging example
✅   (example-staging-admin) ~ ➤  vim /conf/tfstate-backend/main.tf
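
After commenting out the backend, the terraform block in main.tf should look roughly like this:

terraform {
  required_version = ">= 0.11.2"

  # backend "s3" {}
}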

Apply tfstate-bucket

Change directory to /conf/tfstate-backend and run the following commands:

init-terraform
terraform plan
terraform apply

When terraform apply completes, it outputs the values of the terraform state bucket and DynamoDB table. Take note of these values because we will need them in the following steps.

terraform apply

✅   (example-staging-admin) tfstate-backend ➤  terraform apply
null_resource.default: Refreshing state... (ID: 4514126170089387416)
null_resource.default: Refreshing state... (ID: 5129624787293790468)
aws_dynamodb_table.default: Refreshing state... (ID: example-staging-terraform-state-lock)
aws_s3_bucket.default: Refreshing state... (ID: example-staging-terraform-state)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

tfstate_backend_dynamodb_table_arn = arn:aws:dynamodb:us-west-2:xxxxxxx:table/example-staging-terraform-state-lock
tfstate_backend_dynamodb_table_id = example-staging-terraform-state-lock
tfstate_backend_dynamodb_table_name = example-staging-terraform-state-lock
tfstate_backend_s3_bucket_arn = arn:aws:s3:::example-staging-terraform-state
tfstate_backend_s3_bucket_domain_name = example-staging-terraform-state.s3.amazonaws.com
tfstate_backend_s3_bucket_id = example-staging-terraform-state
 ⧉  staging example
✅   (example-staging-admin) tfstate-backend ➤

In this example, the bucket name is example-staging-terraform-state and the DynamoDB table is example-staging-terraform-state-lock.

Save terraform state to S3

Using vim, uncomment the backend "s3" {} line in ./conf/tfstate-backend/main.tf:

  backend "s3" {}

Example

⧉  staging example
✅   (example-staging-admin) ~ ➤  vim /conf/tfstate-backend/main.tf

Change directory to /conf/tfstate-backend and run the following commands:

export TF_BUCKET={TERRAFORM_STATE_BUCKET_NAME}
terraform apply

Example

export TF_BUCKET=example-staging-terraform-state
terraform apply
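
If terraform reports that the backend configuration has changed or that reinitialization is required, re-run the init step so the existing local state gets copied into the new S3 backend. A minimal sketch, assuming the bucket name from the previous step (prompts vary by terraform version):

export TF_BUCKET=example-staging-terraform-state
init-terraform     # re-initializes against the S3 backend; answer "yes" if prompted to copy the existing local state
terraform apply    # should complete with no changes once the state lives in S3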

Exit the module shell

Exit the shell by running exit twice (the first exit ends the assume-role session, the second leaves the container).

Exit the shell

✅   (example-staging-admin) tfstate-backend ➤  exit
logout
Goodbye
-> Run 'assume-role' to login to AWS
 ⧉  staging example
❌   (none) ~ ➤  exit
logout
Goodbye

Configure environment variables

Update the geodesic module’s Dockerfile with the following environment variables.

Example

ENV TF_BUCKET=example-staging-terraform-state
ENV TF_DYNAMODB_TABLE=example-staging-terraform-state-lock

Update the values based on the outputs from the previous step.

Rebuild module

Rebuild the module.

> make build

Now that we have provisioned all the necessary resources to operate terraform, we're ready to provision the other terraform modules needed by kops.

Use with other terraform modules

You can now provision any other terraform project using the init-terraform script.

Create terraform module

To provision a terraform module, create a directory for it in /conf.

Tip

If the terraform module is named kube2iam, then create /conf/kube2iam and put the terraform code there. You can find example code there [LINK!].
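
As a rough sketch, a new project can start from the same boilerplate used by the tfstate-backend project above; the kube2iam directory name and file contents below are placeholders:

mkdir -p ./conf/kube2iam

# ./conf/kube2iam/main.tf -- minimal skeleton modeled on the tfstate-backend example
cat > ./conf/kube2iam/main.tf <<'EOF'
terraform {
  required_version = ">= 0.11.2"
  backend "s3" {}
}

variable "aws_assume_role_arn" {}

provider "aws" {
  assume_role {
    role_arn = "${var.aws_assume_role_arn}"
  }
}

# ... kube2iam resources or module blocks go here ...
EOF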

Rebuild the Geodesic Module

Rebuild the shell container with the make build command.

Tip

During development, you can skip rebuilding the container and instead work from the /localhost folder inside of the container. The /localhost folder is the user's $HOME folder mounted into the container, so any files stored there persist on the host.
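
For example, assuming the module's repository is checked out somewhere under your home directory (the path below is hypothetical), you can edit and apply a project without rebuilding the container:

# /localhost is the host $HOME mounted into the container; the checkout path is hypothetical
cd /localhost/src/staging.example.com/conf/kube2iam
init-terraform
terraform plan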

Start the Shell

$CLUSTER_NAME

For example, if $CLUSTER_NAME=staging.example.com, simply run the command staging.example.com to access your geodesic project shell.

Log in to AWS with your MFA device

assume-role

Assume role

❌   (none) conf ➤  assume-role
Enter passphrase to unlock /conf/.awsvault/keys/:
Enter token for arn:aws:iam::xxxxxxx:mfa/goruha: 781874
* Assumed role arn:aws:iam::xxxxxxx:role/OrganizationAccountAccessRole
-> Run 'init-terraform' to use this project
 ⧉  staging example
✅   (example-staging-admin) conf ➤

Provision terraform module

Change directory to the required resources folder

cd /conf/{module_name}

Run Terraform

init-terraform
terraform plan
terraform apply

Example

If the terraform module is named kube2iam:

cd /conf/kube2iam
init-terraform
terraform plan
terraform apply
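
For reference, init-terraform wires the project to the state backend configured in the Dockerfile. Under the hood it does something roughly equivalent to the following; this is illustrative only, and the real script derives the state key from the project and may pass additional settings:

terraform init \
  -backend-config="bucket=${TF_BUCKET}" \
  -backend-config="key=kube2iam/terraform.tfstate" \
  -backend-config="region=${TF_BUCKET_REGION}" \
  -backend-config="dynamodb_table=${TF_DYNAMODB_TABLE}" \
  -backend-config="encrypt=true"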

Examples

Provision CloudTrail with Terraform

Change directory to the required resources folder

cd /conf/cloudtrail

Run Terraform

init-terraform
terraform plan
terraform apply

Terraform Plan Output of Cloud Trail

Provision Backing Services with Terraform

Change directory to the required resources folder

cd /conf/backing-services

Run Terraform

init-terraform
terraform plan
terraform apply

Terraform Plan Output of VPC and Subnets

Repeat for all other projects in the solution (dns, acm, etc.).

Build and Release geodesic shell

Run make docker/build to build the terraform modules into the geodesic shell container.
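
Example

make docker/build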

All done. All AWS resources are now up and running.