Concepts

SweetOps is built on a number of high-level concepts and terms that are critical to understand before getting started. In this document, we break down these concepts to help you better understand our conventions as we introduce them.

Components

Components are opinionated, self-contained units of infrastructure as code that solve one specific problem or use case. SweetOps has two flavors of components:

  1. Terraform: Stand-alone root modules that implement some piece of your infrastructure. For example, typical components might be an EKS cluster, RDS cluster, EFS filesystem, S3 bucket, DynamoDB table, etc. You can find the full library of SweetOps Terraform components here. We keep these types of components in the components/terraform/ directory within the infrastructure repository.

  2. Helmfiles: Stand-alone applications deployed to Kubernetes using helmfile. For example, typical helmfiles might deploy the DataDog agent, cert-manager controller, nginx-ingress controller, etc. Similarly, the full library of SweetOps Helmfile components is on GitHub. We keep these types of components in the components/helmfile/ directory within the infrastructure repository.

One important distinction about components that is worth noting: components are opinionated “root” modules that typically call other child modules. Components are the building blocks of your infrastructure. This is where you define all the business logic for how to provision some common piece of infrastructure like ECR repos (with the ecr component) or EKS clusters (with the eks/cluster component). Our convention is to stick components in the components/terraform directory and to use a modules/ subfolder to provide child modules intended to be called by the components.
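As a rough illustration only (the exact layout will vary by organization; the component names are examples taken from above), a repository following these conventions might look something like:

infrastructure/
  components/
    terraform/
      ecr/
        main.tf
        variables.tf
        modules/          # child modules intended to be called only by components
      eks/
        cluster/
    helmfile/
      cert-manager/
      datadog/
  stacks/
    uw2-dev.yaml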

caution

We do not recommend consuming one Terraform component inside of another, as that would defeat the purpose; each component is intended to be a loosely coupled unit of IaC with its own lifecycle. Furthermore, since components define a state backend, Terraform does not support calling them from other modules.

Stacks

Stacks are a way to express the complete infrastructure needed for an environment using a standard YAML configuration format developed by Cloud Posse. Stacks consist of components and the variable inputs to those components. For example, you configure a stack for each AWS account and then reference the components which comprise that stack. The more modular the components, the easier it is to quickly define a stack without writing any new code.

Here is an example stack defined for a Dev environment in the us-west-2 region:

# Filename: stacks/uw2-dev.yaml
import:
  - eks/eks-defaults

vars:
  stage: dev

terraform:
  vars: {}

helmfile:
  vars:
    account_number: "1234567890"

components:
  terraform:

    dns-delegated:
      vars:
        request_acm_certificate: true
        zone_config:
          - subdomain: dev
            zone_name: example.com

    vpc:
      vars:
        cidr_block: "10.122.0.0/18"

    eks:
      vars:
        cluster_kubernetes_version: "1.19"
        region_availability_zones: ["us-west-2b", "us-west-2c", "us-west-2d"]
        public_access_cidrs: ["72.107.0.0/24"]

    aurora-postgres:
      vars:
        instance_type: db.r4.large
        cluster_size: 2

    mq-broker:
      vars:
        apply_immediately: true
        auto_minor_version_upgrade: true
        deployment_mode: "ACTIVE_STANDBY_MULTI_AZ"
        engine_type: "ActiveMQ"

  helmfile:

    external-dns:
      vars:
        installed: true

    datadog:
      vars:
        installed: true
        datadogTags:
          - "env:uw2-dev"
          - "region:us-west-2"
          - "stage:dev"

Great, so what can you do with a stack? Stacks are meant to be a language- and tool-agnostic way to describe infrastructure, but how to use the stack configuration is up to you. We provide the following ways to utilize stacks today:

  1. Atmos: Atmos is a command-line tool that enables CLI-driven stack utilization and supports workflows around terraform, helmfile, and many other commands.

  2. terraform-provider-utils: Our Terraform provider for consuming stack configurations from within HCL/Terraform.

  3. Spacelift: By using the terraform-spacelift-cloud-infrastructure-automation module, you can configure Spacelift to continuously deliver components. Read up on why we Use Spacelift for GitOps with Terraform.

Catalogs

Catalogs in SweetOps are collections of shareable and reusable configurations. Think of the configurations in catalogs as defining archetypes (a very typical example of a certain thing) of configuration (e.g. s3/public and s3/logs would be two archetypes of S3 buckets). They are also convenient for managing Terraform. These are typically YAML configurations that can be imported and provide solid baselines to configure security, monitoring, or other 3rd-party tooling. Catalogs enable an organization to codify its configuration best practices and share them. We use this pattern both with our public terraform modules as well as with our stack configurations (e.g. in the stacks/catalog folder).
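As a hedged sketch of the pattern (the component and variable names below are illustrative, not taken from an actual catalog), a catalog entry might define baseline settings that each stack then imports and optionally overrides:

# Filename: stacks/catalog/s3/logs.yaml (illustrative)
components:
  terraform:
    s3-logs:
      vars:
        enabled: true
        acl: "log-delivery-write"
        versioning_enabled: false

# Filename: stacks/uw2-dev.yaml (imports the catalog entry and overrides as needed)
import:
  - catalog/s3/logs

components:
  terraform:
    s3-logs:
      vars:
        versioning_enabled: true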

SweetOps provides many examples of how to use the catalog pattern to get you started.

Today, SweetOps provides a few important catalogs:

  1. DataDog Monitors: Quickly bootstrap your SRE efforts by utilizing some of these best practice DataDog application monitors.

  2. AWS Config Rules: Quickly bootstrap your AWS compliance efforts by utilizing hundreds of AWS Config rules that automate security checks against many common services.

  3. AWS Service Control Policies: Define which permissions you want to allow or deny in your organization's member accounts.

In the future, you’re likely to see additional open-source catalogs for OPA rules and tools to make sharing configurations even easier. But it is important to note that how you use catalogs is really up to you to define, and the best catalogs will be specific to your organization.

Collections

Collections are groups of stacks.

Segments

Segments are interconnected networks. For example, a production segment connects all production-tier stacks, while a non-production segment connects all non-production stacks.

Primary vs Delegated

Primary vs Delegated is an implementation pattern in SweetOps. This is most easily described when looking at the example of domain and DNS usage in a multi-account AWS organization: SweetOps takes the approach that the root domain (e.g. example.com) is owned by a primary AWS account where the apex zone resides. Subdomains on that domain (e.g. dev.example.com) are then delegated to the other AWS accounts via an NS record on the primary hosted zone which points to the delegated hosted zone’s name servers.

You can see an example of this pattern in the dns-primary and dns-delegated components.
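Sketched in stack configuration terms (a rough illustration; the dns-primary variable name below is a placeholder, while the dns-delegated zone_config mirrors the stack example above), the pattern looks roughly like this:

# In the primary account's stack: own the apex zone for the root domain
components:
  terraform:
    dns-primary:
      vars:
        domain_names:          # illustrative variable name
          - example.com

# In the dev account's stack: request a delegated subdomain zone
components:
  terraform:
    dns-delegated:
      vars:
        zone_config:
          - subdomain: dev
            zone_name: example.com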

Live vs Model (or Synthetic)

Live represents something that is actively being used. It differs from stages like “Production” and “Staging” in the sense that both of those stages are “live” and in use. Terms like “Model” and “Synthetic”, on the other hand, refer to something that is similar but not in use by end users. For example, a live production vanity domain of acme.com might have a synthetic vanity domain of acme-prod.net.

Docker Based Toolbox (aka Geodesic)

In the landscape of developing infrastructure, there are dozens of tools that we all need on our personal machines to do our jobs. In SweetOps, instead of having you install each tool individually, we use Docker to package all of these tools into one convenient image that you can use as your infrastructure automation toolbox. We call it Geodesic and we use it as our DevOps automation shell and as the base Docker image for all of our DevOps tooling.

Geodesic is a DevOps Linux Distribution packaged as a Docker image that provides users the ability to utilize atmos, terraform, kubectl, helmfile, the AWS CLI, and many other popular tools that comprise the SweetOps methodology without having to invoke a dozen install commands to get started. It’s intended to be used as an interactive cloud automation shell, a base image, or in CI/CD workflows to ensure that all systems are running the same set of versioned, easily accessible tools.

Vendoring

Vendoring is a strategy of importing external dependencies into a local source tree or VCS. Many languages (e.g. NodeJS, Golang) natively support the concept. However, many other tools, notably terraform, do not address how to do vendoring.

There are a few reasons to do vendoring. Sometimes the tools we use do not support importing external sources. Other times, we need to ensure we have full control over the lifecycle or versioning of some code in case the external dependencies go away.

Our current approach to vendoring third-party software dependencies is to use vendir when needed.

Example use-cases for Vendoring:

  1. Terraform is one situation where vendoring is needed. While terraform supports child modules pulled from remote sources, components (aka root modules) cannot be pulled from remotes.

  2. GitHub Actions does not currently support importing remote workflows. Using vendir, we can easily vendor them into the repository (see the sketch below).
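As a hedged sketch of the GitHub Actions use case above (the repository URL, paths, and version are placeholders), a vendir.yml might look something like this:

# Filename: vendir.yml (placeholder repository, paths, and version)
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
  - path: vendor                     # where vendored dependencies land in this repo
    contents:
      - path: shared-workflows       # subdirectory for this dependency
        git:
          url: https://github.com/acme/shared-workflows   # hypothetical source repo
          ref: v1.0.0                # pin an explicit version

Running vendir sync then copies the pinned contents into vendor/shared-workflows, which is committed to the repository.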

Generators

Generators in SweetOps are the pattern of producing code or configuration when existing tools have shortcomings that cannot be addressed through standard IaC. This is best explained through our use-cases for generators today:

  1. In order to deploy AWS Config rules to every region enabled in an AWS account, we need to specify a provider block and consume a compliance child module for each region. Unfortunately, Terraform does not currently support the ability to loop over providers, which results in needing to manually create these provider blocks for each region that we’re targeting. On top of that, not every organization uses the same types of accounts, so a hardcoded solution is not easily shared. Therefore, to avoid tedious manual work, we use the generator pattern to create the .tf files which specify a provider block for each region and the corresponding AWS Config child module.

  2. Many tools for AWS work best when profiles have been configured in the AWS configuration file (~/.aws/config). If we’re working with dozens of accounts, keeping this file current on each developer’s machine is error-prone and tedious. Therefore, we use a generator to build this configuration based on the accounts enabled.

  3. Terraform backends do not support interpolation. Therefore, we define the backend configuration in our YAML stack configuration and use atmos as our generator to build the backend configuration files for all components.
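For example, under the convention atmos follows (a hedged sketch; the bucket and table names are placeholders), the backend settings live alongside the other Terraform settings in the stack YAML, and atmos renders a backend configuration file per component from them:

# Backend settings defined once in the stack configuration (placeholder values)
terraform:
  backend_type: s3
  backend:
    s3:
      encrypt: true
      bucket: "acme-uw2-dev-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "acme-uw2-dev-tfstate-lock"
      region: "us-west-2"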

The 4-Layers of Infrastructure

We believe that infrastructure fundamentally consists of 4 layers. We build infrastructure starting from the bottom layer and work our way up.

Each layer builds on the previous one, and our structure is only as solid as its foundation. The tools at each layer vary and augment the underlying layers. Every layer has its own SDLC and is free to update independently of the other layers. The 4th and final layer is where your applications are deployed. While we believe in using terraform for layers 1-3, it’s acceptable to introduce another layer of tools at layer 4 to support application developers (e.g. Serverless Framework, CDK, etc.), since we’ve built a solid, consistent foundation.