Kube2IAM

Dependencies

None

Install

Enable Assumed Roles

Important

By default, the kops manifest that ships with Geodesic is configured to permit nodes to assume roles, so you can continue to the next step.

All Kubernetes node instance profiles should have permission to assume roles.

To do this, the kops manifest should define the following additionalPolicies. By default, we include this in the manifest.yaml that ships with Geodesic.

manifest.yaml

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: us-west-2.staging.example.com
spec:
  additionalPolicies:
    nodes: |
        [
          {
            "Sid": "assumeClusterRole",
            "Action": [
              "sts:AssumeRole"
            ],
            "Effect": "Allow",
            "Resource": ["*"]
          }
        ]

Follow the instructions to apply the changes to the kops cluster.
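For example, assuming you manage the cluster with kops directly (the cluster name below is illustrative and matches the manifest above):

```shell
# Preview the changes kops would make to the cluster
kops update cluster us-west-2.staging.example.com

# Apply the changes (updates the node instance profile's IAM policy)
kops update cluster us-west-2.staging.example.com --yes
```

Changes to additionalPolicies are applied to the existing instance profile, so a rolling update of the nodes is typically not required for this change.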

Kops Integration

Now, to leverage IAM Roles with your kops cluster, you'll need to install kube2iam. There are a number of ways to go about this, but we recommend using our Helmfiles.

Install with Helmfile

Install `kube2iam`

helmfile --selector chart=kube2iam sync

This service depends on the following environment variables:

  • AWS_REGION - AWS region

Environment variables can be specified in Geodesic Module’s Dockerfile or using Chamber storage.
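For example, AWS_REGION can be set in the Geodesic Module's Dockerfile with an `ENV` instruction, or stored with chamber and exported at run time (the chamber service name `kops` below is illustrative):

```shell
# Store the region with chamber (service name is illustrative)
chamber write kops aws_region us-west-2

# chamber exec exports AWS_REGION into the environment for the command
chamber exec kops -- helmfile --selector chart=kube2iam sync
```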

Install with Custom Helmfile

Add this code snippet to your Kubernetes Backing Services Helmfile.

helmfile.yaml

repositories:
- name: "stable"
  url: "https://kubernetes-charts.storage.googleapis.com"

releases:
- name: "iam"
  namespace: "kube-system"
  labels:
    chart: "kube2iam"
    component: "iam"
    namespace: "kube-system"
    vendor: "jtblin"
    default: "true"
  chart: "stable/kube2iam"
  version: "0.8.5"
  set:
  - name: "tolerations[0].key"
    value: "node-role.kubernetes.io/master"
  - name: "tolerations[0].effect"
    value: "NoSchedule"
  - name: "aws.region"
    value: 'us-west-2'
  - name: "extraArgs.auto-discover-base-arn"
    value: "true"
  - name: "host.iptables"
    value: "true"
  - name: "host.interface"
    value: "cali+"
  - name: "resources.limits.cpu"
    value: "200m"
  - name: "resources.limits.memory"
    value: "256Mi"
  - name: "resources.requests.cpu"
    value: "50m"
  - name: "resources.requests.memory"
    value: "128Mi"

Then run `helmfile sync` to install.
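After the sync completes, you can verify that the kube2iam DaemonSet is running on every node (the label selector below is an assumption and may vary by chart version):

```shell
# Check the DaemonSet rollout in kube-system
kubectl get daemonset -n kube-system -l app=kube2iam

# Confirm one pod is scheduled per node, including masters
kubectl get pods -n kube-system -l app=kube2iam -o wide
```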

Usage

Add an annotation like iam.amazonaws.com/role: "some-aws-role" to the Kubernetes resource (e.g. Deployment, CronJob, ReplicaSet, Pod, etc). Replace some-aws-role with an IAM role that you've previously provisioned.

We recommend provisioning all IAM roles using Terraform modules like [terraform-aws-kops-external-dns](https://github.com/cloudposse/terraform-aws-kops-external-dns), which provisions an IAM role for accessing Route53.

Here are some examples:

deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: chartmuseum-deployment
spec:
  replicas: 3
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: s3-access-role
      labels:
        app: chartmuseum
    spec:
      containers:
      - name: chartmuseum
        image: chartmuseum/chartmuseum:v0.5.2
        ports:
        - containerPort: 80
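To confirm that an annotated pod is actually receiving the role's credentials, you can query the EC2 metadata endpoint from inside the pod; kube2iam intercepts this call and serves credentials for the annotated role (the pod name below is illustrative):

```shell
# Open a shell in one of the annotated pods (pod name is illustrative)
kubectl exec -it chartmuseum-deployment-abc123 -- sh

# From inside the pod, list the available IAM role; kube2iam intercepts
# this metadata request and should return the annotated role name
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
```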

values.yaml

replica:
  annotations:
    iam.amazonaws.com/role: s3-access-role

helmfile

repositories:
- name: stable
  url: https://kubernetes-charts.storage.googleapis.com

releases:
- name: charts
  chart: stable/chartmuseum
  version: 1.3.1
  set:
  - name: replica.annotations.iam.amazonaws\.com/role
    value: s3-access-role

Note

There is no unified specification for the structure of helm chart values; different charts may have very different value structures. The only way to know for sure what is supported is to refer to the chart's manifests. Additionally, there is no schema validation for values.yaml, so specifying an incorrect structure will not raise any alarms.

The examples provided here are based on the [stable/chartmuseum](https://github.com/kubernetes/charts/blob/master/stable/chartmuseum) chart.