Decide on Argo CD Architecture
Deciding on the architecture for Argo CD involves considering multiple clusters, plugin management, and Kubernetes integration. We present some recommended strategies and considerations for deploying Argo CD, addressing potential risks, and detailing common deployment patterns.
Considerations
- Multiple Argo CD instances provide a means to systematically upgrade Argo CD in different environments.
- Plugins and their tooling live inside the Argo CD image, so adding or upgrading a plugin requires a disruptive restart.
- Restarts of Argo CD are disruptive to in-flight deployments.
- The more Argo CD servers there are, the harder it is to visualize the overall delivery process.
- Each Argo CD server must be integrated with each cluster it deploys to (see the sketch below).
- Argo CD can automatically deploy to the local cluster by installing a service account.
Our recommendation is to deploy one Argo CD per cluster.
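With one Argo CD per cluster, the local cluster is already reachable at https://kubernetes.default.svc through Argo CD's own service account. If an instance ever needs to deploy to an additional cluster, registration is declarative: a Secret labeled `argocd.argoproj.io/secret-type: cluster` carries the API endpoint and credentials. A minimal sketch; the names, endpoint, and credentials below are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: staging-cluster                      # hypothetical name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # tells Argo CD this Secret describes a cluster
type: Opaque
stringData:
  name: staging                              # display name in the Argo CD UI/CLI
  server: https://staging.example.com:6443   # placeholder API server endpoint
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": { "caData": "<base64-encoded CA certificate>" }
    }
```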
Introduction
Argo CD is a tool designed specifically for continuous delivery to Kubernetes. It is similar to specialized platforms like Terraform Cloud, which focuses on deploying with Terraform. Argo CD does not support deployments outside of Kubernetes, such as uploading files to a bucket. While it does support plugins, these plugins are not intended to extend its deployment capabilities beyond Kubernetes.
Two escape hatches exist for slight deviations, such as deployments involving Kubernetes-adjacent tooling like Helm, Kustomize, and similar:
- Using Argo CD config management plugins that shell out and generate Kubernetes manifests (see the sketch below)
- Using the Operator Pattern to deploy Custom Resources that perform some operation
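As an illustration of the first escape hatch, a config management plugin is registered in Argo CD's configuration and simply prints Kubernetes manifests to stdout. The sketch below uses the older argocd-cm style of registration (newer Argo CD releases register plugins through a repo-server sidecar instead, but the idea is the same); the plugin name and commands are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    - name: render-script                 # hypothetical plugin name
      init:                               # optional preparation step (e.g. fetch chart dependencies)
        command: ["sh", "-c"]
        args: ["helm dependency build"]
      generate:                           # must print Kubernetes manifests to stdout
        command: ["sh", "-c"]
        args: ["helm template . --values values.yaml"]
```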
Risks
While the Operator Pattern is ideal in theory, the reality is less than ideal:
- Operators are frequently abandoned or not regularly maintained (few are lucky enough to achieve traction).
- Most operators are in an alpha state, seemingly created as pet projects to scratch an itch.
- Upgrading operators is non-trivial because you cannot have two versions deployed at the same time.
- Operators don't automatically play well with other operators. For example, how would you pass a secret written by the ExternalSecrets operator to a Terraform operator?
- When an operator fails, it might not break the pipeline, and debugging is harder due to the asynchronous nature of reconciliation.
Use-Cases
Here are some of the most common deployment patterns we see, and some ways in which those could be addressed:
- Deploy a generic application to Kubernetes
  - Raw manifests are supported natively.
  - Render Helm charts to manifests, then proceed as usual.
  - Secrets and secret operators: use the ExternalSecrets Operator.
- Deploy a generic Lambda
  - Convert to the Serverless Framework.
  - Convert to Terraform.
- Deploy Serverless Framework Applications
  - Serverless Framework applications render to CloudFormation; see "Deploy Infrastructure with CloudFormation" below.
- Deploy Infrastructure with CloudFormation
  - Wrap CloudFormation in a Custom Resource.
- Deploy a Single Page Application to S3
  - This does not fit well into the Kubernetes-centric Argo CD model, since it amounts to uploading files to a bucket rather than applying manifests.
- Deploy Infrastructure with Terraform
  - Most Terraform operators are alpha (e.g. https://github.com/hashicorp/terraform-k8s). Something feels wrong about deploying Kubernetes with Terraform and then running Terraform inside of Kubernetes.
  - While it works, it is a bit advanced to express.
- Deploy Database Migrations
  - Replicated (an enterprise application delivery platform) maintains SchemaHero (https://github.com/schemahero/schemahero), which only supports DDL.
  - Standard Kubernetes Jobs calling a migration tool (see the sketch after this list).
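For the database-migration case, a plain Kubernetes Job works well because Argo CD can run it as a sync hook before the rest of the application is applied. A minimal sketch; the image, command, and secret name are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: api-db-migrate                              # hypothetical name
  annotations:
    argocd.argoproj.io/hook: PreSync                # run before the main sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: acme/api-migrations:1.2.3          # hypothetical image bundling the migration tool
          command: ["./migrate", "up"]              # hypothetical CLI invocation
          envFrom:
            - secretRef:
                name: api-db-credentials            # e.g. written by the ExternalSecrets Operator
```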
Pros
- Simplifies dependency management across components (eventually, Argo CD will redeploy everything).
- Protects the Kubernetes API from requiring public access (reduced attack surface).
- Powerful CD tool for Kubernetes, supporting multiple rollout strategies for Pods.
- Nice UI.
- Easy to use many kinds of deployment toolchains (as long as they run in the Argo CD Docker image).
- Deployments feel faster.
- A "backup of the Kubernetes cluster" out of the box (the desired state lives in Git).
- A consistent framework for continuous deployment regardless of CI platform.
Cons
- Breaks the immediate feedback loop from commit to deployment (deployments with Argo CD are asynchronous).
- The Application custom resource must be created in the namespace where Argo CD is running (see the example after this list).
- Application names must be unique per Argo CD instance.
- A custom deployment toolchain (anything other than raw Kubernetes resources, Helm, Kustomize, or Jsonnet) requires building a custom Docker image for Argo CD and redeploying it.
- Redeploying Argo CD is potentially disruptive to running deployments (like restarting Jenkins) and therefore must be planned.
- Updating plugins requires redeploying Argo CD, since the tools must exist in the Argo CD Docker image.
- Access management gains an additional layer: GitHub repo access + Argo CD projects + RBAC. We can end up with a "Helm Tiller" type of problem.
- An additional self-hosted solution to operate (whereas a classic deploy step with Helm 3 runs on CI and only needs kubectl).
- Repository management (giving Argo CD access to private repos) does not support a declarative approach (the "repo pattern" workaround needs research).
- Argo CD is in the critical path of deployments and has its own SDLC.
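For reference, the Application resource that several of the cons above refer to looks roughly like the sketch below. It lives in the argocd namespace, its metadata.name must be unique within the instance, and here it renders a Helm chart from a Git repo; the repo URL, paths, and names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: acme-api-prod                          # must be unique per Argo CD instance
  namespace: argocd                            # created in the namespace where Argo CD runs
spec:
  project: default
  source:
    repoURL: https://github.com/acme/deploy-prod.git   # placeholder Git repo
    targetRevision: main
    path: charts/acme-api                      # placeholder chart path
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc     # the local cluster
    namespace: acme-api
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```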
Infrastructure
Create a terraform-helm-argocd module:
- Deploy Argo CD with Terraform so it works well with the Spacelift setup for continuous delivery to Kubernetes.
- Use the terraform-helm-provider.
- Use projects/terraform/argocd/ in https://github.com/acme/infrastructure (do not bundle it with projects/terraform/eks/).
- Use Spacelift to deploy and manage it with GitOps.
- Use the terraform-github-provider to create a deployment repository (e.g. deploy-prod) to manage Argo CD, and to manage all branch protections (confirm with acme).