Decide on Strategy for Preview Environments (e.g. Review Apps)
Considerations
- Use as few AWS-proprietary services (e.g. DynamoDB, SNS, SQS) as possible, because provisioning Terraform for ephemeral environments is very slow and complicated, and is therefore not recommended
- Instead, use tools that have managed-service equivalents in AWS (e.g. a MongoDB container in place of DocumentDB, a Kafka container in place of MSK, MySQL, Postgres); see the sketch after this list
- Usage of API Gateway will require running Terraform and will complicate preview environments
- Preview environments are poor substitutes for remote development environments due to the slow feedback loop (e.g. commit, push, build, deploy)
- Preview environments are not a replacement for staging and QA environments, which should have configurations that more closely resemble production
- Multiple microservices are much harder to bring up if any sort of version pinning is required
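To make the first two considerations concrete, here is a minimal sketch of a MongoDB container standing in for DocumentDB in a preview namespace. The namespace, image tag, and layout are assumptions, not a prescription; because DocumentDB speaks the MongoDB wire protocol, a stock mongo image is usually a close-enough stand-in for previews.

```yaml
# Illustrative only: a MongoDB container substituting for DocumentDB in a
# hypothetical per-PR namespace. Image tag and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  namespace: pr-123-new-service   # hypothetical preview namespace
spec:
  replicas: 1
  selector:
    matchLabels: { app: mongodb }
  template:
    metadata:
      labels: { app: mongodb }
    spec:
      containers:
        - name: mongodb
          image: mongo:5.0        # stands in for DocumentDB; no Terraform required
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: pr-123-new-service
spec:
  selector:
    app: mongodb
  ports:
    - port: 27017
```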
 
Use helmfile with raw chart
We do not recommend this approach.
Pros
- Very rapid to prototype; no need to commit to any chart or convention
- Repos/apps share nothing, so they won’t be affected by breaking changes in a shared chart (mitigated by versioning charts)
 
Cons
- This is the least DRY approach, and the manifests for services are not reusable across microservices
- Lots of manifests everywhere lead to inconsistency between services; adding a new convention/annotation requires updating every repo
- No standardization of how apps are described for Kubernetes (e.g. what we get with a custom Helm chart)
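For illustration, here is roughly what the raw-chart approach looks like in a helmfile.yaml. The chart repository and its values schema are assumptions (bedag/raw is one public "raw" chart); every microservice repo ends up carrying boilerplate like this, which is what makes the approach the least DRY.

```yaml
# helmfile.yaml — raw-chart sketch, shown only to make the cons concrete.
repositories:
  - name: bedag
    url: https://bedag.github.io/helm-charts

releases:
  - name: my-service
    namespace: pr-123-my-service
    chart: bedag/raw                      # generic chart that renders manifests verbatim
    values:
      - resources:
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: my-service
            spec:
              replicas: 1
              selector:
                matchLabels: { app: my-service }
              template:
                metadata:
                  labels: { app: my-service }
                spec:
                  containers:
                    - name: my-service
                      image: acme/my-service:pr-123   # hypothetical PR-built tag
```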
 
Use helmfile with custom chart
A slight improvement over using helmfile with the raw chart.
Pros
- Reusable chart between services; very DRY
- The chart can be tested and standardized to reduce variation in the applications deployed (e.g. a NodeJS chart, a Rails chart)
- A more conventional approach used by the community at large (not just Cloud Posse, but everyone using Helm and helmfile)
 
Cons
- Using helmfile is one more tool; newcomers often don’t appreciate the value it brings
- CIOps is slowly giving way to GitOps
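By contrast with the raw-chart sketch above, a release against a shared, versioned chart collapses to values. The repository URL, chart name, and values schema below are hypothetical; substitute your organization's chart (Cloud Posse's monochart is one public example of this pattern).

```yaml
# helmfile.yaml — shared-chart sketch; names and values schema are assumptions.
repositories:
  - name: org-charts
    url: https://charts.example.org       # hypothetical internal chart repository

releases:
  - name: my-service
    namespace: pr-123-my-service
    chart: org-charts/app                 # one tested chart reused by every microservice
    version: 1.4.2                        # pinning shields repos from breaking chart changes
    values:
      - image:
          repository: acme/my-service
          tag: pr-123                     # hypothetical PR-built tag
        ingress:
          host: pr-123-my-service.dev.acme.org
```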
 
Use GitHub Actions directly with helm
Pros
- Very easy to understand what is happening with “CIOps”
- Very easy to implement
 
Cons
- No record of the deployed state for preview environments in source control
- Requires granting GitHub Action runners direct Kubernetes administrative access in order to deploy Helm charts
- GitHub Action runners will need direct access to read any secrets needed to deploy the Helm releases (a mitigation is to use something like the sops-operator or the external-secrets operator)
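A minimal sketch of this “CIOps” flow follows. The cluster name, region, and secret names are assumptions; note that the runner ends up holding both cluster credentials and deploy-time secrets, which is exactly what the cons above describe.

```yaml
# .github/workflows/preview.yaml — sketch only; names are assumptions.
name: preview
on:
  pull_request:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure kubeconfig        # runner gets direct cluster access (a con)
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: aws eks update-kubeconfig --region us-east-1 --name dev-cluster
      - name: Deploy preview release
        run: |
          helm upgrade --install "pr-${{ github.event.number }}-my-service" ./chart \
            --namespace "pr-${{ github.event.number }}-my-service" --create-namespace \
            --set image.tag="pr-${{ github.event.number }}" \
            --wait
```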
Use GitHub Actions with ArgoCD and helm
For some additional context on ArgoCD, see Decide on ArgoCD Architecture.
Requirements
- How quickly should environments come online? e.g. 5 minutes or less
- How many backing services are required to validate your one service?
- Do you need to pin dependent services at specific versions for previews? If so, rearchitect how we do this
- How should we name DNS for previews?
  - The biggest limitation is ACM and wildcard certs, so we’ll need a flat namespace
  - e.g. https://pr-123-new-service.dev.acme.org (using the *.dev.acme.org ACM certificate)
  - URLs will be posted to the GitHub Status API so that environments are directly reachable from PRs
- Do we need to secure these environments? We recommend just locking down the ALB to internal traffic and using a VPN
- How will we handle databases for previews? How will we seed data? See Decide on Database Seeding Strategy for Ephemeral Preview Environments
- What is the effort to implement ArgoCD?
  - Very little; we have all the Terraform code to deploy ArgoCD
  - We need to change the GitHub Actions to (see the workflow sketch after this list):
    - build the Docker image (like we already do)
    - render the Helm chart to the raw k8s manifests
    - commit the manifests to the deployment repo when ready to deploy
- What is the simplest path we could take to implement, and which will developers have the easiest time understanding?
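A sketch of the render-and-commit flow described above, assuming a hypothetical acme/deployments repo that ArgoCD watches; paths, names, and the token secret are illustrative.

```yaml
# Workflow job sketch: render the chart to raw manifests, then commit them
# to the deployment repo. ArgoCD syncs whatever lands under previews/.
jobs:
  render-and-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Render the helm chart to raw k8s manifests
        run: |
          mkdir -p rendered
          helm template "pr-${{ github.event.number }}-my-service" ./chart \
            --set image.tag="pr-${{ github.event.number }}" \
            > rendered/manifests.yaml
      - name: Commit manifests to the deployment repo
        run: |
          git clone "https://x-access-token:${{ secrets.DEPLOY_REPO_TOKEN }}@github.com/acme/deployments.git"
          mkdir -p "deployments/previews/pr-${{ github.event.number }}"
          cp rendered/manifests.yaml "deployments/previews/pr-${{ github.event.number }}/"
          cd deployments
          git config user.name "ci-bot" && git config user.email "ci-bot@acme.org"
          git add . && git commit -m "preview: deploy PR ${{ github.event.number }}" && git push
```

Because the rendered manifests live in the deployment repo, the deployed state of every preview is recorded in source control, which addresses the main con of driving helm directly from CI.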
 
Patterns of Microservices
This is more of a side note: not all microservices organizations are the same. If you’re using microservices, please self-identify with some of these patterns, as it will help in understanding the drivers behind them and how they are implemented.
- As a result of acquisitions
- As a result of architecture design from the beginning (this is premature)
- As a result of needing to use different languages for specific purposes
- As a result of needing to scale for performance
  - If this is the case, we technically don’t need to do microservices; we just need to be able to control the entry point & routing (e.g. a “Microservices Monolith”)
  - For this to work, the monolith needs to be able to communicate with itself as a service (e.g. over gRPC) for local development. We see this with Go microservices; it can be done, when necessary, as a pattern to scale endpoints
  - Preview environments can still use gRPC, but over localhost (see the sketch after this list)
- As a result of wanting to experiment with multiple versions of the same service (e.g. using a service mesh)
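As referenced in the list above, here is a rough sketch of the “gRPC over localhost” shape for previews: both services run as containers in a single pod, so the gRPC address is simply localhost. All names, images, and ports are assumptions.

```yaml
# Illustrative "Microservices Monolith" preview deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith-preview
spec:
  replicas: 1
  selector:
    matchLabels: { app: monolith-preview }
  template:
    metadata:
      labels: { app: monolith-preview }
    spec:
      containers:
        - name: api
          image: acme/api:pr-123            # hypothetical PR-built images
          env:
            - name: USERS_GRPC_ADDR
              value: "localhost:50051"      # sibling container in the same pod
        - name: users
          image: acme/users:pr-123
          ports:
            - containerPort: 50051          # gRPC
```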
 
Related Design Decisions
Decide on Strategy for Preview Environments (e.g. Review Apps)
Internal preview environments cannot accept webhook callbacks from external services like Twilio.