Getting Started with Cloud Posse
This page will help you get started with Cloud Posse. You’ll be up and running in a jiffy!
Start by getting familiar with the geodesic design.
Create geodesic modules anywhere you want to logically organize infrastructure as code.
Get intimately familiar with Docker inheritance and multi-stage Docker builds. We use this pattern extensively.
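As a rough sketch of the multi-stage pattern: one stage carries the full build toolchain, and the final image copies in only the finished artifact. The application and base images below are hypothetical.

```dockerfile
# Build stage: full toolchain, compiles a hypothetical Go binary
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /app ./cmd/app

# Final stage: slim runtime image containing only the artifact
FROM alpine:3.20
COPY --from=builder /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

The `FROM ... AS builder` / `COPY --from=builder` pair is what keeps build-time dependencies out of the shipped image.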
Check out our terraform-root-modules for reference architectures to easily provision infrastructure.
- Get your local environment set up
- Make sure you’re familiar with Makefiles because we use them extensively for “executable documentation”.
- Review Docker compose
- Docker composition monorepo strategy
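The “executable documentation” idea can be sketched with a hypothetical Makefile, where each target both documents a workflow step and automates it (target names and commands are illustrative):

```makefile
.PHONY: deps plan apply

## Initialize providers and modules
deps:
	terraform init

## Show what would change, saving the plan
plan: deps
	terraform plan -out=tfplan

## Apply the previously saved plan
apply: plan
	terraform apply tfplan
```

Running `make apply` then walks the documented steps in order, so the README and the automation can never drift apart.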
Tons of tools and CLIs are used as part of our solution. We distribute these tools in a couple of different ways.
- Geodesic bundles most of these tools as part of the geodesic base image
- Our packages repo provides an embeddable Makefile system for installing packages in other contexts (e.g. build-harness). This can also be used for local (“native”) development contexts.
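The embeddable pattern boils down to fetching the shared Makefile system and including its targets. This is a sketch only; the actual bootstrap target and variable names in the packages/build-harness repos may differ.

```makefile
# Assumed variable name; points at a local vendored copy
BUILD_HARNESS_PATH ?= vendor/build-harness

## Fetch the shared Makefile system once (illustrative bootstrap)
init:
	git clone --depth=1 https://github.com/cloudposse/build-harness.git $(BUILD_HARNESS_PATH)

# Pull in the shared targets; '-include' tolerates a missing file pre-init
-include $(BUILD_HARNESS_PATH)/Makefile
```

After `make init`, all of the shared targets become available in the consuming repo alongside its own.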
Here are some of the most important tools to be aware of:
If using Kubernetes, then also review these tools:
Kubernetes is a massive part of our solutions. Our Kubernetes documentation is geared towards leveraging kops.
Helm is central to how we deploy all services on kubernetes.
- helm is essentially the package manager for Kubernetes (like gem for Ruby)
- helm charts are how Kubernetes resources are templatized using Go templates
- helm charts quickstart is our “cheatsheet” for getting started with Helm Charts.
- helm registries are used to store helm charts, which are essentially tarball artifacts.
- chartmuseum is deployed as the chart repository
- helmfiles are used to define a distribution of helm charts. So if you want to install prometheus, grafana, nginx-ingress, kube-lego, etc., we use a helmfile.yaml to define how that’s done.
- chamber is used to manage secrets and provide them when provisioning with helmfile. It’s also a big part of our overall story on secrets management.
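A minimal helmfile.yaml describing such a distribution might look like this (repository, chart names, and values are illustrative):

```yaml
repositories:
  - name: stable
    url: https://charts.helm.sh/stable

releases:
  # One release entry per chart in the distribution
  - name: nginx-ingress
    namespace: kube-system
    chart: stable/nginx-ingress
    values:
      - controller:
          replicaCount: 2

  - name: prometheus
    namespace: monitoring
    chart: stable/prometheus
```

Since chamber exposes secrets as environment variables, a common invocation is `chamber exec <service> -- helmfile sync`, where `<service>` is whatever namespace the secrets are stored under.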
CI/CD with Codefresh
Our standard CI/CD pipeline describes in detail each step and what it does.
Codefresh runs Docker containers for each build step. We provide a dockerized build-harness to distribute common build tools that we use as part of the build steps in the pipeline.
Learn how Codefresh is integrated with Kubernetes. This is also the same process used to add integrations for multiple clusters.
We use some Terraform modules to provision resources for Codefresh, like a chamber user.
Securely deploy apps with secrets.
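As a rough sketch, a Codefresh pipeline is defined in a codefresh.yml where each step runs inside a Docker image. The step names, image name, and commands below are illustrative, not our actual pipeline.

```yaml
version: '1.0'
steps:
  build_image:
    type: build
    description: Build the application image
    image_name: example-org/example-app
    tag: ${{CF_SHORT_REVISION}}

  test:
    description: Run tests inside the freshly built image
    image: ${{build_image}}
    commands:
      - make test
```

This is where a dockerized build-harness fits naturally: a step’s `image` can point at a tools image so every build step has the same toolchain.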
Backing Services (Coming Soon)
Check out our docs on Kubernetes backing services.
Platform Services (Coming Soon)
Check out our docs on Kubernetes platform services.
Inevitably, at some point you will need to optimize for performance. We’ve documented some of the best ways to get started.
First, make sure you’re familiar with Kubernetes resource management.
- Scale Cluster Horizontally - Scale Kubernetes cluster horizontally by adding nodes
- Scale Cluster Vertically - Scale Kubernetes cluster vertically by using different types of EC2 instances
- Scale Pods Horizontally - Scale Kubernetes pods horizontally by increasing the replica count
- Scale Pods Vertically - Scale Kubernetes pods vertically by increasing CPU and Memory limits
- Scale Nginx Ingress Horizontally - Scale Nginx Ingress pods horizontally by increasing the replica count
- Scale Nginx Ingress Vertically - Scale Nginx Ingress vertically by increasing CPU and Memory limits
- Tune Nginx - Tune Nginx parameters (timeouts, worker processes, logs, http)
- Optimize databases - Optimize database queries and indexes
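Scaling pods horizontally, for instance, can be automated with a HorizontalPodAutoscaler instead of a fixed replica count. The Deployment name and thresholds below are illustrative.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web            # hypothetical Deployment to scale
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2       # floor for availability
  maxReplicas: 10      # ceiling for cost control
  targetCPUUtilizationPercentage: 70
```

The controller grows or shrinks the replica count to hold average CPU utilization near the target, which covers the common case before manual tuning is needed.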
We provide a staggering number of Terraform modules on our GitHub. This number grows every week, and we’re also accepting module contributions.
Our modules are broken down into specific areas of concern:
Before writing your own modules, review our Best Practices for working with Terraform modules.
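Consuming one of these modules from the Terraform registry looks roughly like this, shown with terraform-null-label; the input values are illustrative:

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "~> 0.25"

  # Illustrative naming inputs
  namespace = "eg"
  stage     = "prod"
  name      = "app"
}

output "id" {
  # Consistent, delimited ID derived from the inputs above
  value = module.label.id
}
```

Pinning `version` is the key habit: it keeps upstream module releases from changing your infrastructure unexpectedly.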
Monitoring (Coming Soon)
In the meantime, review some of our docs on monitoring and alerting.
If running on Kubernetes, review our “backing services” documentation for monitoring.
After you’ve gotten familiar with how monitoring is working, you’ll want to run some load tests to ensure everything meets expectations. We provide some of our “best practices”, workflows, scripts and scenarios for load and performance testing of websites and applications (in particular those deployed on Kubernetes clusters).
Our strategy for load and performance testing breaks down like this:
- Review Load Testing Tools - how we select and setup our load testing tools
- Example Testing Scenarios - how we implement load testing scenarios
- Run Tests and Analyze Results - how we do load testing and analyze the results
- Optimization and Tuning Procedures - optimization and tuning steps that we usually perform after running load tests
Secrets (Coming Soon)
Have a look at our docs on secrets management.
Everything we provide on our GitHub wouldn’t have been possible if it weren’t for our phenomenal customers and the support of the community contributing bug-fixes, filing issues and submitting a steady stream of Pull Requests.
We welcome any Terraform module submissions, Helm charts, and generally any other useful tools that others could benefit from. Our only requirement is that they be licensed under
Drop us a line at [email protected] to get in touch with us about contributing.
Review our glossary if there are any terms that are confusing.
Wherever you find the documentation lacking, file an issue in our docs repo.
Join our Slack Community and speak directly with the maintainers.
We provide “white glove” DevOps support. Get in touch with us today!
Schedule Time with us.