A year and a half ago I became one of two developers responsible for migrating our infrastructure to Kubernetes. SimpleNexus, the company I work for, had made the decision to migrate about a year earlier but hadn't been able to execute on that decision and the project landed in our hands. We migrated our main monolithic application and several smaller microservices over 5 months with zero downtime. While there was a steep learning curve for Kubernetes, the benefits of migrating were critical in allowing us to scale our traffic more than 5x in the year after we completed the project. The migration itself wasn't easy, but several open-source projects were instrumental in our ability to migrate quickly and successfully.
The first pain point we ran into with Kubernetes was how to store the state of the cluster. While we were keeping our configuration in source control, one of us would apply a change to our test cluster using kubectl from the branch we were working on, and it would overwrite a different change the other developer had pushed. We spent a lot of time going in circles that first week as we were building the initial services. We also knew that we didn't want to manage a production cluster by hand with kubectl. As with anything manual, mistakes are too easy to make: it would be too easy to apply configuration changes that weren't intended, not to mention the lack of checks and balances on deploying configuration changes. Flux solved this problem for us. It enabled us to manage our system through a pattern called GitOps. Weaveworks, the company responsible for Flux, describes GitOps better than I could:
"GitOps is a way to do Kubernetes cluster management and application delivery. It works by using Git as a single source of truth for declarative infrastructure and applications. With GitOps, the use of software agents can alert on any divergence between Git with what's running in a cluster, and if there's a difference, Kubernetes reconcilers automatically update or rollback the cluster depending on the case. With Git at the center of your delivery pipelines, developers use familiar tools to make pull requests to accelerate and simplify both application deployments and operations tasks to Kubernetes."
Flux is one tool you can use to implement GitOps. It keeps clusters in sync with a git repository: it runs as a controller inside your Kubernetes cluster, continuously polls the repository (v2 adds webhook support), and applies changes to the cluster as new commits are checked in. Flux is the single biggest tool that has allowed us to run our Kubernetes changes through full CI/CD. We commit a change to the state of the cluster, run automated tests (using kube-score and kubeval, among other things), and if everything passes we merge to the branch that syncs with the cluster.
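As a rough sketch of what that pipeline can look like (the workflow layout, manifest paths, and tool installation are assumptions for illustration, not our actual configuration), a GitHub Actions job might run the checks like this:

```yaml
# Hypothetical CI workflow; assumes kubeval and kube-score are
# already installed on the runner and manifests live in ./manifests.
name: validate-manifests
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Validate every manifest against the Kubernetes API schema
      - run: kubeval --strict $(find ./manifests -name '*.yaml')
      # Score manifests against best practices (resource limits, probes, etc.)
      - run: kube-score score ./manifests/*.yaml
```

If either check fails, the pull request stays unmerged and the cluster never sees the bad configuration.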
Flux is critical to releasing new code to services running in the cluster as well. When we have a new version of a service built, we can use Flux to update the image of a specific deployment, which kicks off a rollout inside Kubernetes. If at any point we need to roll back a change, it becomes as easy as reverting a commit, and Flux will apply the reverted state to the cluster. Flux can also watch container image repositories and automatically deploy new versions as images are pushed.
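The automated image updates in Flux v1 are driven by annotations on the workload itself. A minimal sketch (the service name, container name, registry, and tag policy here are assumptions) looks like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
  annotations:
    fluxcd.io/automated: "true"      # let Flux commit new image versions to git
    fluxcd.io/tag.app: semver:~1.0   # only follow 1.0.x tags for the "app" container
spec:
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
        - name: app
          image: registry.example.com/hello-service:1.0.0
```

Because Flux commits the image bump back to the repository, the git history stays the single source of truth even for automated deploys.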
GitOps can become very verbose if you don't have a templating system. For example, if you have a staging and a production cluster, you most likely want those two clusters to be almost identical. The differences might include resource limits, the number of replicas running for a deployment, and the images the services run. Without a templating system, you would have to copy and paste all the files for each environment and make the environment-specific tweaks in each copy. Environments can quickly drift out of sync this way, because every change has to be edited into each environment's files separately.
Kustomize is the tool we ended up using for our templates. The nice thing about Kustomize is that it leaves your original YAML files untouched; you don't have to add templating syntax to them. I think this makes the base files easier to read and reason about. You then write a patch file for the environment you are deploying into, and Kustomize merges it with the original template using the native Kubernetes patch APIs. Here is an example file structure using Kustomize:
```
├── base
│   ├── example
│   │   ├── example.yaml
│   │   ├── kustomization.yaml
│   ├── kustomization.yaml
├── production
│   ├── kustomization.yaml
│   ├── patches
│   │   ├── example.yaml
└── staging
    ├── kustomization.yaml
    ├── patches
        ├── example.yaml
```
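To make the structure concrete, here is a sketch of what the production overlay could contain (the resource names and values are assumptions):

```yaml
# production/kustomization.yaml -- pulls in the base and applies the patches
resources:
  - ../base
patchesStrategicMerge:
  - patches/example.yaml
---
# production/patches/example.yaml -- only the fields that differ from the base
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 5
```

Running `kustomize build production` merges the patch into the base manifests, so the base stays plain YAML and each environment declares only its deltas.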
Helm is the other big contender in the templating arena, and there are pros and cons to each approach. In our use case, Helm added a lot of complexity for features we didn't really need. Helm becomes useful when you need more complex templating logic, or when you are deploying multiple copies of a specific "chart" into the same cluster with different parameters. There are many other use cases as well that are out of scope for this blog post.
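For contrast with Kustomize's untouched-base approach, a Helm chart embeds the templating syntax directly in the manifest. A hypothetical fragment (the chart layout and value names are assumptions) might look like:

```yaml
# templates/deployment.yaml inside a chart -- values come from
# values.yaml or are overridden per release with --set flags
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Because the release name is part of the template, you can install the same chart several times into one cluster with different parameters, which is exactly the scenario where Helm earns its complexity.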
Cert Manager and Nginx Ingress Controller
At SimpleNexus, most of our customers white-label our product, which means there are hundreds and hundreds of DNS names pointed at our servers. When SimpleNexus was a young company, each of those certificates was created and added to our infrastructure manually as a new client was onboarded; obviously, that doesn't scale. We had already automated this problem before switching to Kubernetes, but that solution wasn't going to transfer to the new infrastructure. Thankfully, cert-manager and the NGINX Ingress Controller came to our rescue. The whole process is simple: cert-manager watches for new ingress objects created in Kubernetes and automatically provisions their certificates. It also automatically renews those certificates as they approach their expiration dates. Once we had that set up, it was easy to create a small Kubernetes controller that lets our application manage the state of the ingress objects.
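As a sketch of the wiring (the issuer name, hostname, and backing service are assumptions; it presumes a ClusterIssuer named `letsencrypt-prod` already exists), an ingress that cert-manager will issue a certificate for looks roughly like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-example
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod  # tells cert-manager which issuer to use
spec:
  tls:
    - hosts:
        - client.example.com
      secretName: client-example-tls  # cert-manager stores the issued certificate here
  rules:
    - host: client.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```

Creating an object like this per white-label domain is all it takes; issuance and renewal happen without any manual steps.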
Another pain point we ran into was getting secrets into Kubernetes. We could spin up a new cluster or a new namespace, but every time we did, we had to go in and add the needed secrets manually. Obviously, that wasn't a great long-term solution, especially if you want everything to be automated. We discovered a project created by GoDaddy called kubernetes-external-secrets. It is a controller that runs in the cluster and connects to external secret stores like AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault. You define an ExternalSecret resource that points at the external store, and the controller pulls the values and creates the native Kubernetes Secret object for you.
```yaml
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  dataFrom:
    - hello-service/credentials
  data:
    - key: hello-service/migration-credentials
      name: password
      property: password
```
AWS NLB Helper Operator
Due to the limitations of AWS ALBs, we have to use NLBs to route traffic into our cluster. These load balancers aren't as feature-complete, probably because they are not as popular. We needed to turn on Proxy Protocol on the load balancers to preserve the client IP on requests going into the cluster, but the annotation in the EKS documentation didn't work for us. There was an open issue for this on GitHub for about three years. At the time of writing, work has been done on the AWS Load Balancer Controller to support this, but it hasn't been released yet. The AWS NLB Helper Operator provides a couple of annotations, along with the code to enable these features on NLBs automatically, so when we create a service in our cluster those features get enabled and nothing has to be done by hand. Once the AWS Load Balancer Controller supports these features, I'd recommend using it, but until then this project fills the need.
The other feature not currently supported is termination protection. If you are like me, there may or may not have been a time when you deleted a production resource, thinking it wasn't a production resource, only to see all your alerts immediately start firing and feel that pit in your stomach when you realized exactly what had happened. Of course, that is purely hypothetical. This project also provides a couple of other options not supported by default.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-api
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # helpers provided by the NLB Helper Operator
    aws-nlb-helper.3scale.net/enable-targetgroups-proxy-protocol: "true"
    aws-nlb-helper.3scale.net/enable-targetgroups-stickness: "true"
    aws-nlb-helper.3scale.net/loadbalanacer-termination-protection: "true"
    aws-nlb-helper.3scale.net/targetgroups-deregisration-delay: "450"
spec:
  type: LoadBalancer
  selector:
    deployment: deployment
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
```
There are times when you want to launch something in its own namespace but use environment variables and secrets that have already been pulled into the cluster. One example is Docker registry authentication: currently, EKS doesn't have a way to give all nodes access to a private Docker registry other than ECR. Creating the credentials in every namespace that needs them is an extra step that, while possible, is annoying. The kubernetes-replicator lets us create the secret once and then replicate it to the namespaces that need it by adding a simple annotation to the resource to be replicated. It is very easy to use and removes work that doesn't need to be done over and over.
```yaml
apiVersion: v1
kind: Secret
metadata:
  annotations:
    replicator.v1.mittwald.de/replicate-to: "my-ns-1,namespace-[0-9]*"
data:
  key: value
```