Best practices for storing Kubernetes configuration in source control

There is no established standard yet, I believe. I find Helm's charts too complicated to start with, especially since it means running another unmanaged component on the k8s cluster. Here is a workflow we follow that works quite well for a setup of roughly 15 microservices and 5 environments (two dev, staging, qa, prod).

The 2 key ideas:

  1. Store the Kubernetes configuration in the same source repository that holds the rest of the build tooling, e.g. alongside the microservice source code that already contains the tooling for building and releasing that particular microservice.
  2. Template the Kubernetes configuration with something like Jinja and render the templates according to the environment you are targeting (sketched below).

The tooling is reasonably straightforward to put together with a few bash scripts, or by integrating the steps into a Makefile, etc.
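As a rough sketch of idea 2 (the file name, variable names, and image name below are made up for illustration), a Jinja-templated Deployment stored next to the service's code might look like the following; the CI tooling renders it with the values for the target environment and then applies the result with kubectl apply:

    # k8s/deployment.yaml.j2 -- hypothetical template kept in the service's repository
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service
      namespace: {{ namespace }}        # e.g. dev1, dev2, staging, qa, prod
    spec:
      replicas: {{ replicas }}          # small in dev, larger in prod
      selector:
        matchLabels:
          app: my-service
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
            - name: my-service
              image: registry.example.com/my-service:{{ image_tag }}  # tag supplied by the CI build
              env:
                - name: LOG_LEVEL
                  value: "{{ log_level }}"    # per-environment setting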

EDIT: to answer some of the questions from the comments:

The application source code repository is the single source of truth. That means that, if everything works as it should, changes never need to flow from the Kubernetes cluster back to the repository.

Making changes directly on the cluster is prohibited in our workflow. If it ever does happen, we have to manually make sure those changes are ported back into the application repository.

Again, note that the configurations stored in source control are actually templates and use secretKeyRef quite liberally. This means that some values are injected by the CI tooling when the templates are rendered, while others come from Secrets that live only on the cluster (database passwords, API tokens, etc.).
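For example (the names here are illustrative), a rendered Deployment might take its image tag from CI while pulling the database password from a Secret that exists only on the cluster:

    containers:
      - name: my-service
        image: registry.example.com/my-service:1.2.3   # value filled in by CI at render time
        env:
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: my-service-secrets   # Secret created directly on the cluster, never stored in git
                key: db-password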


I think that Helm is going to become the standard way to create application installers for Kubernetes clusters. I'll try to create my own chart to parameterize my app deployments.


Use a separate repository to store configuration.

If you have multiple microservices to orchestrate, none of them is authoritative over the configuration, especially when you run multiple configurations in parallel, e.g. for canary testing.

Helm (https://helm.sh/) helps you propagate constants through the configurations of multiple microservices. Again, this indicates that those constants/parameters are independent of any single codebase.
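For instance (the chart layout and value names here are hypothetical), an umbrella chart's values.yaml can define values under the global: key, which Helm exposes to every subchart as .Values.global.*:

    # values.yaml of an umbrella chart with several microservice subcharts
    global:
      imageRegistry: registry.example.com
      environment: staging

    # inside any subchart's templates, e.g. templates/deployment.yaml
    image: {{ .Values.global.imageRegistry }}/orders-service:1.0.0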


In my opinion, Helm is to Kubernetes as Docker Compose is to Docker.

There is no reason to fear Helm: in its most basic functionality, all it does is something similar to running kubectl apply -f on your templates.

Once you are familiar with Helm, you can start using values.yaml and injecting values into your Kubernetes templates for maximum flexibility. For example:

values.yaml:

    name: my-name

inside templates/deployment.yaml:

    name: {{ .Values.name }}

https://helm.sh/

Here are some approaches to using Helm for "infrastructure as code". Regardless of which approach you use, remember that you can also maintain a Helm repository to distribute your charts.

  1. Create a helm subdirectory in each project, the same way you might include a docker-compose.yml file (a minimal chart skeleton for this layout is sketched after this list).

  2. Create a separate repository for each chart and control it independently of the application code. This may be a better approach when code and infrastructure are managed by separate teams.

  3. Store all Helm charts in a central repository. This is useful for easily distributing your charts, but may cause confusion when many teams are working on different charts.

  4. If you want the benefits of method 3 with the clear ownership of method 2, you can use method 2 and additionally create a git repository of submodules that pull in the individual chart repositories, each maintained by its appropriate owners.
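Whichever approach you pick, the chart itself stays small. For method 1, the helm subdirectory only needs something like the following Chart.yaml (names and versions here are placeholders), plus a values.yaml and a templates/ directory next to it:

    # <project>/helm/my-service/Chart.yaml
    apiVersion: v2          # chart API version (use v1 for older Helm 2 charts)
    name: my-service
    description: Chart shipped alongside the application source code
    version: 0.1.0          # chart version, bumped independently of the application
    appVersion: "1.0.0"     # version of the application that the chart deploys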