Is there a concept of inheritance for Kubernetes deployments?

Is there a way to create a tree of inheritance for Kubernetes deployments? I have a number of deployments which are similar but not identical. They share many ENV vars but not all. They all use the same image.
For example, I have a dev deployment which is configured almost identically to a production deployment but has env vars pointing to a different database backend. I have a Celery deployment which is configured the same as the production deployment but has a different run command.

Helm is what many people are using for this. It lets you create templates for Kubernetes descriptors and pass parameters in to generate descriptors from the templates.
There are other tools out there that can be used to generate variations on Kubernetes deployment descriptors by injecting parameters into templates; Ansible is also popular. But Helm is closely tied to Kubernetes and the CNCF community, and there's a good selection of official charts available.
EDIT: If the aim is to enable different deployments (e.g. for dev and prod) using a single Docker image, then that's probably best handled with a single chart. You can create a different values file for each deployment and supply the chosen file to helm install with the --values parameter. Parts of the chart that are only sometimes applicable can be wrapped in if conditions to turn them on or off.
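For illustration, here's a minimal sketch of what a per-environment values file might look like (the file names, keys, and URLs are made up for this example):

# values-dev.yaml -- one file per environment, same chart and image
image:
  repository: myapp
  tag: "1.0.0"
env:
  DATABASE_URL: postgres://dev-db.internal:5432/app
celery:
  enabled: false   # a values-prod.yaml could set this to true for the celery variant
# Deploy with the chosen file:
#   helm install ./mychart --values values-dev.yaml
# Optional parts of a template can be wrapped in a condition such as
#   {{- if .Values.celery.enabled }} ... {{- end }}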
On the subject of inheritance specifically, there's an example in the Helm documentation of how to take another chart as a parent/dependency and override its values. I created a chart earlier (available on GitHub) that includes several other charts and overrides parts of all of them via its values.yaml; it also shares some config between the included charts with globals. If you're looking to use a parent to reduce duplication rather than to join multiple apps, it is possible to create a base/wrapper chart, but it may turn out to be better to just duplicate config.
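As a rough sketch of that pattern (the chart and key names here are hypothetical), a parent chart declares the child as a dependency and then overrides the child's values under a key matching the child's name, while global values are visible to every chart:

# requirements.yaml of the parent chart (Helm 2; Helm 3 moves this into Chart.yaml)
dependencies:
  - name: childchart
    version: "0.1.0"
    repository: file://../childchart
# values.yaml of the parent chart
global:
  environment: prod      # shared with the parent and all subcharts
childchart:              # overrides values of the subchart named "childchart"
  replicaCount: 3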
EDIT (180119): Kustomize, an alternative approach, may soon become available directly in kubectl

You may also want to check out Kustomize. It provides support for writing your YAML manifests in hierarchical form, so that you don't have to repeat yourself.
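A minimal sketch of that hierarchy (the directory and file names are assumptions): a base directory holds the shared manifests, and each overlay layers environment-specific changes on top of it.

# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
# overlays/dev/kustomization.yaml
resources:
  - ../../base
patchesStrategicMerge:
  - env-patch.yaml       # e.g. points DATABASE_URL at the dev database
# Render with: kustomize build overlays/dev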

Related

What is the right way to manage changes in kubernetes manifests?

I've been using terraform for a while and I really like it. I also set up Atlantis so that my team could have a "GitOps" flow. This is my current process:
Add or remove resources from Terraform files
Push changes to GitHub and create a pull request
Atlantis picks up changes and creates a terraform plan
When the PR is approved, Atlantis applies the changes
I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:
Add the resource to an existing file or create a new file
Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file
If I'm using a Helm chart, add a line with helm template and then kubectl apply (I didn't really like using Tiller, and Helm 3 is getting rid of it anyway)
If I want to delete a resource, I do it manually with kubectl delete
This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:
There's no real dry-run. Using kubectl apply --dry-run or kubectl diff doesn't really help, since it only produces a client-side diff. Server-side diff functions are currently in alpha.
There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.
No clear way to achieve gitops. I've looked at Weaveworks Flux but that seems to be geared more towards deploying applications.
The Makefile is getting more and more complicated. It doesn't feel like this is scalable.
I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.
Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?
This is more of an opinion question, so I'll answer with an opinion. If you want to manage configuration, you can try some of these tools:
If you want to keep your existing YAML files (configurations) and use something at a higher level on top of them, you can try Kustomize.
If you want to manage Kubernetes configurations using Jsonnet, you should take a look at Ksonnet. Keep in mind that Ksonnet will no longer be supported going forward.
If you just want to run helm upgrade in an automated way, there is no ready-made tool for that yet; you will have to build something at this point to orchestrate everything. For example, we ended up creating an in-house tool that does this.

Reusing the same image, config, secrets for several different kubernetes services

We have a bunch of services that run off of the same Docker image: some long running services, some cron jobs, and a webservice.
I'm wondering what the current best practice here is. I essentially want some basic templating for reusing an image and its config, keeping all of the services at the same revision (so sirensoftitan-image:{gitsha1hash} is used without gitsha1hash being repeated everywhere).
Should I be using a helm chart? Kustomize? Some other type of yaml templating? I want something light with as little added complexity as possible.
I found Helm charts heavy compared to Kustomize. Give Kustomize a try; it's very simple and easy to use.
You can deploy the same template to different environments by adding new labels and prefixing the deployment objects' names with an environment value, which gives you a unique naming convention per environment.
Moreover, it uses the YAML format, which makes it easy to learn and adopt.
All custom configuration goes into one YAML file, unlike Helm, where you manage multiple files. I personally like Kustomize as it is simple and flexible and, not least, comes from the Google community. Give it a try.
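For example, a per-environment overlay along those lines might look like this (the names are placeholders); note that the images transformer also lets you pin the image tag in one place rather than repeating the git SHA in every manifest:

# overlays/prod/kustomization.yaml
namePrefix: prod-          # prefixes every object name for this environment
commonLabels:
  environment: prod
resources:
  - ../../base
images:
  - name: sirensoftitan-image
    newTag: "1a2b3c4"      # stand-in for the git SHA, set in exactly one place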

Create custom helm charts

I'm using Helm charts to create and deploy microservices. Executing helm create creates a basic chart with a deployment, service, and ingress, but I have a few other configurations such as a horizontal pod autoscaler and a pod disruption budget.
What I currently do is copy the YAML and change it accordingly, but this takes a lot of time and I don't see it as the correct way/best practice.
helm create <chartname>
I want to know how you can create Helm charts that include your extra configurations as well.
Bitnami's guide to creating your first helm chart describes helm create as "the best way to get started" and says that "if you already have definitions for your application, all you need to do is replace the generated YAML files for your own". The approach is also suggested in the official helm docs and the chart developer guide. So you are acting on best advice.
It would be cool if there were a wizard you could use to take existing Kubernetes YAML files and make a Helm chart from them. One tool like this that is currently available is chartify. It is listed on Helm's related projects page (and I couldn't see any others there that would be relevant).
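Concretely, extra resources can simply be added as templates next to the ones helm create generated. A rough sketch of a templated HorizontalPodAutoscaler (the values keys and the mychart helper name are assumptions):

# templates/hpa.yaml, placed alongside the generated deployment.yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "mychart.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "mychart.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  targetCPUUtilizationPercentage: {{ .Values.autoscaling.targetCPU }}
{{- end }}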
You can try using Move2Kube. Put all your YAMLs (if the source is Kubernetes YAMLs) or other source artifacts in a directory (say src) and run move2kube translate -s src/.
In the wizard that comes up, you can choose helm instead of yamls and it will create a Helm chart for you.

Structuring kubernetes configuration files

Say that I have 5 APIs that I want to deploy in a Kubernetes cluster; my question is simply what the best practice is for storing the YAML files related to Kubernetes.
In projects I've seen online, Kubernetes YAML files are just added to the API project itself. I wonder if it makes sense to decouple all files related to Kubernetes into an entirely separate "project", managed by VCS as a completely separate entity from the API projects themselves.
This question arises since I'm currently reading a book about Kubernetes, on the topic of namespaces, and considered that it might be a good idea to have separate namespaces per environment (DEV / UAT / PROD); it may then make sense to have these files in a centralized "Kubernetes" project (unless it might be better to have a separate cluster per environment?).
Whether to put the YAML in the same repo as the app is a question that projects answer in different ways. You might want to put them together if you find that you often change both at the same time, or if you just find it clearer to see everything in one place. You might separate them if you mostly work on the YAML separately, if you find it less cluttered, or if you want different visibility for it (e.g. different teams to look at it). If things get more sophisticated, you'll actually want to generate the YAML from templates and inject environment-specific configuration into it at deploy time (whether those environments are namespaces or clusters) - see Best practices for storing kubernetes configuration in source control for more discussion on this.
From Production k8s experience for CI/CD:
One cluster per environment, such as dev, stage, prod (optionally per data centre)
One namespace per project
One git deployment repo per project
One branch in the git deployment repo per environment
Use ConfigMaps for configuration aspects (see the sketch after this list)
Use a secret management solution to store and use secrets
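As a minimal sketch of the ConfigMap point (all names and URLs here are made up), per-environment configuration can live in a ConfigMap that the pod spec loads wholesale:

# configmap.yaml -- one per environment namespace/branch
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  DATABASE_URL: postgres://dev-db.internal:5432/app
# In the Deployment's container spec, every key becomes an env var via:
#   envFrom:
#     - configMapRef:
#         name: myapp-config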

How to manage more than 200 microservice with Helm?

I would like to know how you manage your services with Helm.
I already know that we are going to have more than 200 microservices. How can we manage them easily?
Each microservice with its own YAML files (deployment, service, ingress, values, etc.),
or several large YAML files (deployment, ingress, etc.) for all microservices, where I push a values YAML file with the specific params for each application?
I'd suggest aiming for an umbrella chart that includes lots of subcharts for the individual services. You can deploy each chart individually, but using a single umbrella makes it easier to deploy the whole setup consistently to different environments.
Perhaps some microservices will be similar enough that you could use the same chart for them with different parameters (maybe including the Docker image as a parameter), but you'll have to work through them to see whether you can do that. You can include the same chart as a dependency multiple times within an umbrella chart to represent different services.
Ideally you also want each service's chart to be individually deployable, so you can deploy and check that service in isolation. To do this you would give each chart its own resources, including its own Ingress. But you might decide that for the umbrella chart you prefer to disable the Ingresses in the subcharts and put in a single fan-out Ingress for everything - that comes down to what works best for you.
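For instance, the umbrella's dependency list could reuse one generic chart under different aliases, one per similar microservice (the chart names and structure here are hypothetical):

# Chart.yaml of the umbrella chart (Helm 3 layout; Helm 2 used requirements.yaml)
apiVersion: v2
name: platform
version: 0.1.0
dependencies:
  - name: generic-service
    alias: orders           # same chart, deployed as the orders service
    version: "0.1.0"
    repository: file://../generic-service
  - name: generic-service
    alias: billing          # ...and again as the billing service
    version: "0.1.0"
    repository: file://../generic-service
# The umbrella's values.yaml then configures each alias, e.g. disabling the
# subchart Ingresses in favour of a single fan-out Ingress:
#   orders:
#     image: { repository: orders-svc, tag: "1.0.0" }
#     ingress: { enabled: false }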