Reusing the same image, config, and secrets for several different Kubernetes services

We have a bunch of services that run off of the same Docker image: some long running services, some cron jobs, and a webservice.
I'm wondering what the current best practice here is. I essentially want some basic templating for reusing an image and its config, keeping all of them at the same revision (so sirensoftitan-image:{gitsha1hash} is used everywhere without the gitsha1hash being repeated by hand).
Should I be using a helm chart? Kustomize? Some other type of yaml templating? I want something light with as little added complexity as possible.

I found Helm charts heavy compared to Kustomize. Give Kustomize a try; it is very simple and easy to use.
You can deploy the same manifests to different environments by adding labels and prefixing the object names with an environment value, which gives you a unique naming convention per environment.
Moreover, it uses plain YAML, which makes it easy to learn and adopt.
All custom configuration goes into one YAML file (kustomization.yaml), unlike Helm, where you manage multiple files. I personally like Kustomize because it is simple, flexible and, not least, comes from the Google/Kubernetes community. Give it a try.
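For illustration, a minimal sketch of such an overlay (the directory layout, prefix and tag are assumptions, not from the question):

```yaml
# overlays/staging/kustomization.yaml -- hypothetical layout
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Reuse the shared manifests (Deployment, CronJobs, Service, ...) as a base.
resources:
  - ../../base

# Prefix object names and add labels so each environment is uniquely identified.
namePrefix: staging-
commonLabels:
  environment: staging

# Pin every use of the shared image to one revision, set in exactly one place.
images:
  - name: sirensoftitan-image
    newTag: 1a2b3c4   # the git SHA; can be updated with `kustomize edit set image`
```

`kubectl apply -k overlays/staging` then renders and applies everything at that revision.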

Related

Manage k8s secrets with Kustomize for microservices

Kustomize secrets seem to work fine in a mono-repo scenario with all the deployment config together. How does one deal with microservices where each component is in its own repo? I could move the manifests together into a devops repo, but it seems odd to separate the manifests from their respective components.
It depends a lot on how you manage your configuration. In my case, each of my service repositories is basically a base (in Kustomize parlance). I don't include any secrets in them.
My overall production or testing environment is an overlay that includes all the bases or overlays it needs; in this case, those bases and overlays are my services. I include the secrets directly in the environment overlay.
At this point you have probably realized that you need a way to specify your secret names, or some placeholder, in your bases or service repositories. There are a few solutions:
you could patch all the resources that reference your secrets, but that is a lot of work;
you can define a naming convention for your secrets so you know in advance what each secret name will be (that is the way I usually go about it).
If you use the Kustomize secret generator, you'll pretty much be stuck with the second solution.
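As a rough sketch of that second option (the repository URLs, file names and secret name are invented for illustration):

```yaml
# environments/production/kustomization.yaml -- hypothetical environment overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Each service repository is pulled in as a base (remote bases shown here).
resources:
  - github.com/example-org/service-a//deploy?ref=v1.2.0
  - github.com/example-org/service-b//deploy?ref=v0.9.3

# The bases reference a Secret by the agreed naming convention, e.g. "app-credentials".
secretGenerator:
  - name: app-credentials
    envs:
      - secrets.env      # lives only in the environment repo, not in the service repos

# Without this, the generator appends a content hash to the secret name; disabling it
# keeps the plain, conventional name the bases expect.
generatorOptions:
  disableNameSuffixHash: true
```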

What is the right way to manage changes in kubernetes manifests?

I've been using terraform for a while and I really like it. I also set up Atlantis so that my team could have a "GitOps" flow. This is my current process:
Add or remove resources from Terraform files
Push changes to GitHub and create a pull request
Atlantis picks up changes and creates a terraform plan
When the PR is approved, Atlantis applies the changes
I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:
Add the resource to an existing file or create a new file
Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file
If I'm using a helm chart, add a line with helm template and then kubectl apply (I didn't really like using tiller, and helm3 is getting rid of it anyway)
If I want to delete a resource, I do it manually with kubectl delete
This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:
There's no real dry-run. Using kubectl --dry-run or kubectl diff doesn't really work; it's only a client-side diff. Server-side diff functions are currently in alpha
There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.
No clear way to achieve gitops. I've looked at Weaveworks Flux but that seems to be geared more towards deploying applications.
The makefile is getting more and more complicated. It doesn't feel like this is scalable.
I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.
Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?
This is more of an opinion question so I'll answer with an opinion. If you like to manage configuration you can try some of these tools:
If you want to use existing YAML files (configurations) and use something at a higher level you can try kustomize.
If you want to manage Kubernetes configurations using Jsonnet you should take a look at Ksonnet. Keep in mind that Ksonnet will not be supported in the future.
If you just want to run a helm update automatically, there is no ready-made tool for that yet. You will have to build something yourself to orchestrate everything; for example, we ended up creating an in-house tool that does this.
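To make the Kustomize suggestion concrete, here is a minimal sketch of a kustomization that layers over existing YAML files (the file names are placeholders):

```yaml
# kustomization.yaml -- wraps the plain manifests currently applied from the Makefile
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml

# Cross-cutting tweaks (labels, name prefixes, image tags) live here instead of in
# the Makefile.
commonLabels:
  app.kubernetes.io/managed-by: kustomize

# Kustomize is built into kubectl, so previewing and applying becomes:
#   kubectl diff -k .
#   kubectl apply -k .
```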

How to manage more than 200 microservices with Helm?

I would like to know how you manage your services with Helm.
I already know that we are going to have more than 200 microservices. How can we manage them easily?
Should each microservice have its own YAML files (deployment, service, ingress, values, etc.),
or should there be a few large YAML files (deployment, ingress, etc.) shared by all microservices, with me pushing a values YAML file containing the specific params for each application?
I'd suggest aiming for an umbrella chart that includes lots of subcharts for the individual services. You can deploy each chart individually but using a single umbrella makes it easier to deploy the whole setup consistently to different environments.
Perhaps some microservices will be similar enough that for them you could use the same chart with different parameters (maybe including docker image parameter) but you'll have to work through them to see whether you can do that. You can include the same chart as a dependency multiple times within an umbrella chart to represent different services.
Ideally you also want a chart for a service to be individually deployable, so you can deploy and check that service in isolation. To do this you would give each chart its own resources, including its own Ingress. But you might decide that for the umbrella chart you prefer to disable the Ingresses in the subcharts and put in a single fan-out Ingress for everything; that comes down to what works best for you.
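As a hedged sketch, the umbrella chart's Chart.yaml could declare its subcharts like this (chart names, versions and repository URLs are invented for illustration):

```yaml
# Chart.yaml for the umbrella chart (Helm 3 style)
apiVersion: v2
name: platform
version: 0.1.0
dependencies:
  - name: orders
    version: 1.4.0
    repository: "file://../charts/orders"     # local subchart, or a chart repo URL
  - name: payments
    version: 2.0.1
    repository: "https://charts.example.com"
  # The same generic chart reused for two similar services via aliases:
  - name: generic-service
    alias: emails
    version: 0.3.0
    repository: "https://charts.example.com"
  - name: generic-service
    alias: notifications
    version: 0.3.0
    repository: "https://charts.example.com"
    condition: notifications.enabled          # lets you switch a subchart off
```

The umbrella's values.yaml then carries one section per alias (image, ingress on/off, and so on), which is also where a single fan-out Ingress would be configured.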

Is there a concept of inheritance for Kubernetes deployments?

Is there a way to create a tree of inheritance for Kubernetes deployments? I have a number of deployments which are similar but not identical. They share many ENV vars but not all. They all use the same image.
For example, I have a dev deployment which is configured almost identically to a production deployment but has env vars pointing to a different database backend. I have a celery deployment which is configured the same as the production deployment; however, it has a different run command.
Helm is what many people are using for this. It lets you create templates for Kubernetes descriptors and pass parameters in to generate descriptors from the templates.
There are other tools out there which can be used to generate variations on Kubernetes deployment descriptors by injecting parameters into templates. Ansible is also popular. But Helm is closely connected with the CNCF and the Kubernetes community, and there's a good selection of official charts available.
EDIT: If the aim is to enable different deployments (e.g. for dev and prod) using a single Docker image, then that's probably best handled with a single chart. You can create different values files for each deployment and supply the chosen values file to helm install with the --values parameter. If there are parts of the chart that are only sometimes applicable, then they can be wrapped in if conditions to turn them on/off.
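A small sketch of that (the keys and file names are assumptions about the chart, not something defined above):

```yaml
# values-dev.yaml (example keys; the chart's templates decide what they mean)
database:
  url: postgres://dev-db.internal:5432/myapp
ingress:
  enabled: false

# values-prod.yaml would carry the production URL and ingress.enabled: true, and the
# file is chosen per deployment (Helm 3 syntax):
#   helm upgrade --install myapp ./chart --values values-dev.yaml
#   helm upgrade --install myapp ./chart --values values-prod.yaml
```

Optional parts such as the Ingress template can then be wrapped in `{{- if .Values.ingress.enabled }}` ... `{{- end }}` so they are only rendered when the chosen values file enables them.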
On the subject of inheritance specifically, there's an example in the Helm documentation of how to take another chart as a parent/dependency and override its values. I created a chart earlier, which you can see on GitHub, that includes several other charts and overrides parts of all of them via the values.yml; it also shares some config between the included charts with globals. If you're looking to use a parent to reduce duplication rather than join multiple apps, then it is possible to create a base/wrapper chart, but it may turn out to be better to just duplicate config.
EDIT (180119): The alternative of Kustomize may soon become available in kubectl.
You may also want to check out Kustomize. It provides some support for writing your YAML manifests in hierarchical form, so that you don't have to repeat yourself.
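As a rough sketch of that hierarchical form for the question above (all names and paths are invented): a shared base holds the Deployment with the common image and env vars, and each variant is a small overlay that patches only what differs.

```yaml
# overlays/celery/kustomization.yaml -- reuse the shared base, change only the command
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base            # shared Deployment, same image for every variant

nameSuffix: -celery

patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: myapp
      spec:
        template:
          spec:
            containers:
              - name: myapp
                command: ["celery", "-A", "myapp", "worker"]
```

A dev overlay would look the same but patch only the env vars that point at the dev database.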

Helm vs Replace Tokens in VSTS

I have been asked to set up CI/CD for a new app using VSTS and Kubernetes.
It was suggested to me that we could use Helm (but it was made clear it was not mandatory).
The value I am seeing for this tool in our project is to define different values for different environments e.g. database connection string.
But for that we can also use the Replace Tokens VSTS task which is a lot simpler.
A definition I found explains that Helm is a chart manager, and that it sort of ties together all the resources of a system to deploy them to Kubernetes.
Our system is just one web API (it could grow later), so I feel deploying with Helm would be over-engineering the deployment process. Plus, we need this for yesterday.
Question
According to the current context, should I go with Replace Tokens VSTS task or Helm?
It mainly depends on your requirements: for example, which is easier to deploy, which is easier to manage, which you are more familiar with, or which copes better with changing requirements.
You can also build a custom task to achieve it.
I would go for Helm because it gives you more flexibility and it's more cross-platform; moreover, when adding more APIs/components or microservices it will be easier to control configuration (a single or multiple values.yaml, using git submodules for Helm charts, and so on).
It certainly requires a slightly bigger time investment than simple value substitution in your CI/CD tool, but it has a potential payback that far outweighs the effort (again, based on my experience and the limited information about your environment).
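As a rough comparison under assumed names, the connection string example looks like this with Helm, compared with a token placeholder:

```yaml
# values.staging.yaml (key names are assumptions -- the chart defines what they mean)
connectionString: "Server=staging-sql;Database=app;User Id=app;Password=..."

# The chart's deployment template consumes the same key once:
#   env:
#     - name: ConnectionStrings__Default
#       value: {{ .Values.connectionString | quote }}
#
# The Replace Tokens alternative instead keeps a placeholder such as #{ConnectionString}#
# in the raw manifest and substitutes it in the pipeline before kubectl apply (the exact
# delimiters depend on how the task is configured).
```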
I'm curious, what did you end up using?