Passing variables to args field in a YAML file, Kubernetes

I am writing a YAML file for Kubernetes and I am wondering how to pass variables to the args field.
I need to do something like this:
args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11']
But I don't want to hard-code those values for --arg1 and --arg2; instead they should come from variables, something like:
args: ['--arg1='$HOST1,'--arg2='$HOST2]
How should I do this?

You have two options that are quite different and really depend on your use case, but both are worth knowing:
1) Helm lets you create templates of Kubernetes definitions that can use variables.
Variables are supplied when you install the Helm chart, before the resulting manifests are deployed to Kubernetes.
You can change the variables later on, but Helm then regenerates the YAML and re-deploys a "static" version of the result (template + variables = YAML that's sent to Kubernetes).
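As a minimal sketch of the Helm approach (the chart layout and value names host1/host2 are hypothetical):

# templates/deployment.yaml (snippet): Helm substitutes the values at render time
      containers:
        - name: app
          image: my-image            # placeholder image
          args:
            - "--arg1={{ .Values.host1 }}"
            - "--arg2={{ .Values.host2 }}"

# Installed with, for example:
#   helm install myapp ./mychart --set host1=http://12.12.12.12:8080 --set host2=11.11.11.11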
2) ConfigMaps allow you to separate configuration from the pod manifest and share this configuration across several pods/deployments.
You can later reference the ConfigMap from your pod/deployment manifests.
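As a sketch of the ConfigMap route (all names hypothetical): Kubernetes expands $(VAR) references in args using the container's environment variables, and those can be fed from a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-hosts
data:
  HOST1: "http://12.12.12.12:8080"
  HOST2: "11.11.11.11"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-image              # placeholder image
      env:
        - name: HOST1
          valueFrom:
            configMapKeyRef:
              name: app-hosts
              key: HOST1
        - name: HOST2
          valueFrom:
            configMapKeyRef:
              name: app-hosts
              key: HOST2
      # $(HOST1)/$(HOST2) are expanded by Kubernetes from the env vars above
      args: ["--arg1=$(HOST1)", "--arg2=$(HOST2)"]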
Hope this helps!

Related

Helmfile with additional resource without chart

I know this is maybe a weird question, but I want to ask if it's possible to also manage single resources (e.g. a ConfigMap/Secret) without a separate chart.
E.g. I am trying to install an nginx-ingress and would like to additionally apply a secret which includes HTTP basic-authentication data.
I can just reference the nginx-ingress repo directly in my helmfile, but do I really need to create a separate Helm chart just to apply the HTTP basic-auth secret?
I have many releases which need a single additional resource (like a JSON ConfigMap or a single Secret), and it would be cumbersome to always need a separate chart for each release.
Thank you!
Sorry, Helmfile only manages entire Helm releases.
There are a couple of escape hatches you might be able to use. Helmfile hooks can run arbitrary shell commands on the host (as distinct from Helm hooks, which usually run Jobs in the cluster) and so you could in principle kubectl apply a file in a hook. Helmfile also has some integration with Kustomize and it might be possible to add resources this way. As you've noted you can also write local charts and put whatever YAML you need in those.
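As a minimal sketch of the hook escape hatch (the chart reference and file name are hypothetical):

releases:
  - name: nginx-ingress
    chart: ingress-nginx/ingress-nginx
    hooks:
      - events: ["presync"]          # runs on the host, before the release is synced
        command: "kubectl"
        args: ["apply", "-f", "basic-auth-secret.yaml"]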
The occasional chart does support including either arbitrary extra resources or specific configuration content; the Bitnami MariaDB chart, to pick one, supports putting anything you want under an extraDeploy value. You could use this in combination with Helmfile values: to inject more resources:
releases:
  - name: mariadb
    chart: bitnami/mariadb
    values:
      - extraDeploy:
          - |-
            apiVersion: v1
            kind: ConfigMap
            ...

How to distribute n different configs to exactly n pods

I have a containerized daemon that I need to run one instance of for every thing. Each thing has a unique set of configs associated with it, but the container image is the same. The configs can be set simply as environment variables. I have a list of the configs, and I need to define the desired state as having exactly 1 pod running for each thing. What is the appropriate way to construct this in Kubernetes with or without Helm?
My understanding is that ReplicaSets and Deployments work on identical containers; in other words, they would all be spun up with the same environment variables? I understand that a StatefulSet may be able to represent this, but the daemons do not really need to hold state: they do not need persistent storage and they can be killed at will, so long as another with the same configs comes up soon afterwards.
One clue I was given by somebody was to use Helmfile or Helm partials. That is the extent of what they told me. I have not yet investigated whether those are appropriate or not.
You are correct that Deployments and ReplicaSets run identical containers, so the way I see it you have 2 options:
Deploy multiple Deployments with different configs defined in the values file:
You can see an example here, where multiple configs are set in the values file and {{ range }} is used to iterate and create multiple Deployments (see the sketch below).
Iterate over your configuration names/files using a scripting language of your choice and create a separate release for each of your configurations via the command line, for example: --set configName=
Personally, I would go with the 2nd option, since with multiple Helm releases you can harness the Helm CLI to better understand what is running and its state. Also, any CRUD action you would like to do would be less dangerous since the deployments are decoupled.
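For reference, a minimal sketch of the {{ range }} approach from the 1st option (all names are hypothetical):

# values.yaml: one entry per thing
things:
  - name: thing-a
    host: 11.11.11.11
  - name: thing-b
    host: 22.22.22.22

# templates/deployments.yaml: one single-replica Deployment per entry,
# same image everywhere, per-thing config via env vars
{{- range .Values.things }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: daemon-{{ .name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: daemon-{{ .name }}
  template:
    metadata:
      labels:
        app: daemon-{{ .name }}
    spec:
      containers:
        - name: daemon
          image: my-daemon:latest    # placeholder image, identical for every thing
          env:
            - name: HOST
              value: {{ .host | quote }}
---
{{- end }}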

Common config in Kubernetes ConfigMap

Kubernetes already provides a way to manage configuration with ConfigMap.
However, I have a question/problem here.
If I have multiple applications with different needs deployed in Kubernetes, all these deployments might share and access some common config variables. Is it possible for ConfigMap to use a common config variable?
There are two ways to do that.
Kustomize - customization of Kubernetes YAML configurations (developed as a Kubernetes SIG project, and since integrated into the kubectl command line). But currently it isn't mature enough compared with Helm charts.
https://github.com/kubernetes-sigs/kustomize
Helm chart - the Kubernetes package manager. Its values.yaml can define the values for the same configuration files (in your case, ConfigMaps) with variables.
https://helm.sh/
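Whichever tool generates the manifests, the shared values typically end up in one ConfigMap that every deployment references. A minimal sketch in plain Kubernetes (names are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: common-config
data:
  LOG_LEVEL: "info"
  REGION: "eu-west-1"
---
# In each deployment's container spec, pull in every key as an env var:
#   envFrom:
#     - configMapRef:
#         name: common-config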

Kubernetes equivalent of Terraform modules and variables

Does Kubernetes have a way of reusing manifests without copying and pasting them? Something akin to Terraform templates.
Is there a way of passing values between manifests?
I am looking to deploy the same service to multiple environments and wanted a way to call the necessary manifest and pass in the environment specific values.
I'd also like to do something like:
Generic-service.yaml
Name={variablename}
Foo-service.yaml
Use=General-service.yaml
variablename=foo-service-api
Any guidance is appreciated.
Kustomize, now part of kubectl (kubectl apply -k), is a way to parameterize your Kubernetes manifest files.
With Kustomize, you have a base manifest file (e.g. of a Deployment) and then multiple overlay directories for parameters, e.g. for test, qa and prod environments.
I would recommend having a look at Introduction to kustomize.
Before Kustomize it was common to use Helm for this.
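A minimal sketch of the base/overlay layout (directory and resource names are hypothetical):

# base/kustomization.yaml
resources:
  - deployment.yaml

# overlays/prod/kustomization.yaml
resources:
  - ../../base
namePrefix: prod-
configMapGenerator:
  - name: app-config
    literals:
      - DB_HOST=prod-db.example.com

# Applied with:
#   kubectl apply -k overlays/prod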

What's the best way for stage-specific K8s config?

Let's say we have to manage a database connection string for stages test, int and prod.
What are the patterns here for Kubernetes?
I would handle general configuration via ConfigMaps. Create a configuration for each environment and have your pods/deployments consume the values via environment variables.
This approach allows you to decouple your configuration from your k8s object definitions and gives you the ability to inject the required config per environment.
For sensitive data, which might include a username and password in a connection string for example, consider using Secrets instead.
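A minimal sketch of the Secret half (names and values are hypothetical); the test, int and prod variants would differ only in the data:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_CONNECTION: "postgres://app:secret@test-db:5432/app"
---
# Consumed in the deployment's container spec:
#   env:
#     - name: DB_CONNECTION
#       valueFrom:
#         secretKeyRef:
#           name: db-credentials
#           key: DB_CONNECTION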
The best way in my experience is to use a higher-level construct like a Helm chart. This way you manage all your manifests in a platform-agnostic way and make them configurable during chart install/update.
That way you can use ConfigMaps, Secrets or env vars, and populate them from values set during install/upgrade. With Helm, you would do it somewhat like this:
helm install -f values.yaml, where values.yaml contains all your non-default values (e.g. the db password)
helm upgrade <release> --reuse-values --set image.tag=1.0.1 to, say, release a new version while keeping all other values defined during the initial install.
For non-default components, e.g. a development database, you can use a value like devdb.enabled with a default of false and set it to true only in the dev environment where you want to launch the devdb pod and point your database service there (all the logic for it lives within the manifest templates in the Helm chart).
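A minimal sketch of such a toggle (chart structure and names are hypothetical):

# values.yaml (chart default)
devdb:
  enabled: false

# templates/devdb.yaml: only rendered when the flag is set,
# e.g. helm upgrade <release> --reuse-values --set devdb.enabled=true
{{- if .Values.devdb.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-devdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devdb
  template:
    metadata:
      labels:
        app: devdb
    spec:
      containers:
        - name: devdb
          image: postgres:15         # placeholder image
{{- end }}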