Use multiple value files on helm template - kubernetes-helm

I got into a scenario where I need to create a deployment template that uses multiple value files one after another, without merging or overriding.
The value files are specified in the Argo CD Application YAML file under the valueFiles parameter.
Can someone help me figure out how to handle this? I need to create a Helm template that takes the value files one after another, using a range function or something similar.
Thanks in advance
A simple example of passing multiple value files one after another to a Helm template would be appreciated.
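For context, this is roughly how the value files are declared in our Argo CD Application today (repo, path, and file names below are placeholders); as far as I understand, both Helm and Argo CD apply them in order and merge them, with later files overriding earlier ones:

# Hypothetical Argo CD Application snippet; repo, path and file names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  source:
    repoURL: https://example.com/my-charts.git
    path: my-chart
    helm:
      valueFiles:            # applied in order, later files override earlier ones
      - values-common.yaml
      - values-dev.yaml

The command-line equivalent would be helm template my-chart -f values-common.yaml -f values-dev.yaml, which merges the files the same way, so it seems I need some way to render the chart once per value file instead.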

Related

A very simple Kubernetes scheduler question about a custom scheduler

Just a quick question... Do you HAVE to remove or move the default kube-scheduler.yaml from the folder? Can't I just make a new YAML (with the custom scheduler) and run that in the pod?
Kubernetes isn't file-based. It doesn't care about the file location. You use the files only to apply the configuration onto the cluster via kubectl, kubeadm, or similar CLI tools or their libraries. The YAML file is only the content you manually put into it.
You need to know/decide what your folder structure and execution/configuration flow are.
Also, you can simply use a temporary file; the naming doesn't matter either, and it's fine to replace the content of a YAML file. Preferably, though, have some kind of history record in place, such as a manual note, a comment, or source control such as git, so you know what was changed and why.
So yes, you can change the scheduler YAML, or you can create a new file and reorganize things however you like, but you will need to adjust your flow to that - change paths, etc.
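For example (a minimal sketch; the scheduler name is an assumption), a pod opts into a custom scheduler through spec.schedulerName, regardless of which file the manifest lives in:

# Hypothetical pod manifest; "my-custom-scheduler" is whatever name your custom scheduler registers with.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom
spec:
  schedulerName: my-custom-scheduler   # the default kube-scheduler ignores pods that request another scheduler
  containers:
  - name: app
    image: nginx:1.25

Applying this file with kubectl apply -f is enough; the file name and location play no role once the object is in the cluster.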

Move variable groups to the code repository and reference them from YAML pipelines

We are looking for a way to move the non-secret variables from variable groups into the code repositories.
We would like to be able to:
track changes to all the settings in the code repository
version the variable values together with the source code and the pipeline code
Problem:
We have over 100 variable groups defined, which are referenced by over 100 YAML pipelines.
They are injected at different pipeline/stage/job levels depending on the environment/component/stage they operate on.
Example problems:
a variable can be renamed or removed; the pipeline that targets the PROD environment may still reference it while the pipeline that deploys to DEV no longer does
a particular pipeline run used the variable values as of some date in the past; it is good to know with which set of settings it was deployed back then
Possible solutions:
It should be possible to use simple YAML variable template files to mimic the variable groups and include those templates in the main YAML pipelines, using this approach: Variable reuse.
# File: variable-group-component.yml
variables:
  myComponentVariable: 'SomeVal'

# File: variable-group-environment.yml
variables:
  myEnvVariable: 'DEV'

# File: azure-pipelines.yml
variables:
- template: variable-group-component.yml  # Template reference
- template: variable-group-environment.yml  # Template reference
# some stages/jobs/steps:
In theory, it should be easy to transform the variable groups into YAML template files and reference them from the pipeline YAML instead of referencing the variable group.
# Current reference we use
variables:
- group: "Current classical variable group"
However, even without implementing this approach, we hit the following limit in our pipelines: "No more than 100 separate YAML files may be included (directly or indirectly)"
YAML templates limits
Taking into account the requirement that we would like the variable groups to stay logically granular and separated, and not stored in one big YAML file (in order not to hit another limit on the number of variables in a job agent), we cannot go this way.
The second approach would be to add a simple script (PowerShell?) which consumes a key/value metadata file with variable records (variableName/variableValue) and, in a job step, emits a command such as
##vso[task.setvariable variable=one]secondValue
for each record. But it can only be done at the job level, as a first step, and it looks like re-engineering the variable group mechanism that Azure DevOps provides natively.
We are not sure this approach would work everywhere the variables are currently used in the YAML pipelines; in some places they are passed as arguments to tasks, etc.
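A minimal sketch of that second approach (the JSON file name and its flat key/value shape are assumptions), as an inline PowerShell step that reads the file from the repository and emits logging commands:

# Hypothetical first step of a job; assumes a flat key/value JSON file checked into the repo.
steps:
- pwsh: |
    $settings = Get-Content -Raw -Path 'config/variables.dev.json' | ConvertFrom-Json
    foreach ($property in $settings.PSObject.Properties) {
      # task.setvariable makes the value available to the subsequent steps of the same job
      Write-Host "##vso[task.setvariable variable=$($property.Name)]$($property.Value)"
    }
  displayName: 'Load variables from repository'

This still has the limitation described above: the variables only exist from this step onward, within the job that ran it.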
Move all the variables into Key Vault secrets? We abandoned this option at the beginning, as Key Vault is a place to store sensitive data, not settings that may be visible to anyone. Moreover, storing them as secrets causes the pipeline logs to print *** instead of the real configuration values, which obfuscates the pipeline run log.
Questions:
Q1. Do you have any other propositions/alternatives on how the variables versioning/changes tracking could be achieved in Azure DevOps YAML pipelines?
Q2. Do you see any problems with the second possible solution, or do you have better ideas?
You can consider this as an alternative:
Store your non-secret variables in a JSON file in a repository
Create a pipeline to push the variables to App Configuration (instead of a Key Vault)
Then, if you need these settings in your app, reference App Configuration from the app instead of running a replacement task in Azure DevOps. Or, if you need the settings directly in the pipelines, pull them from App Configuration.
Drawbacks:
the same one you mentioned for the PowerShell case: you need to do it at the job level
What you get:
change tracking in the repo
tracking in App Configuration, plus all the other benefits of App Configuration
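As a sketch of the pipeline that pushes the repository file into App Configuration (the store name, service connection, and file path are assumptions):

# Hypothetical pipeline step; replace the service connection, store name and path with your own.
steps:
- task: AzureCLI@2
  displayName: 'Import settings into App Configuration'
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az appconfig kv import --name my-app-config-store \
        --source file --path config/settings.json --format json --yes

Run on every change of the JSON file, this keeps App Configuration in sync with what is versioned in the repository.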

What are configmap.yaml and job.yaml?

I'm trying to create a Kafka cluster automatically instead of creating it manually. I'm using the stable chart: https://github.com/helm/charts/tree/master/stable/kafka-manager
In the templates folder there are two .yaml files, configmap.yaml and job.yaml. What are these files, and what are their roles?
A ConfigMap is just a way to store non-confidential data in key-value pairs; you can also consume this data as environment variables from your pods (it doesn't provide secrecy or encryption!).
job.yaml defines a Job, a supervisor for pods carrying out a batch process, that is, a process that runs for a certain time to completion, for example a calculation or a backup operation.
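For illustration (generic placeholders, not the actual contents of the chart's templates), a minimal ConfigMap and Job look like this:

# Minimal ConfigMap: plain key-value configuration data that pods can mount or read as env vars.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  LOG_LEVEL: "info"
  ZK_HOSTS: "zookeeper:2181"

# Minimal Job: runs a pod to completion once, e.g. a setup or migration task.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-setup
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: setup
        image: busybox:1.36
        command: ["sh", "-c", "echo running one-off setup"]

In a Helm chart, both files are templates, so their values are filled in from values.yaml when the chart is rendered.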
Hope this answers your question; let me know if you need anything else. :)

Populating a config file when a pod starts

I am trying to run a legacy application inside Kubernetes. The application consists of one or more controllers, and one or more workers. The workers and controllers can be scaled independently. The controllers take a configuration file as a command line option, and the configuration looks similar to the following:
instanceId=hostname_of_machine
Memory=XXX
....
I need to be able to populate the instanceId field with the name of the machine, and this needs to be stable over time. What are the general guidelines for implementing something like this? The application doesn't support environment variables, so my first thought was to record the StatefulSet's stable network ID in an environment variable and rewrite the configuration file with an init container. Is there a cleaner way to approach this? I haven't found a whole lot of solutions when I searched the net.
Nope, that's the way to do it (use an initContainer to update the config file).
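A minimal sketch of that pattern (file paths, images, and the ConfigMap holding the template are assumptions): the downward API puts the pod name into an environment variable, and an init container renders the final config onto a shared volume before the application starts.

# Hypothetical StatefulSet: an init container rewrites instanceId using the stable pod name.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: controller
spec:
  serviceName: controller
  replicas: 2
  selector:
    matchLabels:
      app: controller
  template:
    metadata:
      labels:
        app: controller
    spec:
      initContainers:
      - name: render-config
        image: busybox:1.36
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # stable name like controller-0, controller-1
        command:
        - sh
        - -c
        - sed "s/^instanceId=.*/instanceId=${POD_NAME}/" /config-template/app.conf > /config/app.conf
        volumeMounts:
        - name: config-template
          mountPath: /config-template
        - name: config
          mountPath: /config
      containers:
      - name: controller
        image: legacy-app:latest            # assumption: your application image
        args: ["--config", "/config/app.conf"]
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config-template
        configMap:
          name: controller-config-template  # assumption: holds the app.conf template
      - name: config
        emptyDir: {}

Because a StatefulSet gives each pod a stable, ordinal-based name, the rewritten instanceId stays the same across restarts.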

Is it possible for Deis Workflow read values from ConfigMap?

I have installed Deis Workflow v2.11 in a GKE cluster, and some of our applications share common values, like a proxy URL and credentials. I can use these values by putting them into environment variables, or even into a .env file.
However, for every new application I need to create a .env file with the shared values and then call
deis config:push
If one of those shared values changes, I need to adjust the configuration of every app and restart them. I would like to modify the value in a ConfigMap once and have Deis restart the applications after the change.
Does anyone know if it is possible to read values from a Kubernetes ConfigMap and put them into Deis environment variables? And if so, how do I do it?
I believe what you're looking for is a way to set environment variables globally across all applications. That is currently not implemented. However, please feel free to hack up a PR and we'd likely accept it!
https://github.com/deis/controller/issues/383
https://github.com/deis/controller/issues/1219
Currently there is no support for ConfigMaps in Deis Workflow v2.18.0. We would appreciate a PR to Hephy Workflow (the open source fork of Deis Workflow): https://github.com/teamhephy/controller
There is no functionality right now to pull a ConfigMap in via the init scripts of the containers.
You could update the configMap, but each of the applications would need to run kubectl replace -f path/accessible/for/everyone/configmap.yaml to get the variables updated.
So, I would say yes, at the Kubernetes level you can do it. Just figure out the best way for your apps to update the ConfigMap. I don't have the details of your use case, so I can't suggest specific approaches.
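If you do go that route, here is a sketch of what the shared ConfigMap could look like (name, namespace, and keys are placeholders); you would edit this file, re-apply it with kubectl replace -f as described above, and then push or restart each app so it picks up the new values:

# Hypothetical shared configuration for several apps; keys and values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-app-config
  namespace: deis
data:
  PROXY_URL: "http://proxy.internal:3128"
  FEATURE_FLAGS: "beta-search=off"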