Kubernetes: calling multiple values with the same name in Helm charts - kubernetes-helm

We have two networks that are supposed to use the same name, ClientApp, but different values in the values.yaml file. network_a works when the user is on a particular network.
For example, a user on a laptop connected to network_a can access the application at the network-a.co.uk URL. The issue is that another type of laptop requires network_b, and we do not want to override network_a in the values.yaml file, but rather also define network_b so the application can be accessed at network-b.co.uk.
deployment.yaml
env:
  - name: ClientApp
    value: {{ .Values.env.network_a }}
# how to define network_b in this deployment template?
values.yaml
env:
  network_a: https://network-a.co.uk
  network_b: https://network-b.co.uk
Since we cannot use the same variable name twice in deployment.yaml, how would I make network_b available to the deployment under the same name, ClientApp?
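One possible approach (a minimal sketch, not from the original question: it assumes you are free to add a selector value, here called env.activeNetwork, to values.yaml) is to pick which URL the single ClientApp variable receives at install time:

env:
  - name: ClientApp
    {{- if eq .Values.env.activeNetwork "network_b" }}
    value: {{ .Values.env.network_b }}
    {{- else }}
    value: {{ .Values.env.network_a }}
    {{- end }}

The network_b variant could then be installed with --set env.activeNetwork=network_b, while the default still points at network_a.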

Related

How to add custom templates to bitnami helm chart?

I'm deploying a Spring Cloud Data Flow cluster on Kubernetes with Helm and the chart from Bitnami. This works fine.
Now I need an additional template to add a route. Is there a way to somehow add this, or inherit from the Bitnami chart and extend it? Of course I'd like to reuse all of the variables which are already defined for the Spring Cloud Data Flow deployment.
That chart has a specific extension point for doing things like this. The list of "Common parameters" in the linked documentation includes the line:
extraDeploy - Array of extra objects to deploy with the release (default: [])
The implementation calls through to a helper in the Bitnami Common Library Chart that calls the Helm tpl function on the value, serializing it to YAML first if it's not a string, so you can use Helm templating within that value.
So specifically for the Bitnami charts, you can include an extra object in your values.yaml file:
extraDeploy:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: '{{ include "common.names.fullname" . }}'
    ...
As a specific syntactic note, the value of extraDeploy is a list of either strings or dictionaries, but any templating is rendered after the YAML is parsed; this is different from the normal Helm template flow. In the example above I've included a YAML object, but then quoted a string value that begins with a {{ ... }} template, lest it otherwise be parsed as a YAML mapping. You could also force the whole thing to be a string, though it might be harder to work with in an IDE.
extraDeploy:
  - |-
    metadata:
      name: {{ include "common.names.fullname" . }}
You can just create the YAML template file in the templates folder and it will get deployed with the chart.
You can also edit an existing YAML template accordingly and extend it; there is no need to inherit anything.
For example, if you want to add an Ingress to your chart, add an ingress template and the respective values block in the values.yaml file (a sketch follows below).
You can copy this whole YAML template into the templates folder: https://github.com/helm/charts/blob/master/stable/ghost/templates/ingress.yaml
and add the specific values.yaml block for the ingress.
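As a rough illustration (a sketch only, not taken from the ghost chart; the ingress.hostname value and the backend service name are placeholders), a minimal ingress template plus its values block could look like this:

templates/ingress.yaml:
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
    - host: {{ .Values.ingress.hostname }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # placeholder: point this at the Service your chart actually creates
                name: {{ .Release.Name }}
                port:
                  number: 80
{{- end }}

values.yaml:
ingress:
  enabled: true
  hostname: example.local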
Or, for example, if your chart doesn't have any Deployment and you want to add one, you can write your own template or adapt one from the internet.
Deployment: https://github.com/helm/charts/tree/master/stable/ghost/templates
There is a deployment.yaml file template there; pull the specific variables that the template uses into your values.yaml and you have extended the chart successfully.

Retrieve database service name of helm subchart

I am installing postgres as a dependency in my helm chart, and need to retrieve the connection details.
Postgres connection URIs in kubernetes are of the form:
postgres://username:password@servicename.namespace.svc.cluster.local:port/dbname
The username, password, namespace, port, and dbname are all easily accessible through .Values.postgresql...., and .Release.Namespace, but the service name is initialized using the subchart template common.names.fullname.
Accessing subchart templates is surprisingly not a thing, and probably wouldn't work anyway due to context changes.
What's a simple way to configure my application to access the database?
I'm usually content to observe that the subchart obeys the typical Helm convention of naming its objects {{ .Release.Name }}-{{ .Chart.Name }}. If the subchart uses that convention, and I'm getting the database from a subchart named postgresql, then I can hard-code that value in my template code:
- name: PGHOST
  value: '{{ .Release.Name }}-postgresql'
My experience has generally been that the Bitnami charts (and the older stable charts) have been pretty good about using semantic versioning, so if this name changes then the major version of the chart will change.
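Building on that convention, the full connection URI can be assembled in the same template (a sketch only; the .Values.postgresql.auth.* paths are assumptions about how a particular PostgreSQL subchart exposes its credentials and may differ between chart versions):

- name: DATABASE_URL
  value: >-
    postgres://{{ .Values.postgresql.auth.username }}:{{ .Values.postgresql.auth.password }}@{{ .Release.Name }}-postgresql.{{ .Release.Namespace }}.svc.cluster.local:5432/{{ .Values.postgresql.auth.database }}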

How do I pass a standalone MySQL container as a dependency to a service in kubernetes-helm?

I have a service for which a Helm chart has been generated. This Helm chart spins up ZooKeeper and MySQL containers with which the service communicates.
I now want to create a Helm chart that spins up a platform of services, of which the above service is one. When I attempt to do this, I use tags to disable the above service's dependencies that are listed in the Helm chart, like so:
tags:
  service-zookeeper: false
  service-mysql: false
Now, I have a few init containers (Liquibase) that populate the MySQL instances created via dependencies whenever the service is deployed. I need to pass a separate, standalone MySQL container as the instance of MySQL that this init container needs to populate. A similar chroots job for ZooKeeper exists.
The problem I need help tackling is that I can't seem to find a way to pass the separate MySQL container as the container that needs to be populated by the first service's Liquibase init container. Is there any way to do so? Any help/insights are appreciated.
You just need the MySQL Service's hostname and credentials for this.
Remember that the Helm YAML templates can use everything in the Go text/template language. That includes conditionals {{ if ... }}...{{ else }}...{{ end }}, among other control structures, plus most of the support functions in the Sprig library. This can become verbose, but neatly solves this class of problem.
For the host name, one approach is to assert a single service name, whether installed by your chart itself or the wrapper chart. (If the top-level chart installs MySQL, and also installs your service, they will have the same Helm release name and the same generated hostname, independently of whether MySQL is a direct dependency of your chart.)
- name: MYSQL_HOST
  value: {{ printf "%s-mysql.%s.svc.cluster.local" .Release.Name .Release.Namespace | quote }}
Another is to pass it in the values.yaml configuration, optionally. The Sprig default function is useful here.
- name: MYSQL_HOST
  value: {{ .Values.mysqlHostname | default (printf "%s-mysql.%s.svc.cluster.local" .Release.Name .Release.Namespace) | quote }}
You can use a similar approach to either find the Secret the MySQL installation saves its passwords in or reconstruct it from configuration.
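For the password specifically, the same default-or-override pattern works with a secretKeyRef (a sketch, assuming the MySQL chart follows the common convention of storing the password in a Secret named <release>-mysql under the key mysql-password; mysqlSecretName is a hypothetical override value, not something the chart defines):

- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      # mysqlSecretName is a hypothetical override value; the default assumes
      # the conventional <release>-mysql Secret name used by the MySQL chart
      name: {{ .Values.mysqlSecretName | default (printf "%s-mysql" .Release.Name) | quote }}
      key: mysql-password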

Helm `pre-install` hook calling a script during helm install

I want to use the pre-install hook of Helm:
https://github.com/helm/helm/blob/master/docs/charts_hooks.md
In the docs it's written that you need to use an annotation, which is clear, but what is not clear is how to combine it:
apiVersion: ...
kind: ....
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
For my case I need to execute a bash script which creates some env variables. Where should I put this pre-install script inside my chart so that Helm can use it before installation?
I guess I need to create a file inside the templates folder called pre-install.yaml. Is that true? If yes, where should I put the commands which create the env variables during the installation of the chart?
UPDATE
The commands which I need to execute in the pre-install hook are like:
export DB=prod_sales
export DOMAIN=www.test.com
export THENANT=VBAS
A Helm hook launches some other Kubernetes object, most often a Job, which will launch a separate Pod. Environment variable settings only affect the current process and the children it launches later, in the same Docker container, in the same Pod. That is: you can't use mechanisms like Helm pre-install hooks or Kubernetes initContainers to set environment variables like this.
If you just want to set environment variables to fixed strings like you show in the question, you can directly set that in a Pod spec. If the variables are, well, variable, but you don't want to hard-code them in your Pod spec, you can also put them in a ConfigMap and then set environment variables from that ConfigMap. You can also use Helm templating to inject settings from install-time configuration.
env:
  - name: A_FIXED_VARIABLE
    value: A fixed value
  - name: SET_FROM_A_CONFIG_MAP
    valueFrom:
      configMapKeyRef:
        name: the-config-map-name
        key: someKey
  - name: SET_FROM_HELM
    value: {{ .Values.environmentValue | quote }}
With the specific values you're showing, the Helm values path is probably easiest. You can run a command like
helm install --set db=prod_sales --set domain=www.test.com ...
and then have access to .Values.db, .Values.domain, etc. in your templates.
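A matching env block in the Pod template (a small sketch; the variable names mirror the ones in the question) would then look like:

env:
  - name: DB
    value: {{ .Values.db | quote }}
  - name: DOMAIN
    value: {{ .Values.domain | quote }}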
If the value is really truly dynamic and you can't set it any other way, you can use a Docker entrypoint script to set it at container startup time. In this answer I describe the generic-Docker equivalents to this, including the entrypoint script setup.
You can take as an example the built-in Helm chart from the arc* project; here is the source code.
*Arc - a kind of bootstrapper for Laravel projects that can Dockerize/Kubernetize existing apps written in this PHP framework.
You can place the env entries directly in the Pod YAML under the templates folder. That will be the easiest option.

Kubernetes - different settings per environment

We have an app that runs on GKE Kubernetes and which expects an auth URL (to which the user will be redirected via the browser) to be passed as an environment variable.
We are using different namespaces per environment.
So our current pod config looks something like this:
env:
  - name: ENV
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: AUTH_URL
    value: https://auth.$(ENV).example.org
And it all works amazingly: we can have as many dynamic environments as we want, we just run kubectl apply -f config.yaml, and it works flawlessly without changing a single config file and without any third-party scripts.
Now for production we kind of want to use a different domain, so the general pattern https://auth.$(ENV).example.org does not work anymore.
What options do we have?
Since configs are in git repo, create a separate branch for prod environment
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Move this config to application level, and have a separate config file for the prod env - but this kind of goes against the 12-factor app?
Other...?
This seems like an ideal opportunity to use helm!
It's really easy to get started: simply install Tiller into your cluster (this applies to Helm 2; Helm 3 no longer uses Tiller).
Helm gives you the ability to create "charts" (which are like packages) which can be installed into your cluster. You can template these really easily. As an example, you might have your config.yaml look like this:
env:
  - name: AUTH_URL
    value: {{ .Values.auth.url }}
Then, within the helm chart you have a values.yaml which contains defaults for the url, for example:
auth:
  url: https://auth.namespace.example.org
You can use the --values option with helm to specify per-environment values.yaml files, or even use the --set flag on helm to override them when running helm install.
Take a look at the documentation here for information about how values and templating work in Helm. It seems perfect for your use case.
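As a concrete illustration (a sketch; the prod-values.yaml file name and the production domain are placeholders, not from the original answer), a production override file only needs to contain the value that differs:

# prod-values.yaml
auth:
  url: https://auth.example.com

# installed with something like:
#   helm install ./my-chart --values prod-values.yaml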
jaxxstorms' answer is helpful, I just want to add what that means to the options you proposed:
Since configs are in git repo, create a separate branch for prod environment.
I would not recommend separate branches in Git, since the purpose of branches is to allow for concurrent editing of the same data, but what you have is different data (different configurations for the cluster).
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Using Helm will solve this more elegantly. Instead of a script you use Helm to generate the different files for different environments. And you can still use kubectl (using the final files, which I would also check into Git, by the way).
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
This is a matter of opinion, but I would recommend in general to split up the deployments by applications and technologies. For example, when I deploy a cluster that runs 3 different applications A, B and C, and each application requires Nginx, CockroachDB and Go app servers, then I'll have 9 configuration files, which allows me to separately deploy or update each of the technologies in the app context. This is important for allowing separate deployment actions in a CI server such as Jenkins and follows the general separation of concerns.
Other...?
See jaxxstorms' answer about Helm.