Helm `pre-install` hook calling to script during helm install - kubernetes

I want to use the pre-install hook of Helm,
https://github.com/helm/helm/blob/master/docs/charts_hooks.md
In the docs it's written that you need to use an annotation, which is clear, but
what is not clear is how to combine it.
apiVersion: ...
kind: ....
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
For my case I need to execute a bash script which creates some env variables. Where should I put this pre-hook script inside my chart so that Helm can use it
before installation?
I guess I need to create a file called pre-install.yaml inside the templates folder, is that true? If yes, where should I put the commands which create the env variables during the installation of the chart?
UPDATE
The command which I need to execute in the pre-install is like:
export DB=prod_sales
export DOMAIN=www.test.com
export THENANT=VBAS

A Helm hook launches some other Kubernetes object, most often a Job, which will launch a separate Pod. Environment variable settings will only affect the current process and children it launches later, in the same Docker container, in the same Pod. That is: you can't use mechanisms like Helm pre-install hooks or Kubernetes initContainers to set environment variables like this.
If you just want to set environment variables to fixed strings like you show in the question, you can directly set that in a Pod spec. If the variables are, well, variable, but you don't want to hard-code them in your Pod spec, you can also put them in a ConfigMap and then set environment variables from that ConfigMap. You can also use Helm templating to inject settings from install-time configuration.
env:
  - name: A_FIXED_VARIABLE
    value: A fixed value
  - name: SET_FROM_A_CONFIG_MAP
    valueFrom:
      configMapKeyRef:
        name: the-config-map-name
        key: someKey
  - name: SET_FROM_HELM
    value: {{ .Values.environmentValue | quote }}
With the specific values you're showing, the Helm values path is probably easiest. You can run a command like
helm install --set db=prod_sales --set domain=www.test.com ...
and then have access to .Values.db, .Values.domain, etc. in your templates.
If the value is really truly dynamic and you can't set it any other way, you can use a Docker entrypoint script to set it at container startup time. In this answer I describe the generic-Docker equivalents to this, including the entrypoint script setup.
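An entrypoint script of the kind the answer above mentions, as an added example that is not part of the original answer: it reads KEY=VALUE lines from a file and exports them before starting the real command, without passing the file to `source` or `eval`. The `ENV_FILE` variable, the `/config/app.env` path and the function name are assumptions made for this example.

```shell
#!/bin/sh
# docker-entrypoint.sh - added example, not from the original answer.
# ENV_FILE and its default path are hypothetical names for this example.

# load_env_file FILE: export each KEY=VALUE line of FILE into this process.
# Lines are parsed, never passed to `source` or `eval`, so text in the
# file cannot run as shell code.
load_env_file() {
  [ -r "$1" ] || { echo "cannot read $1" >&2; return 1; }
  while IFS='=' read -r key value || [ -n "$key" ]; do
    case "$key" in
      ''|'#'*) continue ;;                      # blank line or comment
      [0-9]*|*[!A-Za-z0-9_]*)                   # not a valid variable name
        echo "skipping invalid name: $key" >&2
        continue ;;
    esac
    export "$key=$value"
  done < "$1"
  return 0
}

# Entrypoint tail: the exports above are inherited by the command that
# replaces this shell, which is why this works where a hook Job cannot.
if [ "$#" -gt 0 ]; then
  load_env_file "${ENV_FILE:-/config/app.env}" || exit 1
  exec "$@"
fi
```

In an image this file would be the ENTRYPOINT and the application command the CMD, so the application starts with the variables already set.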

You can take as an example the built-in helm-chart from arc* project, here is the source code.
*Arc - kind of bootstrapper for Laravel projects, that can Dockerize/Kubernetize existing apps written in this PHP framework.

You can place ENV in POD.yaml under the template folder. That will be the easiest option.

Related

how to define entrypoint command in dependency helm chart

I have this issue. I need to set up oauth2-proxy in Kubernetes via Helm, and I need it to use an injected Vault secret for configuration of the proxy. I know that this would be possible by defining
'command' : ['sh', '-c', 'source /vault/secrets/client-secret && '], in some override-values.yaml I would create, but the problem is that this Helm chart's values.yaml file does not provide any keyword like "command" and I am using it as a dependency chart so I cannot directly edit its manifests.
Is there any way I can define a command for a pod of a dependency Helm chart even if it does not have a command key in values? Chart link: https://artifacthub.io/packages/helm/oauth2-proxy/oauth2-proxy if somebody wants to see.
I also tried to reference secrets in the configuration file for the proxy, but I got an error that I should not provide values like this: client_secret=$(cat /vault/secrets/secret), and many other things.

Using Environment Variables with HELM

I plan to upgrade my project to HELM.
I have many environment variables that I have defined in deployment.yaml.
Is it best practice to define the environment variables in the values.yaml file or to put them in templates/deployment.yaml?
Can you help if there is a sample application you use?
Disclaimer: My answers are based on Helm 3. So let's get to it:
#1: No, in your values.yaml you define the static/default values. It's not the best approach to leave static values in your template files (like deployment.yaml). To override the values of values.yaml, the best practice is to use --set KEY=VALUE file. In this case, it is totally possible to get the environment variable.
#2: Can you give an example? Yes, sure.
For example, I want to install Elasticsearch on my cluster using helm so I use the command:
helm install elastic/elasticsearch --version 7.8.0
But I do not want to use the default values of the chart. So I went to https://hub.helm.sh/charts/elastic/elasticsearch and https://github.com/elastic/helm-charts/blob/7.8/elasticsearch/values.yaml, saw what's possible to change, then I create the command:
helm install elastic/elasticsearch --set minimumMasterNodes=1 --set protocol=https --version 7.8.0
But in my CD tool, the minimum master nodes are different values and since this is an environment variable I changed my command line to this:
helm install elastic/elasticsearch --set minimumMasterNodes="$MIN_MASTER_NODES" --set protocol=https --version 7.8.0
So, as a result, the command above will run with no problem in your CD tool once the MIN_MASTER_NODES environment variable is provided correctly.
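A way to hand a pipeline variable such as MIN_MASTER_NODES to Helm with a check first, as an added example that is not part of the original answer. `--set` splits its argument on commas into further key=value pairs, so the value is validated and written to a values file for `-f` instead; the release name `elasticsearch` and the function names are assumptions.

```shell
#!/bin/sh
# Added example, not from the original answer.
# helm --set parses its argument: a comma starts another key=value pair,
# so a pipeline variable should be validated before Helm sees it.

# is_positive_int VALUE: succeeds only for a decimal integer >= 1
# written without leading zeros.
is_positive_int() {
  case "$1" in
    ''|0*|*[!0-9]*) return 1 ;;
  esac
  return 0
}

# render_overrides VALUE: prints a values file for `helm install -f`.
# The keys are the ones used on the answer's command line.
render_overrides() {
  if ! is_positive_int "$1"; then
    echo "refusing minimumMasterNodes value: '$1'" >&2
    return 1
  fi
  printf 'minimumMasterNodes: %s\nprotocol: https\n' "$1"
}

# deploy_command VALUE FILE: writes FILE and prints the Helm command to run.
# The release name "elasticsearch" is an assumption for this example.
deploy_command() {
  render_overrides "$1" > "$2" || { rm -f "$2"; return 1; }
  echo "helm install elasticsearch elastic/elasticsearch -f $2 --version 7.8.0"
}

# Demo with constructed inputs; nothing is sent to a cluster.
for sample in "2" "1,protocol=http" ""; do
  tmp=$(mktemp)
  if cmd=$(deploy_command "$sample" "$tmp" 2>/dev/null); then
    echo "accepted '$sample': $cmd"
  else
    echo "rejected '$sample'"
  fi
  rm -f "$tmp"
done
```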
Your use of values.yaml to define environment vars is totally up to you. Is the value static? I'd have no problem leaving it in the deployment yaml. If it's a secret you should manage it either with k8s secrets or input it when you use helm install --set-value.. If the value is dynamic and is changed often or could be changed in the future that is the true use for values.yaml imo
There's four possible places you could set environment variables, and each has a use.
Is the value essentially fixed, any time you'd run the container in any environment? For example, consider setting a Python application to not buffer its log output or specifying the container-internal port number. Set these in the image's Dockerfile:
ENV PYTHONUNBUFFERED=1
ENV PORT=8000
Is the value more or less fixed, any time you'd run the container in Kubernetes? Or, can you reliably calculate the value? In these cases, you can set the value directly in your templates/deployment.yaml file, maybe with Helm templating.
env:
  - name: COORDINATOR_HOST
    value: {{ .Release.Name }}-coordinator # another Service in the same chart
  - name: DATABASE_DRIVER
    value: postgresql # this chart doesn't support MySQL
Does the value have a sensible default, but needs to be overridden sometimes? Put it in your chart's values.yaml.
# concurrency specifies the maximum number of concurrent tasks
# to launch.
concurrency: 4
This needs to be repeated in the templates/deployment.yaml as well
env:
  - name: CONCURRENCY
    value: {{ quote .Values.concurrency }}
Is the value only available at deployment time; or do you need to override one of these defaults? Use the helm install -f option to provide a per-environment value
databaseHost: myapp-pg.qa.example.com
or the similar helm install --set option. If it's reasonable to include a default value, also do so as in the previous example, but if not, you can use the required template function to give a reasonable error.
env:
  - name: DATABASE_HOST
    value: {{ .Values.databaseHost | required "a databaseHost must be provided" }}
You can use any or all of these options, depending on the specific values, even within the same chart.
The one pattern I don't particularly recommend is giving an open-ended list of environment variables (or other raw Kubernetes YAML) in the values file. As an operator, this is hard to consume, and it especially doesn't interact well with the helm install --set option. I tend to prefer listing out each configurable option in the Helm values file, and would modify the templates/*.yaml (maybe behind a deploy-time flag) if I needed more advanced customization.

Extending deployments with default configuration

I have a config map that defines some variables like environment that are then passed into a lot of deployment configurations like this
- name: ENV
  valueFrom:
    configMapKeyRef:
      name: my-config-map
      key: ENV
Secrets and some volumes like SSL certs are also common across the configs. Is there some Kubernetes type that I could use to create a base service deployment that extends a normal deployment? Or some other way to deal with this? Also using kustomize, there might be an option there.
You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables etc into pods at creation time.
Before you start using PodPreset you need to take a few steps:
First, you need to enable the API type settings.k8s.io/v1alpha1/podpreset, which can be done by including settings.k8s.io/v1alpha1=true in the --runtime-config option for the API server.
Enable the PodPreset admission controller. You can do it by including PodPreset in the --enable-admission-plugins option value specified for the API server.
After that you need to create PodPreset objects in the namespace you will work in, by typing kubectl apply -f preset.yaml
Please refer to official documentation to see how it works.

Can I modify container's environment variables without restarting pod using kubernetes

I have a running pod and I want to change one of its container's environment variables and make it work immediately. Can I achieve that? If I can, how do I do that?
Simply put and in kube terms, you can not.
Environment for linux process is established on process startup, and there are certainly no kube tools that can achieve such goal.
For example, if you make a change to your Deployment (I assume you use it to create pods) it will roll the underlying pods.
Now, that said, there is a really hacky solution reported under Is there a way to change the environment variables of another process in Unix? that involves using GDB
Also, remember that even if you could do that, there is still application logic that would need to watch for such changes instead of, as it usually is now, just evaluate configuration from envs during startup.
This worked with me
kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N
check the official documentation here
Another approach for runtime pods you can get into the Pod command line and change the variables in the runtime
RUN kubectl exec -it <pod_name> -- /bin/bash
Then
Run export VAR1=VAL1 && export VAR2=VAL2 && your_cmd
I'm not aware of any way to do it and I can't think of real world scenario where this makes too much sense.
Usually you have to restart a process for it to notice the changed environment variables and the easiest way to do that is restart the pod.
The solution closest to what you seem to want is to create a deployment and then use kubectl edit (kubectl edit deploy/name) to modify its environment variables. A new pod is started and the old one is terminated after you save.
Kubernetes is designed in such a way that any changes to the pod should be redeployed through the config. If you go messing with pods that have already been deployed you can end up with weird clusters that are hard to debug.
If you really want to you can run additional commands in your running pod using kubectl exec, but this is only recommended for debug purposes.
kubectl exec -it <pod_name> export VARIABLENAME=<thing>
If you are using Helm 3> according to the documentation:
Automatically Roll Deployments
Often times ConfigMaps or Secrets are
injected as configuration files in containers or there are other
external dependencies changes that require rolling pods. Depending on
the application a restart may be required should those be updated with
a subsequent helm upgrade, but if the deployment spec itself didn't
change the application keeps running with the old configuration
resulting in an inconsistent deployment.
The sha256sum function can be used to ensure a deployment's annotation
section is updated if another file changes:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
In the event you always want
to roll your deployment, you can use a similar annotation step as
above, instead replacing with a random string so it always changes and
causes the deployment to roll:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
[...]
Both of these methods allow your Deployment to leverage the built in update strategy
logic to avoid taking downtime.
NOTE: In the past we recommended using the --recreate-pods flag as
another option. This flag has been marked as deprecated in Helm 3 in
favor of the more declarative method above.
It is hard to change from outside. But it is easy to change from inside. Your app running in the pod can change it. Just expose an API to change the environment variable.
You can use a ConfigMap with volumes to update environment variables on the go.
Refer: https://itnext.io/how-to-automatically-update-your-kubernetes-app-configuration-d750e0ca79ab

Kubernetes - different settings per environment

We have an app that runs on GKE Kubernetes and which expects an auth url (to which user will be redirected via his browser) to be passed as environment variable.
We are using different namespaces per environment
So our current pod config looks something like this:
env:
  - name: ENV
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: AUTH_URL
    value: https://auth.$(ENV).example.org
And all works amazingly, we can have as many dynamic environments as we want, we just do apply -f config.yaml and it works flawlessly without changing a single config file and without any third party scripts.
Now for production we kind of want to use different domain, so the general pattern https://auth.$(ENV).example.org does not work anymore.
What options do we have?
Since configs are in git repo, create a separate branch for prod environment
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
Other...?
This seems like an ideal opportunity to use helm!
It's really easy to get started, simply install tiller into your cluster.
Helm gives you the ability to create "charts" (which are like packages) which can be installed into your cluster. You can template these really easily. As an example, you might have your config.yaml look like this:
env:
  - name: AUTH_URL
    value: {{ .Values.auth.url }}
Then, within the helm chart you have a values.yaml which contains defaults for the url, for example:
auth:
  url: https://auth.namespace.example.org
You can use the --values option with helm to specify per environment values.yaml files, or even use the --set flag on helm to override them when using helm install.
Take a look at the documentation here for information about how values and templating works in helm. It seems perfect for your use case
jaxxstorms' answer is helpful, I just want to add what that means to the options you proposed:
Since configs are in git repo, create a separate branch for prod environment.
I would not recommend separate branches in GIT since the purpose of branches is to allow for concurrent editing of the same data, but what you have is different data (different configurations for the cluster).
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that,
else use config.yaml) - but with this approach we cannot use kubectl
directly anymore
Using Helm will solve this more elegantly. Instead of a script you use helm to generate the different files for different environments. And you can use kubectl (using the final files, which I would also check into GIT btw.).
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
This is a matter of opinion but I would recommend in general to split up the deployments by applications and technologies. For example when I deploy a cluster that runs 3 different applications A B and C and each application requires a Nginx, CockroachDB and Go app-servers then I'll have 9 configuration files, which allows me to separately deploy or update each of the technologies in the app context. This is important for allowing separate deployment actions in a CI server such as Jenkins and follows general separation of concerns.
Other...?
See jaxxstorms' answer about Helm.