How to pass GitLab CI/CD variables to a Kubernetes (AKS) deployment.yaml

I have a Node.js (Express) project checked into GitLab, and it runs in Kubernetes. I know we can set env variables in Kubernetes (on Azure, AKS) in the deployment.yaml file.
How can I pass GitLab CI/CD env variables to the Kubernetes (AKS) deployment.yaml file?

You can develop your own Helm charts. This will pay off in the long run.
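For example, a deploy job can pass GitLab CI/CD variables straight to the chart with --set (a minimal sketch; the release name, chart path and value names here are illustrative):
deploy:
  stage: deploy
  script:
    # pass GitLab CI/CD variables into the chart as Helm values
    - helm upgrade --install my-app ./chart --set image.tag="${CI_COMMIT_SHORT_SHA}" --set env.myVariable="${MY_VARIABLE}"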
Another easy and versatile approach is to put ${MY_VARIABLE} placeholders into the deployment.yaml file. Then, during the pipeline run, use the envsubst command in the deployment job to substitute the variables with their respective values and deploy the resulting file.
Example deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-${MY_VARIABLE}
  labels:
    app: nginx
spec:
  replicas: 3
(...)
Example job:
(...)
deploy:
  stage: deploy
  script:
    - envsubst < deployment.yaml > deployment-${CI_JOB_NAME}.yaml
    - kubectl apply -f deployment-${CI_JOB_NAME}.yaml
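Note that envsubst is part of GNU gettext and is not included in every kubectl image; if it is missing, install it in the job first (the package names below assume an Alpine- or Debian-based image):
deploy:
  stage: deploy
  before_script:
    # envsubst ships with GNU gettext; install it if the image does not provide it
    - apk add --no-cache gettext                             # Alpine-based image
    # - apt-get update && apt-get install -y gettext-base    # Debian-based image
  script:
    - envsubst < deployment.yaml > deployment-${CI_JOB_NAME}.yaml
    - kubectl apply -f deployment-${CI_JOB_NAME}.yaml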

I'm going to give you an easy solution that may or may not be "the solution".
To do what you want, you could simply create a Secret from your GitLab env variables during CD, before launching your deployment. This lets you consume the Secret as environment variables inside the Deployment.
If you go this route, you will need to think about how to delete or recreate the Secret when you want to update it, so that the step stays idempotent (see the sketch below).
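One way to keep that step idempotent is to render the Secret with --dry-run and pipe it through kubectl apply, so re-running the job updates it in place (a sketch; the Secret name and variable are illustrative):
# create or update the Secret from a GitLab CI/CD variable on every deploy
kubectl create secret generic app-env \
  --from-literal=MY_VARIABLE="${MY_VARIABLE}" \
  --dry-run=client -o yaml | kubectl apply -f -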

Another solution would be to create the thing you are deploying as a Helm Chart. This would allow you to have specific variables (called values) that you can use in the templating and override at install / upgrade time.
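For instance, the chart can expose the image tag (or any other setting) as a value that the pipeline overrides at deploy time (an illustrative sketch, not taken from the question's project):
# values.yaml
image:
  tag: latest
# templates/deployment.yaml (fragment)
      containers:
        - name: app
          image: "registry.gitlab.com/repo-name/image:{{ .Values.image.tag }}"
# deploy job in .gitlab-ci.yml
  script:
    - helm upgrade --install my-app ./chart --set image.tag="${CI_COMMIT_SHORT_SHA}"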
There are many articles around getting set up with something like this.
Here is one specifically around the context of CI/CD: https://medium.com/@gajus/the-missing-ci-cd-kubernetes-component-helm-package-manager-1fe002aac680.
Another specifically around GitLab: https://medium.com/@yanick.witschi/automated-kubernetes-deployments-with-gitlab-helm-and-traefik-4e54bec47dcf

For future readers: another way is to use a template file and generate the deployment.yaml from it using envsubst.
Template file:
# strapi-deployment.tmpl
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-deployment
  namespace: strapi
  labels:
    app: strapi
# deployment specifications
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi
  # pod specifications
  template:
    metadata:
      labels:
        app: strapi
    # pod blueprints
    spec:
      containers:
        - name: strapi-container
          image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}
          imagePullPolicy: Always
      imagePullSecrets:
        - name: gitlab-registry-secret
deploy stage in .gitlab-ci.yml
(...)
deploy:
  stage: deploy
  script:
    # deploy resources in k8s cluster
    - envsubst < strapi-deployment.tmpl > strapi-deployment.yaml
    - kubectl apply -f strapi-deployment.yaml
As defined in image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}, IMAGE_TAG is an environment variable defined in GitLab. envsubst goes through strapi-deployment.tmpl, substitutes any variables defined there, and generates the strapi-deployment.yaml file.
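Note that by default envsubst replaces every exported ${VAR} it finds. If the manifest contains other $-expressions that should stay literal, GNU envsubst lets you list exactly which variables to substitute:
# substitute only IMAGE_TAG and leave any other $-expressions untouched
envsubst '${IMAGE_TAG}' < strapi-deployment.tmpl > strapi-deployment.yaml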

The sed command helped me with this.
In Deployment.yaml, use a placeholder like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  # Other configs bla-bla-bla
spec:
  containers:
    - name: app
      image: my.registry./myapp:<VERSION>
And in .gitlab-ci.yml use sed:
deploy:
  stage: deploy
  image: kubectl-img
  script:
    # - kubectl bla-bla-bla whatever you want to do before the apply command
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" Deployment.yaml
    - kubectl apply -f Deployment.yaml
So the resulting Deployment.yaml will contain the CI_COMMIT_SHORT_SHA value instead of <VERSION>.
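If the substituted value can itself contain slashes (for example a full image path), use a different sed delimiter such as | so the expression stays valid:
# same substitution with | as the delimiter, safe for values containing /
sed -i "s|<VERSION>|${CI_COMMIT_SHORT_SHA}|g" Deployment.yaml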

Related

Build Kustomize with Helm Fails to Build

kustomize build --enable-helm .
I have the following project structure:
project
  - helm-k8s
    - values.yml
    - Chart.yml
    - templates
      - base
        - project-namespace.yml
        - grafana
          - grafana-service.yml
          - grafana-deployment.yml
          - grafana-datasource-config.yml
        - prometheus
          - prometheus-service.yml
          - prometheus-deployment.yml
          - prometheus-config.yml
          - prometheus-roles.yml
        - kustomization.yml
      - prod
        - kustomization.yml
      - test
        - kustomization.yml
I'm trying to build my kustomization file using helm like below:
project/helm-k8s/templates/base/$ kubectl kustomize build . --enable-helm -> dummy.yml
I get an error message like this:
project/helm-k8s/templates/base$ kubectl kustomize . --enable-helm
error: accumulating resources: accumulation err='accumulating resources from 'project-namespace.yml': missing metadata.name in object {{v1 Namespace} {{ } map[name:] map[]}}': must build at directory: '/home/my-user/project/helm-k8s/templates/base/project-namespace.yml': file is not directory
Is it not possible for kustomize to use the values.yml which is located directly under helm-k8s folder and create the final manifest for my cluster? What am I doing wrong here?
EDIT: Here is what my kustomization.yml looks like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: open-electrons-monitoring-kustomization
resources:
  # 0. Get the namespaces first
  - project-namespace.yml
  # 1. Set up monitoring services (prometheus)
  #- monitoring/prometheus/prometheus-roles.yml
  - prometheus/prometheus-config.yml
  - prometheus/prometheus-roles.yml
  - prometheus/prometheus-deployment.yml
  - prometheus/prometheus-service.yml
  # 2. Set up monitoring services (grafana)
  - grafana/grafana-datasource-config.yml
  - grafana/grafana-deployment.yml
  - grafana/grafana-service.yml
I think you may have misunderstood the use of the --enable-helm parameter. It does not allow kustomize to perform helm-style templating on files, so when you write:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.app.namespace }}
  labels:
    name: {{ .Values.app.namespace }}
That doesn't do anything useful. It just generates invalid YAML output.
The --enable-helm option allows you to explode Helm charts using Kustomize; see here for the documentation, but for example it allows you to process a kustomization.yaml file like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: traefik
    repo: https://helm.traefik.io/traefik
    includeCRDs: true
    releaseName: example
    version: 20.8.0
    valuesInline:
      deployment:
        replicas: 3
      logs:
        access:
          enabled: true
Running kubectl kustomize --enable-helm will cause kustomize to fetch the helm chart and run helm template on it, producing YAML manifests on stdout.
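For example, to render the chart and apply the result in one go (the output file name is arbitrary):
kubectl kustomize --enable-helm . > manifests.yaml
kubectl apply -f manifests.yaml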

How do I set helm values (not files) in ArgoCD Application spec

I looked all over the ArgoCD docs for this but somehow I cannot seem to find an answer. I have an application spec like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      valueFiles:
        - my-values.yaml
    path: .
    repoURL: ssh://git@blah.git
    targetRevision: HEAD
However, I also need to specify a particular Helm value (like you'd do with --set in the helm command). I see in the ArgoCD web UI that it has a spot for Values, but I have tried every combination of entries I can think of (somekey=somevalue, somekey:somevalue, somekey,somevalue). I also tried editing the manifest directly, but I still get similar errors when doing so.
The error is long nonsense that ends with error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
What is the correct syntax to set a single value, either through the web UI or the manifest file?
You would use parameters via spec.source.helm.parameters, something like:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: my-project
  source:
    repoURL: https://charts.my-company.com
    targetRevision: "1234"
    chart: my-chart
    helm:
      parameters:
        - name: my.helm.key
          value: some-val
  destination:
    name: k8s-dev
    namespace: my-ns
Sample from Argo Docs - https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#build-environment
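If you prefer the CLI to editing the manifest, the same Helm parameter can also be set with argocd app set (the application name here is illustrative):
argocd app set my-app -p my.helm.key=some-val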
To override just a few arbitrary parameters in the values, you can indeed use parameters: as the equivalent of Helm's --set option, or fileParameters: as the equivalent of --set-file:
...
helm:
  # Extra parameters to set (same as setting them through values.yaml, but these take precedence)
  parameters:
    - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
      value: mydomain.example.com
    - name: "ingress.annotations.kubernetes\\.io/tls-acme"
      value: "true"
      forceString: true # ensures that value is treated as a string
  # Use the contents of files as parameters (uses Helm's --set-file)
  fileParameters:
    - name: config
      path: files/config.json
But to answer your original question: for the "Values" option in the GUI, you pass a literal YAML block in the manifest, like:
helm:
  # Helm values files for overriding values in the helm chart
  # The path is relative to the spec.source.path directory defined above
  valueFiles:
    - values-prod.yaml
  # Values file as block file
  values: |
    ingress:
      enabled: true
      path: /
      hosts:
        - mydomain.example.com
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
      labels: {}
Check ArgoCD sample application for more details.

Invoking a script file in helm k8s job template

I am trying to invoke a script inside a helm k8s job template. When I run helm with
helm install ./mychartname/ --generate-name
The job runs; however, it couldn't find the script file (run.sh). Is this possible with Helm?
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
spec:
  template:
    spec:
      containers:
        - name: pre-install
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ['sh', '-c', '../run.sh']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
Here is my directory structure:
├── mychartname
│   ├── templates
│   │   ├── test.job
│   │── run.sh
Below is a way to achieve running a script inside a Helm template:
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job-v05
spec:
  template:
    spec:
      containers:
        - name: pre-install-v05
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c", {{ .Files.Get "scripts/run.sh" }}]
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
In general, Kubernetes only runs software that's packaged in Docker images. Kubernetes will never run things off of your local system. In your example, the cluster will create a new unmodified busybox container, then from that container's root directory, try to run sh -c ../run.sh; since that script isn't part of the stock busybox image, it won't run.
The best approach here is to build an image out of your script and push it to a Docker registry. This is the standard way to run any custom software in Kubernetes, so you probably already have a workflow to do it. (For a test setup in Minikube you can point your local Docker at the Minikube environment and build a local image, but this doesn't scale to things hosted in the cloud or otherwise running on a multi-host environment.)
In principle you could put the script in a ConfigMap in a separate Helm template file, mount it into your job spec, and run it (you may need to explicitly run it with sh run.sh to get around file-permission issues); a sketch of this follows below. Depending on your environment this may work as well as actually having an image, but if you need to update and redeploy the Helm chart every time the script changes, it's "more normal" to do the same work by having your CI system build and upload a new image (and it'll be the same approach as for your application deployments).
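A rough sketch of that ConfigMap approach, assuming the script lives at scripts/run.sh in the chart root (all names here are illustrative):
# templates/pre-install-script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pre-install-script
data:
  run.sh: |-
{{ .Files.Get "scripts/run.sh" | indent 4 }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-script-job
spec:
  template:
    spec:
      containers:
        - name: pre-install
          image: busybox
          # run via sh so the mounted file does not need the executable bit
          command: ["sh", "/scripts/run.sh"]
          volumeMounts:
            - name: script
              mountPath: /scripts
      volumes:
        - name: script
          configMap:
            name: pre-install-script
      restartPolicy: OnFailure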

How to set java environment variables in a helm chart?

What is the best practice for setting environment variables for a Java app's deployment in a Helm chart, so that I can use the same chart for dev and prod environments? I have separate Kubernetes deployments for both environments.
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc ..."
Similarly, my prod variables would be something like
"-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc ..."
Now, how can I leverage Helm to write a single chart but create separate sets of pods with different properties according to the environment, as in
helm install my-app --set env=prod ./test-chart
or
helm install my-app --set env=dev ./test-chart
The best way is to use a single deployment template and a separate values file for each environment.
This is not limited to environment variables used by the application; the same approach can be applied to any environment-specific configuration.
Example:
deployment.yaml
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "{{ .Values.opts }}"
values-dev.yaml
# system opts
opts: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc "
values-prod.yaml
# system opts
opts: "-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc "
Then specify the related value file in the helm command.
For example, deploying to the dev environment:
helm install -f values-dev.yaml my-app ./test-chart
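If you specifically want the --set style from your question, the same single value can also be overridden inline at install time (quote the value because it contains spaces; shortened here for brevity):
helm install my-app ./test-chart --set opts="-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc"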

How to set default value for Kubernetes ConfigMap (or does Helm overwrite ConfigMap values?)

Use case:
I want to be able to re-run a job from where the first job left off. I am using Helm to deploy into Kubernetes.
I have the idea of saving the state of the first job in a ConfigMap. The ConfigMap yaml defining the ConfigMap is packaged up with the job and both are deployed at the same time with Helm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: NameOfMyConfigMap
data:
  someKey: someValue
  MY_STATE: state   # <---- See below as to whether this should be included or not
The job is run with an ENV variable set from the ConfigMap:
env:
  - name: MY_STATE
    valueFrom:
      configMapKeyRef:
        name: NameOfMyConfigMap
        key: MY_STATE
The job runs a script that checks whether $MY_STATE is set. If it is not set, the job is being run for the first time; otherwise the job shuts down the already running first job, saves the first job's state into the MY_STATE ConfigMap variable, and launches the job again using the saved state.
If I don't declare the MY_STATE key in the initial ConfigMap definition then the first run of the job will fail, as the ENV definition above cannot find the ConfigMap variable.
If I do declare the value (MY_STATE: "") in the ConfigMap definition, then the first deployment will work. However, if I re-deploy the job with helm upgrade then does the value I enter in the definition not overwrite an existing value in the existing ConfigMap?
What is the best method of storing state in between runs of the same job?
Have you tried using volumes? In this case it should not be overwritten when using helm upgrade.
Could an example like this work? (From
https://groups.google.com/forum/#!msg/kubernetes-users/v2806ezEdPk/1geJCO8-AQAJ)
apiVersion: batch/v1
kind: Job
metadata:
  name: keystore-configmap-job
spec:
  template:
    metadata:
      name: keystore-configmap
    spec:
      containers:
        - name: keystore
          image: ubuntu
          volumeMounts:
            - name: keystore-configmap-volume
              mountPath: /config-base64
          command: [ "sh", "-c", "cat /config-base64/keystore.jks | base64 --decode | sha256sum" ]
      restartPolicy: Never
      volumes:
        - name: keystore-configmap-volume
          configMap:
            name: keystore-configmap