Use external variable value in gitlab ci yaml - kubernetes

I need to deploy the same application to different namespaces in a Kubernetes cluster. I have a template manifest file, app-template.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  BASE_PATH: "https://{DOMAIN}/{NAME}/app/"
  NON_PROXY_DOMAINS: "{DOMAIN}/{NAME}"
  PROXY_SERVER: "{DOMAIN}"
In bash I would usually replace those placeholders with variables I define, write the result to a new manifest file, and apply it in a namespace, something like this:
export NAMESPACE="mynamespace"
export DOMAIN="mydomain"
export NAME="myname"
sed "s/{DOMAIN}/${DOMAIN}/g; s/{NAME}/${NAME}/g;" app-template.yml | tee app-${NAMESPACE}.yml
kubectl apply -f app-${NAMESPACE}.yml --namespace $NAMESPACE
I would like to do that the GitLab CI way. I have a web app which is going to provide those 3 variables:
const varParams = {
  token: TOKEN,
  "variables[namespace]": namespace,
  "variables[domain]": url,
  "variables[name]": shortnerUrl,
};
I would like to pass those variables to my template file. I tried this:
stages:
  - test

variables:
  DOMAIN: <>
  NAME: <>
  NAMESPACE: <>

deploy:
  stage: test
  image: devth/helm:latest
  environment: dev
  script:
    - echo "${NAMESPACE}"
    # copy secret from default namespace
    - 'kubectl get secret pull-secret --namespace=app-main -o yaml | sed "s/namespace: .*/namespace: ${NAMESPACE}/" | kubectl apply -f -'
    - 'sed "s/{DOMAIN}/${DOMAIN}/g; s/{NAME}/${NAME}/g;" app-template.yml | tee app-${NAMESPACE}.yml'
I'm currently running a manual pipeline and defining all 3 variables by hand, but is there another, more "GitLab way" to do it instead of replacing the values with sed, creating new files, and applying them with kubectl?

Related

How to create a Kubernetes configMap from part of a yaml file?

As far as I know, the way to create a ConfigMap in Kubernetes from a file is to use the --from-file option of kubectl.
What I am looking for is a way to load only part of the YAML file into the ConfigMap.
Example:
Let's say I have this yml file:
family:
  Boys:
    - name: Joe
    - name: Bob
    - name: dan
  Girls:
    - name: Alice
    - name: Jane
Now I want to create a configMap called 'boys' which will include only the 'Boys' section.
Possible?
Another thing that could help, if the above is not possible: when exporting the ConfigMap as environment variables to a pod (using envFrom), I'd like to be able to export only part of the ConfigMap.
Both options will work for me.
Any idea?
A ConfigMap uses keys and values for its configuration. Based on your example, you have multiple arrays of data, each holding multiple values under their own keys. You can, however, create multiple ConfigMaps from different files to work around this.
First you need to create the files to build the ConfigMaps from, guided by the documentation.
The first file, called Boys.yaml:
# Env-files contain a list of environment variables.
# These syntax rules apply:
# Each line in an env file has to be in VAR=VAL format.
# Lines beginning with # (i.e. comments) are ignored.
# Blank lines are ignored.
# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).
name=Joe
name=Bob
name=Dan
The second file, called Girls.yaml:
name=Alice
name=Jane
Create your ConfigMap:
kubectl create configmap NameOfYourConfigmap --from-env-file=PathToYourFile/Boys.yaml
where the output is similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp:
  name: NameOfYourConfigmap
  namespace: default
  resourceVersion:
  uid:
data:
  name: Joe
  name: Bob
  name: Dan
Finally, you can pass these ConfigMaps to a pod or deployment using configMapRef entries:
envFrom:
  - configMapRef:
      name: NameOfYourConfigmap-Boys
  - configMapRef:
      name: NameOfYourConfigmap-Girls
ConfigMaps cannot contain rich YAML data, only key-value pairs. So if you want to have a list of things, you need to express it as a multiline string.
With that in mind you could use certain tools, such as yq, to query your input file and select the part you want.
For example:
podman run --rm --interactive bluebrown/tpl '{{ .family.Boys | toYaml }}' < fam.yaml \
| kubectl create configmap boys --from-file=Boys=/dev/stdin
The result looks like this
apiVersion: v1
kind: ConfigMap
metadata:
  name: boys
  namespace: sandbox
data:
  Boys: |+
    - name: Joe
    - name: Bob
    - name: dan
You could also encode the file, or part of it, with base64 and use that as an environment variable, since you get a single, easily processable string out of it. For example:
$ podman run --rm --interactive bluebrown/tpl \
'{{ .family.Boys | toYaml | b64enc }}' < fam.yaml
# use this string as env variable and decode it in your app
LSBuYW1lOiBKb2UKLSBuYW1lOiBCb2IKLSBuYW1lOiBkYW4K
Or with kubectl set env, which you could further combine with a dry run if required.
podman run --rm --interactive bluebrown/tpl \
'YAML_BOYS={{ .family.Boys | toYaml | b64enc }}' < fam.yaml \
| kubectl set env -e - deploy/myapp
Another thing: since YAML is a superset of JSON, in many cases you can convert YAML to JSON or at least use JSON-like syntax.
This can be useful in such a scenario in order to express the data as a single-line string rather than having to use multiline syntax. It's less fragile.
Every YAML parser will be able to parse JSON just fine, so if you are parsing the string in your app, you won't have problems.
$ podman run --rm --interactive bluebrown/tpl '{{ .family.Boys | toJson }}' < fam.yaml
[{"name":"Joe"},{"name":"Bob"},{"name":"dan"}]
Disclaimer: I created the tool tpl used above. As mentioned, you might as well use alternative tools such as yq.
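For reference, a roughly equivalent extraction with yq could look like this (just a sketch, assuming mikefarah's yq v4 syntax and the same fam.yaml as above):

# Extract only the Boys list from fam.yaml and store it under a single key
yq '.family.Boys' fam.yaml \
  | kubectl create configmap boys --from-file=Boys=/dev/stdin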

How to trigger Tekton Pipeline from GitLab CI directly with predefined GitLab CI variables & Tekton logs streamed into GitLab Pipeline logs

We have an AWS EKS cluster running (set up using Pulumi), where we installed Tekton as described in the Cloud Native Buildpacks Tekton docs. The example project is available.
Our Tekton Pipeline is configured like this (also derived from the Cloud Native Buildpacks Tekton docs):
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
spec:
  params:
    - name: IMAGE
      type: string
      description: image URL to push
    - name: SOURCE_URL
      type: string
      description: A git repo url where the source code resides.
    - name: SOURCE_REVISION
      description: The branch, tag or SHA to checkout.
      default: ""
  workspaces:
    - name: source-workspace # Directory where application source is located. (REQUIRED)
    - name: cache-workspace # Directory where cache is stored (OPTIONAL)
  tasks:
    - name: fetch-repository # This task fetches a repository from github, using the `git-clone` task you installed
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: "$(params.SOURCE_URL)"
        - name: revision
          value: "$(params.SOURCE_REVISION)"
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
    - name: buildpacks # This task uses the `buildpacks` task to build the application
      taskRef:
        name: buildpacks
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: source-workspace
        - name: cache
          workspace: cache-workspace
      params:
        - name: APP_IMAGE
          value: "$(params.IMAGE)"
        - name: BUILDER_IMAGE
          value: paketobuildpacks/builder:base # This is the builder we want the task to use (REQUIRED)
We added SOURCE_URL and SOURCE_REVISION as parameters already.
The question is: How can we trigger a Tekton PipelineRun from GitLab CI (inside our .gitlab-ci.yml) adhering to the following requirements:
simplest possible approach
Do not use the extra complexity introduced by Tekton Triggers (incl. commit-status-tracker) but still keep GitLab as the source of truth (e.g. see green/red pipeline runs on commits etc.)
report successfully run Tekton Pipelines as green GitLab CI Pipelines & failed Tekton Pipelines as red GitLab CI Pipelines
preserve/stream the Tekton Pipeline logs into GitLab CI Pipeline logs - both in case of errors or success inside the Tekton Pipelines
use GitLab CI Predefined Variables for a generic approach
TLDR;
I created a fully comprehensible example project showing all necessary steps and running pipelines here: https://gitlab.com/jonashackt/microservice-api-spring-boot/ with the full .gitlab-ci.yml to directly trigger a Tekton Pipeline:
image: registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0

variables:
  AWS_DEFAULT_REGION: 'eu-central-1'

before_script:
  - mkdir ~/.kube
  - echo "$EKSKUBECONFIG" > ~/.kube/config
  - echo "--- Testdrive connection to cluster"
  - kubectl get nodes

stages:
  - build

build-image:
  stage: build
  script:
    - echo "--- Create parameterized Tekton PipelineRun yaml"
    - tkn pipeline start buildpacks-test-pipeline
        --serviceaccount buildpacks-service-account-gitlab
        --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
        --workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
        --param IMAGE=$CI_REGISTRY_IMAGE
        --param SOURCE_URL=$CI_PROJECT_URL
        --param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
        --dry-run
        --output yaml > pipelinerun.yml
    - echo "--- Trigger PipelineRun in Tekton / K8s"
    - PIPELINE_RUN_NAME=$(kubectl create -f pipelinerun.yml --output=jsonpath='{.metadata.name}')
    - echo "--- Show Tekton PipelineRun logs"
    - tkn pipelinerun logs $PIPELINE_RUN_NAME --follow
    - echo "--- Check if Tekton PipelineRun Failed & exit GitLab Pipeline accordingly"
    - kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}' | grep Failed && exit 1 || exit 0
Here are the brief steps you need to do:
1. Choose a base image for your .gitlab-ci.yml providing aws CLI, kubectl and Tekton CLI (tkn)
This is entirely up to you. I created an example project https://gitlab.com/jonashackt/aws-kubectl-tkn which provides an image based on the official https://hub.docker.com/r/amazon/aws-cli image, accessible via registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0.
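If you prefer to build such an image yourself, a minimal Dockerfile could look roughly like this. This is only a sketch, not the exact contents of the example project's image: the pinned versions, download URLs and the extra yum packages are assumptions you should adjust to your cluster and Tekton installation.

FROM amazon/aws-cli:latest

# Assumed versions; pin whatever matches your cluster and Tekton installation
ARG KUBECTL_VERSION=v1.21.0
ARG TKN_VERSION=0.21.0

# tar/gzip/curl may or may not be present in the base image
RUN yum install -y -q curl tar gzip

# Install kubectl
RUN curl -sSL -o /usr/local/bin/kubectl \
      "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" \
    && chmod +x /usr/local/bin/kubectl

# Install the Tekton CLI (tkn)
RUN curl -sSL "https://github.com/tektoncd/cli/releases/download/v${TKN_VERSION}/tkn_${TKN_VERSION}_Linux_x86_64.tar.gz" \
      | tar -xz -C /usr/local/bin tkn

# The aws-cli image uses "aws" as entrypoint; reset it so GitLab CI can run its own scripts
ENTRYPOINT []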
2. CI/CD Variables for aws CLI & Kubernetes cluster access
Inside your GitLab CI project (or better: inside the group your GitLab CI project resides in) you need to create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as CI/CD variables holding the aws CLI credentials (be sure to mask them when creating them, to prevent them from being printed into the GitLab CI logs). Depending on your EKS cluster's (or other K8s cluster's) config, you need to provide a kubeconfig to access your cluster. One way is to create a GitLab CI/CD variable like EKSKUBECONFIG holding the necessary file (e.g. in the example project this is provided by Pulumi with pulumi stack output kubeconfig > kubeconfig). In this Pulumi-based setup there are no secret credentials inside the kubeconfig, so the variable doesn't need to be masked. But be aware of possible credentials here and protect them accordingly if needed.
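If you'd rather not store a kubeconfig as a CI/CD variable at all, a minimal alternative is to generate it on the fly from the AWS credentials. This is just a sketch (it assumes the CI credentials are allowed to call eks:DescribeCluster, and my-eks-cluster is a placeholder for your actual cluster name):

before_script:
  # Generate ~/.kube/config instead of reading it from the EKSKUBECONFIG variable
  - aws eks update-kubeconfig --region "$AWS_DEFAULT_REGION" --name my-eks-cluster
  - kubectl get nodes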
Also define AWS_DEFAULT_REGION containing your EKS cluster's region:
# As we need kubectl, aws & tkn CLI we use https://gitlab.com/jonashackt/aws-kubectl-tkn
image: registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0

variables:
  AWS_DEFAULT_REGION: 'eu-central-1'
3. Use kubeconfig and testdrive cluster connection in before_script section
Preparing things we need later inside other steps could be done inside the before_script section. So let's create the directory ~/.kube there and create the file ~/.kube/config from the contents of the variable EKSKUBECONFIG. Finally fire a kubectl get nodes to check if the cluster connection is working. Our before_script section now looks like this:
before_script:
  - mkdir ~/.kube
  - echo "$EKSKUBECONFIG" > ~/.kube/config
  - echo "--- Testdrive connection to cluster"
  - kubectl get nodes
4. Pass parameters to Tekton PipelineRun
Passing parameters via kubectl isn't trivial - it may even require a templating engine like Helm. But luckily the Tekton CLI has something for us: tkn pipeline start accepts parameters. So we can transform the Cloud Native Buildpacks Tekton PipelineRun YAML file into a tkn CLI command like this:
tkn pipeline start buildpacks-test-pipeline \
--serviceaccount buildpacks-service-account-gitlab \
--workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc \
--workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc \
--param IMAGE=registry.gitlab.com/jonashackt/microservice-api-spring-boot \
--param SOURCE_URL=https://gitlab.com/jonashackt/microservice-api-spring-boot \
--param SOURCE_REVISION=main \
--timeout 240s \
--showlog
Now here are some points to consider. First, the name buildpacks-test-pipeline right after tkn pipeline start works as the equivalent of the YAML file's spec: pipelineRef: name: buildpacks-test-pipeline definition.
It also works as a reference to the Pipeline object defined inside the file pipeline.yml, which starts with metadata: name: buildpacks-test-pipeline like this:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
...
Second, defining workspaces isn't trivial either. Luckily there's help: we can define a workspace with the tkn CLI like this: --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc.
Third, using the parameters as intended now becomes easy: simply use --param accordingly. We also use --showlog to stream the Tekton logs directly into the command line (or GitLab CI!), together with --timeout.
Finally using GitLab CI Predefined variables our .gitlab-ci.yml's build stage looks like this:
build-image:
  stage: build
  script:
    - echo "--- Run Tekton Pipeline"
    - tkn pipeline start buildpacks-test-pipeline
        --serviceaccount buildpacks-service-account-gitlab
        --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
        --workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
        --param IMAGE=$CI_REGISTRY_IMAGE
        --param SOURCE_URL=$CI_PROJECT_URL
        --param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
        --timeout 240s
        --showlog
5. Solve the every GitLab CI Pipeline is green problem
This could have been everything we need to do. But right now every GitLab CI pipeline is green, regardless of the Tekton Pipeline's status.
Therefore we remove --showlog and --timeout again, and add --dry-run together with --output yaml. Without --dry-run, the tkn pipeline start command would already create the PipelineRun object, which we then couldn't create again using kubectl:
build-image:
  stage: build
  script:
    - echo "--- Create parameterized Tekton PipelineRun yaml"
    - tkn pipeline start buildpacks-test-pipeline
        --serviceaccount buildpacks-service-account-gitlab
        --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
        --workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
        --param IMAGE=$CI_REGISTRY_IMAGE
        --param SOURCE_URL=$CI_PROJECT_URL
        --param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
        --dry-run
        --output yaml > pipelinerun.yml
Now that we removed --showlog and don't start an actual Tekton pipeline using tkn CLI, we need to create the pipeline run using:
- PIPELINE_RUN_NAME=$(kubectl create -f pipelinerun.yml --output=jsonpath='{.metadata.name}')
Having the temporary variable PIPELINE_RUN_NAME available containing the exact pipeline run id, we can stream the Tekton pipeline logs into our GitLab CI log again:
- tkn pipelinerun logs $PIPELINE_RUN_NAME --follow
Finally we need to check the Tekton pipeline run's status and exit our GitLab CI pipeline accordingly, in order to prevent red Tekton pipelines from resulting in green GitLab CI pipelines. So let's check the status of the Tekton pipeline run first. This can be achieved using --output=jsonpath='{.status.conditions[*].reason}' together with kubectl get pipelineruns:
kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}'
Then we pipe the result into grep, which checks whether Failed appears in the status.conditions.reason field.
Finally we use a bash one-liner (of the form <expression> && command-when-true || command-when-false) to issue the suitable exit command (see https://askubuntu.com/a/892605):
- kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}' | grep Failed && exit 1 || exit 0
Now every GitLab CI Pipeline turns green when the Tekton Pipeline succeeded - and red when the Tekton Pipeline failed. The example project has some logs if you're interested. It's pretty cool to see the Tekton logs inside the GitLab CI logs.

Restart a Kubernetes Job or Pod with a different command

I'm looking for a way to quickly run/restart a Job/Pod from the command line and override the command to be executed in the created container.
For context, I have a Kubernetes Job that gets executed as a part of our deploy process. Sometimes that Job crashes and I need to run certain commands inside the container the Job creates to debug and fix the problem (subsequent Jobs then succeed).
The way I have done this so far is:
Copy the YAML of the Job, save into a file
Clean up the YAML (delete Kubernetes-managed fields)
Change the command: field to tail -f /dev/null (so that the container stays alive)
kubectl apply -f job.yaml && kubectl get all && kubectl exec -ti pod/foobar bash
Run commands inside the container
kubectl delete job/foobar when I am done
This is very tedious. I am looking for a way to do something like the following
kubectl restart job/foobar --command "tail -f /dev/null"
# or even better
kubectl run job/foobar --exec --interactive bash
I cannot use the run command to create a Pod:
kubectl run --image xxx -ti
because the Job I am trying to restart has certain volumeMounts and other configuration I need to reuse. So I would need something like kubectl run --from-config job/foobar.
Is there a way to achieve this or am I stuck with juggling the YAML definition file?
Edit: the Job YAML looks approx. like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: database-migrations
  labels:
    app: myapp
    service: myapp-database-migrations
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app: myapp
        service: myapp-database-migrations
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: registry.example.com/myapp:977b44c9
          command:
            - "bash"
            - "-c"
            - |
              set -e -E
              echo "Running database migrations..."
              do-migration-stuff-here
              echo "Migrations finished at $(date)"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /home/example/myapp/app/config/conf.yml
              name: myapp-config-volume
              subPath: conf.yml
            - mountPath: /home/example/myapp/.env
              name: myapp-config-volume
              subPath: .env
      volumes:
        - name: myapp-config-volume
          configMap:
            name: myapp
      imagePullSecrets:
        - name: k8s-pull-project
The commands you suggested don't exist. Take a look at this reference where you can find all available commands.
Based on that documentation, the task of a Job is to create one or more Pods and keep retrying their execution until the specified number of successful completions is reached; the Job then tracks those successful completions. You cannot simply update the Job, because these fields are not updatable. To do what you want, you should delete the current Job and create it again (see the sketch below).
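For the concrete Job from the question, a minimal delete-and-recreate sketch could look like this. Here job-debug.yaml is a hypothetical edited copy of your Job manifest in which command: has been replaced with something like ["bash", "-c", "tail -f /dev/null"]:

# Remove the failed Job (and its pod), then re-create it from the edited manifest
kubectl delete job database-migrations --ignore-not-found
kubectl apply -f job-debug.yaml

# Wait for the new pod (Jobs label their pods with job-name=<job name>) and open a shell
kubectl wait --for=condition=ready pod -l job-name=database-migrations --timeout=120s
kubectl exec -it "$(kubectl get pod -l job-name=database-migrations -o jsonpath='{.items[0].metadata.name}')" -- bash

# Clean up afterwards
kubectl delete job database-migrations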
I recommend keeping all your configurations in files. If you have a problem with configuring Job commands, practice says you should modify these settings in the YAML and apply it to the cluster - if your deployment crashes, storing the configuration in files gives you a backup.
If you are interested in how to improve this task, you can try the 2 examples described below.
First, I've created several files:
example job (job.yaml):
apiVersion: batch/v1
kind: Job
metadata:
  name: test1
spec:
  template:
    spec:
      containers:
        - name: test1
          image: busybox
          command: ["/bin/sh", "-c", "sleep 300"]
          volumeMounts:
            - name: foo
              mountPath: "/script/foo"
      volumes:
        - name: foo
          configMap:
            name: my-conf
            defaultMode: 0755
      restartPolicy: OnFailure
patch-job.yaml:
spec:
  template:
    spec:
      containers:
        - name: test1
          image: busybox
          command: ["/bin/sh", "-c", "echo 'patching test' && sleep 500"]
and configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-conf
data:
  test: |
    #!/bin/sh
    echo "skrypt test"
If you want to automate this process you can use a plugin.
A plugin is a standalone executable file, whose name begins with kubectl-. To install a plugin, move its executable file to anywhere on your PATH.
There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the kubectl binary. A plugin determines which command path it wishes to implement based on its name.
Here is the file that can replace your job.
kubectl-job:
#!/bin/bash
kubectl patch -f job.yaml -p "$(cat patch-job.yaml)" --dry-run=client -o yaml \
  | kubectl replace --force -f - \
  && kubectl wait --for=condition=ready pod -l job-name=test1 \
  && kubectl exec -it $(kubectl get pod -l job-name=test1 --no-headers -o custom-columns=":metadata.name") -- /bin/sh
This command uses an additional file (patch-job.yaml, see this link) - within it we can put our changes for the Job.
Then you should change the permissions of this file and move it:
sudo chmod +x ./kubectl-job
sudo mv ./kubectl-job /usr/local/bin
It's all done. Right now you can use it.
$ kubectl job
job.batch "test1" deleted
job.batch/test1 replaced
pod/test1-bdxtm condition met
pod/test1-nh2pv condition met
/ #
As you can see, the Job has been replaced (deleted and re-created).
You can also use single-line command, here is the example:
kubectl get job test1 -o json | jq "del(.spec.selector)" | jq "del(.spec.template.metadata.labels)" | kubectl patch -f - --patch '{"spec": {"template": {"spec": {"containers": [{"name": "test1", "image": "busybox", "command": ["/bin/sh", "-c", "sleep 200"]}]}}}}' --dry-run=client -o yaml | kubectl replace --force -f -
With this command you can change the Job, entering the parameters "by hand". Here is the output:
job.batch "test1" deleted
job.batch/test1 replaced
As you can see this solution works as well.

Extracting resource in Argo workflow using "resource" template/step with "get" action and passing to downstream steps?

I'm exploring an easy way to read K8S resources in an Argo workflow. The current documentation focuses mainly on create/patch with conditions (https://argoproj.github.io/argo/examples/#kubernetes-resources), while I'm curious whether it's possible to perform action: get, extract part of the resource state (or the full resource), and pass it downstream as an artifact or result output. Any ideas?
action: get is not a feature available from Argo.
However, it's easy to use kubectl from within a Pod and then send the JSON output to an output parameter. This example uses a Bash script to send the JSON to the result output parameter, but an explicit output parameter or an output artifact are also viable options.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: kubectl-bash-
spec:
  entrypoint: kubectl-example
  templates:
    - name: kubectl-example
      steps:
        - - name: generate
            template: get-workflows
        - - name: print
            template: print-message
            arguments:
              parameters:
                - name: message
                  value: "{{steps.generate.outputs.result}}"
    - name: get-workflows
      script:
        image: bitnami/kubectl:latest
        command: [bash]
        source: |
          some_workflow=$(kubectl get workflows -n argo | sed -n 2p | awk '{print $1;}')
          kubectl get workflow "$some_workflow" -ojson
    - name: print-message
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo result was: '{{inputs.parameters.message}}'"]
Keep in mind that kubectl will run with the permissions of the Workflow's ServiceAccount. Be sure to submit the Workflow using a ServiceAccount which has access to the resource you want to get.
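As a rough illustration (a sketch with assumed names, not part of the original answer), granting such a ServiceAccount read access to Workflows in the argo namespace could look like this:

# Hypothetical Role/RoleBinding letting a "workflow-reader" ServiceAccount get/list Workflows
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-reader
  namespace: argo
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-reader
  namespace: argo
subjects:
  - kind: ServiceAccount
    name: workflow-reader
    namespace: argo
roleRef:
  kind: Role
  name: workflow-reader
  apiGroup: rbac.authorization.k8s.io

You would then submit the Workflow with that ServiceAccount, e.g. argo submit --serviceaccount workflow-reader ... or by setting spec.serviceAccountName.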

How to pass gitlab ci/cd variables to kubernetes(AKS) deployment.yaml

I have a Node.js (Express) project checked into GitLab and it is running in Kubernetes. I know we can set env variables in Kubernetes (on Azure, AKS) in the deployment.yaml file.
How can I pass GitLab CI/CD env variables to the Kubernetes (AKS) deployment.yaml file?
You can develop your own Helm charts. This will pay off in the long run.
Other approach: an easy and versatile way is to put ${MY_VARIABLE} placeholders into the deployment.yaml file. Then, during the pipeline run, in the deployment job, use the envsubst command to substitute the vars with their respective values and deploy the file.
Example deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-${MY_VARIABLE}
  labels:
    app: nginx
spec:
  replicas: 3
  (...)
Example job:
(...)
deploy:
  stage: deploy
  script:
    - envsubst < deployment.yaml > deployment-${CI_JOB_NAME}.yaml
    - kubectl apply -f deployment-${CI_JOB_NAME}.yaml
I'm going to give you an easy solution that may or may not be "the solution".
To do what you want, you could simply create a Kubernetes Secret from your GitLab env variables during CD, before launching your deployment. This allows you to use the secret as env variables inside the deployment.
If you go this route, you will need to think about how to delete/recreate the secret when you want to update the values, so the job stays idempotent.
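A minimal sketch of that idea (the secret name app-env and the GitLab CI/CD variables API_KEY and DB_PASSWORD are illustrative assumptions); the create --dry-run=client | kubectl apply pattern also handles updates, which keeps it idempotent:

deploy:
  stage: deploy
  script:
    # Re-create the secret from GitLab CI/CD variables on every deploy
    - kubectl create secret generic app-env
        --from-literal=API_KEY="$API_KEY"
        --from-literal=DB_PASSWORD="$DB_PASSWORD"
        --dry-run=client -o yaml | kubectl apply -f -
    - kubectl apply -f deployment.yaml

The Deployment can then pick the values up with envFrom: - secretRef: name: app-env.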
Another solution would be to package the thing you are deploying as a Helm chart. This would allow you to have specific variables (called values) that you can use in the templating and override at install/upgrade time.
There are many articles around getting setup with something like this.
Here is one specifically around the context of CI/CD: https://medium.com/#gajus/the-missing-ci-cd-kubernetes-component-helm-package-manager-1fe002aac680.
Another specifically around GitLab: https://medium.com/#yanick.witschi/automated-kubernetes-deployments-with-gitlab-helm-and-traefik-4e54bec47dcf
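To make the Helm idea concrete, here is a rough sketch (the chart path ./chart, the release name myapp and the value image.tag are illustrative assumptions, not taken from the articles above):

deploy:
  stage: deploy
  script:
    # ./chart is a hypothetical Helm chart whose templates reference {{ .Values.image.tag }}
    - helm upgrade --install myapp ./chart --set image.tag="$CI_COMMIT_SHORT_SHA"

helm upgrade --install creates the release on the first run and updates it afterwards, which keeps the deploy job idempotent.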
For future readers. Another way is to use a template file and generate deployment.yaml from the template using envsubst.
Template file:
# strapi-deployment.tmpl
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-deployment
  namespace: strapi
  labels:
    app: strapi
# deployment specifications
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi
  # pod specifications
  template:
    metadata:
      labels:
        app: strapi
    # pod blueprints
    spec:
      containers:
        - name: strapi-container
          image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}
          imagePullPolicy: Always
      imagePullSecrets:
        - name: gitlab-registry-secret
deploy stage in .gitlab-ci.yml
(...)
deploy:
  stage: deploy
  script:
    # deploy resources in the k8s cluster
    - envsubst < strapi-deployment.tmpl > strapi-deployment.yaml
    - kubectl apply -f strapi-deployment.yaml
As defined in image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}, IMAGE_TAG is an environment variable defined in GitLab. envsubst goes through strapi-deployment.tmpl, substitutes any variable defined there, and generates the strapi-deployment.yaml file.
The sed command helped me with this.
In Deployment.yaml, use a placeholder, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  # Other configs bla-bla-bla
spec:
  containers:
    - name: app
      image: my.registry./myapp:<VERSION>
And in .gitlab-ci.yml use sed:
deploy:
  stage: deploy
  image: kubectl-img
  script:
    # - kubectl bla-bla-bla whatever you want to do before the apply command
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" Deployment.yaml
    - kubectl apply -f Deployment.yaml
So the resulting Deployment.yaml will contain the CI_COMMIT_SHORT_SHA value instead of <VERSION>.
Source of the solution