How to restart Tekton PipelineRun, having a pipeline-run.yml defined in git (e.g. using Cloud Native Buildpacks)?

We want to use the official Tekton buildpacks task from Tekton Hub to run our builds using Cloud Native Buildpacks. The buildpacks documentation for Tekton tells us to install the buildpacks & git-clone Tasks from Tekton Hub, create a Secret, a ServiceAccount, a PersistentVolumeClaim and a Tekton Pipeline.
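(As a side note - a rough sketch of that installation step, assuming a reasonably recent tkn CLI; the Tasks can also be applied straight from the catalog with kubectl:)
tkn hub install task git-clone
tkn hub install task buildpacks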
As the configuration is parameterized, we don't want to start our Tekton pipelines using a huge kubectl command but instead configure the PipelineRun using a separate pipeline-run.yml YAML file (as also stated in the docs) containing the references to the ServiceAccount, workspaces, image name and so on:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: buildpacks-test-pipeline-run
spec:
  serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
    - name: cache-workspace
      subPath: cache
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
  params:
    - name: image
      value: <REGISTRY/IMAGE NAME, e.g. gcr.io/test/image> # This defines the name of the output image
Now, running the Tekton pipeline once is no problem using kubectl apply -f pipeline-run.yml. But how can we restart or reuse this YAML-based configuration for all subsequent pipeline runs?

There are some discussions about this topic in the Tekton GitHub project - see tektoncd/pipeline/issues/664 and tektoncd/pipeline/issues/685. Since Tekton is heavily based on Kubernetes, all Tekton objects are Kubernetes Custom Resources, and a PipelineRun represents a single, one-shot execution whose spec is effectively immutable once created. So it is intentional that an already executed PipelineRun cannot be re-run.
But as also discussed in tektoncd/pipeline/issues/685 we can simply use the generateName variable of the metadata field like this:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
    - name: cache-workspace
      subPath: cache
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
  params:
    - name: image
      value: <REGISTRY/IMAGE NAME, e.g. gcr.io/test/image> # This defines the name of the output image
Running kubectl create -f pipeline-run.yml now works multiple times and effectively "restarts" our Pipeline, creating a new PipelineRun object like buildpacks-test-pipeline-run-dxcq6 every time the command is issued.
Keep in mind to delete old PipelineRun objects once in a while, though.
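A cleanup sketch (the label selector relies on the tekton.dev/pipeline label that Tekton adds to each PipelineRun; the --keep flag assumes a reasonably recent tkn CLI):
# delete all runs of this Pipeline
kubectl delete pipelinerun -l tekton.dev/pipeline=buildpacks-test-pipeline
# or keep only the five most recent PipelineRuns in the namespace
tkn pipelinerun delete --keep 5 -f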

The tkn CLI has the flag --use-pipelinerun for the command tkn pipeline start; it reuses the params/workspaces from that PipelineRun and creates a new one, effectively "restarting" it.
So to 'restart' the PipelineRun pr1, which belongs to the Pipeline p1, you would do:
tkn pipeline start p1 --use-pipelinerun pr1
Maybe we should have a more conveniently named command; I kicked off a discussion some time ago, so feel free to contribute feedback:
https://github.com/tektoncd/cli/issues/1091

You cannot restart a PipelineRun.
In Tekton, a PipelineRun is a one-time execution of a Pipeline (which acts as a template), so it is not meant to be restarted; another kubectl apply of a PipelineRun is simply another execution...

Related

Use one Kustomize patch to set environment variables for a deployment and a cron job

Is there an easy way to share a set of environment variables (coming in from various ConfigMaps and Secrets) between the same container in a Deployment and a CronJob?
I'm using Kustomize, but I can't figure out how to approach this, since the patch itself would be a bit different depending on whether it patches a Deployment or a CronJob.
For example, in my deployment I have something like this:
spec:
  containers:
    - name: my_app
      env:
        - name: SOME_SECRET
          valueFrom:
            secretKeyRef:
              name: my_secret
              key: my_key
      envFrom:
        - configMapRef:
            name: some_config_map
I'd like to also have this applied to a CronJob, but since the YAML schema is different between a Deployment and a CronJob, I'm not sure if this is something that is possible/supported.
Thanks!
I don't know if it can be done with Kustomize or not...
But have you considered using Helm instead? You can have dedicated templates for the Deployment and the CronJob and use the same values for both of their container env fields.
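If you do go with Helm, a minimal sketch of that idea (chart and template names are hypothetical) is to define the shared env block once as a named template in templates/_helpers.tpl and include it in both manifests:
{{/* templates/_helpers.tpl - shared container env block */}}
{{- define "myapp.containerEnv" -}}
env:
  - name: SOME_SECRET
    valueFrom:
      secretKeyRef:
        name: my_secret
        key: my_key
envFrom:
  - configMapRef:
      name: some_config_map
{{- end -}}
Then, inside the container definition of both templates/deployment.yaml and templates/cronjob.yaml, pull it in with {{ include "myapp.containerEnv" . | nindent N }}, adjusting N to the container's indentation level in each manifest (the CronJob nests deeper than the Deployment).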

Tekton running pipeline via passing parameter

I have Tekton Pipeline and PipelineRun definitions, but I couldn't manage to run the Pipeline by passing parameters.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-deploy-
  labels:
    tekton.dev/pipeline: build-deploy
spec:
  serviceAccountName: tekton-build-bot
  pipelineRef:
    name: build-deploy
  params:
    - name: registry-address
      value: $(REG_ADDRESS)
    - name: repo-address
      value: $(REPO_ADDRESS)
    - name: repo-name
      value: $(REPO_NAME)
    - name: version
      value: $(VERSION)
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: my-pvc
How can I pass the parameters when running this PipelineRun with the following command: kubectl create -f pipelinerun.yaml?
Example:
value: $(REG_ADDRESS) -> I want to pass the registry address right before running the pipeline instead of hard-coding a constant.
Any ideas?
You cannot pass those parameters when using kubectl create.
There are two alternatives:
Use tkn cli
You can use tkn, a purpose-made CLI for Tekton. Then you can start a run of a Pipeline with, e.g.:
tkn pipeline start build-deploy \
  --param registry-address=yay \
  --param repo-name=nay \
  --workspace name=source,claimName=my-pvc
Initiate pipeline with Trigger
You can setup a Trigger that initiates runs of your Pipeline on certain events, e.g. when you push to Git.
Then your PipelineRun template, with the parameter mapping, is defined using a TriggerTemplate.
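As a rough sketch (resource names are made up, and the exact API version may differ with your Tekton Triggers installation), the TriggerTemplate embeds your PipelineRun and receives the values as template parameters:
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: build-deploy-trigger-template
spec:
  params:
    - name: registry-address
    - name: repo-name
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-deploy-
      spec:
        serviceAccountName: tekton-build-bot
        pipelineRef:
          name: build-deploy
        params:
          - name: registry-address
            value: $(tt.params.registry-address)
          - name: repo-name
            value: $(tt.params.repo-name)
        workspaces:
          - name: source
            persistentVolumeClaim:
              claimName: my-pvc
A TriggerBinding then maps fields from the incoming event payload to these parameters.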

Extracting resource in Argo workflow using "resource" template/step with "get" action and passing to downstream steps?

I'm exploring an easy way to read K8S resources in an Argo workflow. The current documentation focuses mainly on create/patch with conditions (https://argoproj.github.io/argo/examples/#kubernetes-resources), while I'm curious if it's possible to perform "action: get", extract part of the resource state (or the full resource) and pass it downstream as an artifact or result output. Any ideas?
action: get is not a feature available in Argo.
However, it's easy to use kubectl from within a Pod and then send the JSON output to an output parameter. This example uses a Bash script to send the JSON to the result output parameter, but an explicit output parameter or an output artifact are also viable options.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: kubectl-bash-
spec:
  entrypoint: kubectl-example
  templates:
    - name: kubectl-example
      steps:
        - - name: generate
            template: get-workflows
        - - name: print
            template: print-message
            arguments:
              parameters:
                - name: message
                  value: "{{steps.generate.outputs.result}}"
    - name: get-workflows
      script:
        image: bitnami/kubectl:latest
        command: [bash]
        source: |
          some_workflow=$(kubectl get workflows -n argo | sed -n 2p | awk '{print $1;}')
          kubectl get workflow "$some_workflow" -ojson
    - name: print-message
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo result was: '{{inputs.parameters.message}}'"]
Keep in mind that kubectl will run with the permissions of the Workflow's ServiceAccount. Be sure to submit the Workflow using a ServiceAccount which has access to the resource you want to get.
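For example, a Role/RoleBinding along these lines would grant a ServiceAccount read access to Workflows (names and namespace are hypothetical):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-reader
  namespace: argo
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-reader-binding
  namespace: argo
subjects:
  - kind: ServiceAccount
    name: my-workflow-sa   # hypothetical ServiceAccount used to submit the Workflow
    namespace: argo
roleRef:
  kind: Role
  name: workflow-reader
  apiGroup: rbac.authorization.k8s.io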

How to pass gitlab ci/cd variables to kubernetes(AKS) deployment.yaml

I have a Node.js (Express) project checked into GitLab, and it is running in Kubernetes. I know we can set env variables in Kubernetes (on Azure AKS) in the deployment.yaml file.
How can I pass GitLab CI/CD env variables to the Kubernetes (AKS) deployment.yaml file?
You can develop your own Helm charts. This will pay off in the long run.
Another approach: an easy and versatile way is to put ${MY_VARIABLE} placeholders into the deployment.yaml file. Then, during the pipeline run, in the deploy job, use the envsubst command to substitute the variables with their respective values and deploy the file.
Example deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-${MY_VARIABLE}
  labels:
    app: nginx
spec:
  replicas: 3
  (...)
Example job:
(...)
deploy:
  stage: deploy
  script:
    - envsubst < deployment.yaml > deployment-${CI_JOB_NAME}.yaml
    - kubectl apply -f deployment-${CI_JOB_NAME}.yaml
I'm going to give you an easy solution that may or may not be "the solution".
To do what you want, you could simply add your GitLab env variables to a Kubernetes Secret during the CD job, before launching your deployment. This lets you consume them as env variables inside the Deployment.
If you go this route, you will need to think about how to delete/recreate the Secret when you want to update its values, so the job stays idempotent.
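A minimal sketch of that approach (Secret and variable names are hypothetical), assuming MY_SECRET is defined as a CI/CD variable in GitLab:
# in the deploy job of .gitlab-ci.yml: (re)create the Secret idempotently from the CI variable
- kubectl create secret generic app-env --from-literal=MY_SECRET="$MY_SECRET" --dry-run=client -o yaml | kubectl apply -f -
The Deployment can then consume it, e.g. with:
envFrom:
  - secretRef:
      name: app-env
The --dry-run=client ... | kubectl apply -f - pattern also addresses the update/idempotence concern above, since apply updates the Secret if it already exists.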
Another solution would be to package the thing you are deploying as a Helm chart. This would allow you to have specific variables (called values) that you can use in the templating and override at install/upgrade time.
There are many articles about getting set up with something like this.
Here is one specifically about the context of CI/CD: https://medium.com/#gajus/the-missing-ci-cd-kubernetes-component-helm-package-manager-1fe002aac680
Another one specifically about GitLab: https://medium.com/#yanick.witschi/automated-kubernetes-deployments-with-gitlab-helm-and-traefik-4e54bec47dcf
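A minimal sketch of the values approach (chart layout and value names are hypothetical):
# values.yaml
image:
  repository: registry.gitlab.com/mygroup/myapp
  tag: latest
# templates/deployment.yaml - the image line in the container spec
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
# deploy job in .gitlab-ci.yml
- helm upgrade --install myapp ./chart --set image.tag=${CI_COMMIT_SHORT_SHA}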
For future readers: another way is to use a template file and generate deployment.yaml from the template using envsubst.
Template file:
# template/deployment.tmpl
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-deployment
  namespace: strapi
  labels:
    app: strapi
# deployment specifications
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi
  # pod specifications
  template:
    metadata:
      labels:
        app: strapi
    # pod blueprints
    spec:
      containers:
        - name: strapi-container
          image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}
          imagePullPolicy: Always
      imagePullSecrets:
        - name: gitlab-registry-secret
deploy stage in .gitlab-ci.yml
(...)
deploy:
  stage: deploy
  script:
    # deploy resources in k8s cluster
    - envsubst < strapi-deployment.tmpl > strapi-deployment.yaml
    - kubectl apply -f strapi-deployment.yaml
As defined there (image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}), IMAGE_TAG is an environment variable defined in GitLab. envsubst goes through strapi-deployment.tmpl, substitutes any variables defined there, and generates the strapi-deployment.yaml file.
The sed command helped me with this:
In Deployment.yaml, use a placeholder, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  # Other configs bla-bla-bla
spec:
  containers:
    - name: app
      image: my.registry./myapp:<VERSION>
And in .gitlab-ci.yml use sed:
deploy:
  stage: deploy
  image: kubectl-img
  script:
    # - kubectl bla-bla-bla whatever you want to do before the apply command
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" Deployment.yaml
    - kubectl apply -f Deployment.yaml
So the resulting Deployment.yaml will contain the CI_COMMIT_SHORT_SHA value instead of <VERSION>.
Source of the solution

How to set default value for Kubernetes ConfigMap (or does Helm overwrite ConfigMap values?)

Use case:
I want to be able to re-run a job from where the first job left off. I am using Helm to deploy into Kubernetes.
I have the idea of saving the state of the first job in a ConfigMap. The ConfigMap yaml defining the ConfigMap is packaged up with the job and both are deployed at the same time with Helm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: NameOfMyConfigMap
data:
  someKey: someValue
  MY_STATE: state # <---- See below as to whether this should be included or not
The job is run with an ENV variable set from the ConfigMap:
env:
  - name: MY_STATE
    valueFrom:
      configMapKeyRef:
        name: NameOfMyConfigMap
        key: MY_STATE
The job runs a script that checks whether $MY_STATE is set. If it is not set, the job is being run for the first time; otherwise the job shuts down the already running first job, saves the first job's state into the MY_STATE ConfigMap key and launches the job again using the saved state.
If I don't declare the MY_STATE key in the initial ConfigMap definition, then the first run of the job will fail, as the env definition above cannot find the ConfigMap key.
If I do declare the value (MY_STATE: "") in the ConfigMap definition, then the first deployment will work. However, if I re-deploy the job with helm upgrade, won't the value I enter in the definition overwrite any existing value in the existing ConfigMap?
What is the best method of storing state in between runs of the same job?
Have you tried using volumes? In this case it should not be overwritten when using helm upgrade.
Could an example like this work? (From
https://groups.google.com/forum/#!msg/kubernetes-users/v2806ezEdPk/1geJCO8-AQAJ)
apiVersion: batch/v1
kind: Job
metadata:
  name: keystore-configmap-job
spec:
  template:
    metadata:
      name: keystore-configmap
    spec:
      containers:
        - name: keystore
          image: ubuntu
          volumeMounts:
            - name: keystore-configmap-volume
              mountPath: /config-base64
          command: [ "sh", "-c", "cat /config-base64/keystore.jks | base64 --decode | sha256sum" ]
      restartPolicy: Never
      volumes:
        - name: keystore-configmap-volume
          configMap:
            name: keystore-configmap