We are running Jenkins in Kubernetes via the official Helm chart.
Every pipeline has the same agent definition in place.
pipeline {
    agent {
        kubernetes {
            inheritFrom 'default'
            yamlFile 'automation/Jenkins/KubernetesPod.yaml'
        }
    }
The KubernetesPod.yaml looks like this.
metadata:
  labels:
    job-name: cicd_application
spec:
  containers:
    - name: operations
      image: xxxxx.dkr.ecr.us-west-1.amazonaws.com/operations:0.1.3
      command:
        - sleep
      args:
        - 99d
This works fine. Our job DSL looks like this and everything just works.
steps {
    container('operations') {
The problem comes in when that operations container bumps from 0.1.3 to 0.1.4. I now have to create a Merge Request against 40 pipelines.
Is there a way to:
- pull this file in from another repo, or
- define and refer to it in JCasC (see the sketch below)?
Ideally, when we bump the image (it's things like TF, Ansible, etc.) we can just do it all at once.
Thanks.
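A note on the JCasC option mentioned above: the Jenkins Kubernetes plugin lets pod templates be declared in configuration-as-code, and a template declared there can be picked up with inheritFrom. A minimal sketch, assuming the official Helm chart's controller.JCasC.configScripts values and a hypothetical template name operations-template (exact keys depend on the chart version, and the chart's default cloud definition may need to be merged rather than duplicated):

controller:
  JCasC:
    configScripts:
      pod-templates: |
        jenkins:
          clouds:
            - kubernetes:
                name: "kubernetes"
                templates:
                  - name: "operations-template"
                    containers:
                      - name: "operations"
                        image: "xxxxx.dkr.ecr.us-west-1.amazonaws.com/operations:0.1.4"
                        command: "sleep"
                        args: "99d"

With something like that in place, each Jenkinsfile could use inheritFrom 'operations-template' instead of a per-repo yamlFile, so bumping the tag becomes a single change to the chart values.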
Is there an easy way to share a set of environment variables (coming in from various config maps and secrets) between the same container in a deployment and a cron job?
I'm using Kustomize, but I can't figure out how to approach this, since the patch itself would be a bit different depending on whether it targets a deployment or a cron job.
For example, in my deployment I have something like this:
spec:
  containers:
    - name: my_app
      env:
        - name: SOME_SECRET
          valueFrom:
            secretKeyRef:
              name: my_secret
              key: my_key
      envFrom:
        - configMapRef:
            name: some_config_map
I'd like to also have this applied to a cron job, but since the YAML schema is different between a deployment and a cron job I'm not sure if this is something that is possible/supported.
Thanks!
I don't know if it can be done with Kustomize or not...
But have you considered using Helm instead? You can have dedicated templates for the deployment and the cron job and feed the same values into both of their containers' env fields...
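A minimal sketch of that Helm approach, assuming a chart with hypothetical templates/deployment.yaml and templates/cronjob.yaml: the shared env block lives in a named template in _helpers.tpl and is included from both manifests (names below reuse the question's placeholders).

# templates/_helpers.tpl
{{- define "myapp.sharedEnv" -}}
env:
  - name: SOME_SECRET
    valueFrom:
      secretKeyRef:
        name: my_secret
        key: my_key
envFrom:
  - configMapRef:
      name: some_config_map
{{- end }}

# templates/deployment.yaml (container excerpt; the same include goes into templates/cronjob.yaml)
containers:
  - name: my_app
    image: {{ .Values.image }}
    {{- include "myapp.sharedEnv" . | nindent 4 }}

Because the block is rendered in one place, a change to the secrets or config maps only has to be made once.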
We want to use the official Tekton buildpacks task from Tekton Hub to run our builds using Cloud Native Buildpacks. The buildpacks documentation for Tekton tells us to install the buildpacks and git-clone Tasks from Tekton Hub, create a Secret, a ServiceAccount, a PersistentVolumeClaim and a Tekton Pipeline.
As the configuration is parameterized, we don't want to start our Tekton pipelines using a huge kubectl command but instead configure the PipelineRun using a separate pipeline-run.yml YAML file (as also stated in the docs) containing the references to the ServiceAccount, workspaces, image name and so on:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: buildpacks-test-pipeline-run
spec:
  serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
    - name: cache-workspace
      subPath: cache
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
  params:
    - name: image
      value: <REGISTRY/IMAGE NAME, eg gcr.io/test/image > # This defines the name of output image
Now running the Tekton pipeline once is no problem using kubectl apply -f pipeline-run.yml. But how can we restart or reuse this YAML-based configuration for all the other pipeline runs?
There are some discussions about that topic in the Tekton GitHub project - see tektoncd/pipeline/issues/664 and tektoncd/pipeline/issues/685. Since Tekton is heavily based on Kubernetes, all Tekton objects are Kubernetes custom resources, and a PipelineRun is in fact immutable once it has run. So it is intended that an already executed PipelineRun cannot be re-run.
But as also discussed in tektoncd/pipeline/issues/685, we can simply use the generateName field of the metadata block like this:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
    - name: cache-workspace
      subPath: cache
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
  params:
    - name: image
      value: <REGISTRY/IMAGE NAME, eg gcr.io/test/image > # This defines the name of output image
Running kubectl create -f pipeline-run.yml will now work multiple times and kind of "restart" our Pipeline, creating a new PipelineRun object like buildpacks-test-pipeline-run-dxcq6 every time the command is issued.
Keep in mind to delete old PipelineRun objects once in a while though.
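One hedged way to do that cleanup, assuming the tekton.dev/pipeline label that Tekton puts on every PipelineRun (adjust the selector and namespace to your setup; tkn pipelinerun delete can be used for the same purpose):

kubectl delete pipelinerun -l tekton.dev/pipeline=buildpacks-test-pipeline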
The tkn CLI has a --use-pipelinerun switch for the tkn pipeline start command. What it does is reuse the params/workspaces from that PipelineRun and create a new one, so effectively "restarting" it.
So to "restart" the PipelineRun pr1, which belongs to the pipeline p1, you would do:
tkn pipeline start p1 --use-pipelinerun pr1
Maybe we should have a more clearly named command; I kicked off a discussion some time ago, feel free to contribute feedback:
https://github.com/tektoncd/cli/issues/1091
You cannot restart a pipelinerun.
In Tekton, a PipelineRun is a one-time execution of a Pipeline (which acts as a template), so it is not meant to be restarted; another kubectl apply of a PipelineRun is simply another execution...
I have created a mock.Dockerfile which contains just one line.
FROM eu.gcr.io/some-org/mock-service:0.2.0
With that config and a reference to it in the build section, skaffold builds that Dockerfile using the private GCR registry. However, if I remove that Dockerfile, skaffold does not build it, and when starting skaffold it only loads the images that are referenced in the build section (public images, like postgres, work as well). So in a local Kubernetes setup like minikube, this results in an
ImagePullBackOff
Failed to pull image "eu.gcr.io/some-org/mock-service:0.2.0": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials
So basically when I create a one-line Dockerfile and include it, skaffold builds that image and loads it into minikube. Now it is possible to change the minikube config so that requests to GCR succeed, but the goal is that developers don't have to change their minikube config...
Is there any other way to get that image loaded into Minikube, without changing the config and without that one-line Dockerfile?
skaffold.yaml:
apiVersion: skaffold/v2beta8
kind: Config
metadata:
  name: some-service
build:
  artifacts:
    - image: eu.gcr.io/some-org/some-service
      docker:
        dockerfile: Dockerfile
    - image: eu.gcr.io/some-org/mock-service
      docker:
        dockerfile: mock.Dockerfile
  local: { }
profiles:
  - name: mock
    activation:
      - kubeContext: (minikube|kind-.*|k3d-(.*))
    deploy:
      helm:
        releases:
          - name: postgres
            chartPath: test/postgres
          - name: mock-service
            chartPath: test/mock-service
          - name: skaffold-some-service
            chartPath: helm/some-service
            artifactOverrides:
              image: eu.gcr.io/some-org/some-service
            setValues:
              serviceAccount.create: true
Although GKE comes pre-configured to pull from registries within the same project, Kubernetes clusters generally require special configuration at the pod level to pull from private registries. It's a bit involved.
Fortunately minikube introduced a registry-creds add-on that will configure the minikube instance with appropriate credentials to pull images.
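A hedged sketch of how the add-on is typically switched on inside a developer's minikube instance (the configure step interactively asks for GCR credentials, for example the path to a service-account JSON key; the exact prompts vary by minikube version):

minikube addons configure registry-creds
minikube addons enable registry-creds

That keeps the GCR credentials inside minikube itself, so the skaffold.yaml and the developers' kube contexts stay unchanged.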
Currently using KubernetesPodOperator with the default image pull policy (IfNotPresent). We will be using static tag IDs for different environments: for example, in the dev env the tag will be dev, in the qa env the tag will be qa, and so on. The issue is when there is actually a new version (different sha digest) of the image but the same tag ID. I can change the image pull policy to Always, but then it will download the image all the time. The Airflow DAG contains several tasks that use KubernetesPodOperator with the same image, and I don't want the image downloaded for every task run.
Is there an image pull policy that checks whether the sha digest (instead of the tag ID) already exists locally and downloads the image only if it does not?
In the container's image definition you can specify the sha digest to be sure it pulls/uses the correct one. For example:
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
    - name: ubuntu
      image: ubuntu@sha256:bc2f7250f69267c9c6b66d7b6a81a54d3878bb85f1ebb5f951c896d13e6ba537
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
or for Airflow KubernetesPodOperator you can use:
example = KubernetesPodOperator(
    image="ubuntu@sha256:bc2f7250f69267c9c6b66d7b6a81a54d3878bb85f1ebb5f951c896d13e6ba537",
    task_id="example_task",
    ...
)
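If the digest behind a given tag is not at hand, one way to look it up (assuming the tag has been pulled locally with Docker; the image name my-image:dev is just a placeholder) is:

docker pull my-image:dev
docker inspect --format '{{index .RepoDigests 0}}' my-image:dev

The printed repository digest can then be pinned in the image field as shown above.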
I want to create and remove a job using Google Cloud Build. Here's my configuration, which builds my Docker image and pushes it to GCR.
# cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/xyz/abc:latest']
Now I want to create a job; I want to run something like
kubectl create -R -f ./kubernetes
which creates the jobs defined in the kubernetes folder.
I know Cloud Build has - name: 'gcr.io/cloud-builders/kubectl', but I can't figure out how to use it. Plus, how can I authenticate it to run kubectl commands? How can I use service_key.json?
I wasn't able to connect and get cluster credentials at first. Here's what I did:
Go to IAM, add another Role to xyz#cloudbuild.gserviceaccount.com. I used Project Editor.
Wrote this in cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
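For the kubectl builder to know which cluster to talk to, it typically also needs the compute zone and cluster name passed as environment variables; a sketch of the full step, with placeholder values for both:

- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'   # placeholder zone
    - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster' # placeholder cluster name

With the Cloud Build service account granted access to the cluster (as in the IAM step above), the builder fetches the cluster credentials itself, so no service_key.json has to be checked in.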