With Ansible: is it possible to patch resources with JSON or YAML snippets? I basically want to accomplish the same thing as kubectl patch <Resource> <Name> --type='merge' -p '{"spec":{"test":"hello"}}', to append/modify resource specs.
https://docs.ansible.com/ansible/latest/modules/k8s_module.html
Is it possible to do this with the k8s Ansible module? The docs say that if a resource already exists and state: present is set, it will be patched; however, it isn't patching as far as I can tell.
Thanks
Yes, you can provide just a patch, and if the resource already exists the module should send a strategic-merge-patch (or a plain merge-patch if it's a custom resource). Here's an example playbook that creates and modifies a ConfigMap:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    cm: "{{ lookup('k8s',
        api_version='v1',
        kind='ConfigMap',
        namespace='default',
        resource_name='test') }}"
  tasks:
    - name: Create the ConfigMap
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            hello: world
    - name: We will see the ConfigMap defined above
      debug:
        var: cm
    - name: Add a field to the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            added: field
    - name: The same ConfigMap as before, but with an extra field in data
      debug:
        var: cm
    - name: Change a field in the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            hello: everyone
    - name: The added field is unchanged, but the hello field has a new value
      debug:
        var: cm
    - name: Delete the added field in the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            added: null
    - name: The hello field is unchanged, but the added field is now gone
      debug:
        var: cm
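To try it out, a minimal sketch, assuming the playbook is saved as configmap-patch.yml (the name is illustrative; the k8s module and lookup also need the openshift Python client on the Ansible controller):

$ pip install openshift
$ ansible-playbook configmap-patch.yml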
I am trying to use a container image from a private container registry in one of my tasks.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
    - name: echo
      image: de.icr.io/reporting/status:latest
      command:
        - echo
      args:
        - "Hello World"
But when I run this task within an IBM Cloud Delivery Pipeline (Tekton), the image cannot be pulled:
message: 'Failed to pull image "de.icr.io/reporting/status:latest": rpc error: code = Unknown desc = failed to pull and unpack image "de.icr.io/reporting/status:latest": failed to resolve reference "de.icr.io/reporting/status:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized'
I read several tutorials and blogs, but so far couldn't find a solution. This is probably what I need to accomplish, so that the IBM Cloud Delivery Pipeline (Tekton) can access my private container registry: https://tekton.dev/vault/pipelines-v0.15.2/auth/#basic-authentication-docker
So far I have created a secret.yaml file in my .tekton directory:
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    tekton.dev/docker-0: https://de.icr.io # Described below
type: kubernetes.io/basic-auth
stringData:
  username: $(params.DOCKER_USERNAME)
  password: $(params.DOCKER_PASSWORD)
I am also creating a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default-runner
secrets:
  - name: basic-user-pass
And in my trigger definition I am telling the pipeline to use the default-runner ServiceAccount:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        serviceAccountName: default-runner
        pipelineRef:
          name: hello-goodbye
I found a way to pass my API key to my IBM Cloud Delivery Pipeline (Tekton) and the tasks in my pipeline are now able to pull container images from my private container registry.
This is my working trigger template:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  params:
    - name: pipeline-dockerconfigjson
      description: dockerconfigjson for images used in .pipeline-config.yaml
      default: "eyJhdXRocyI6e319" # i.e. {"auths":{}} base64-encoded
  resourcetemplates:
    - apiVersion: v1
      kind: Secret
      data:
        .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
      metadata:
        name: pipeline-pull-secret
      type: kubernetes.io/dockerconfigjson
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        pipelineRef:
          name: hello-goodbye
        podTemplate:
          imagePullSecrets:
            - name: pipeline-pull-secret
It first defines a parameter called pipeline-dockerconfigjson:
params:
  - name: pipeline-dockerconfigjson
    description: dockerconfigjson for images used in .pipeline-config.yaml
    default: "eyJhdXRocyI6e319" # i.e. {"auths":{}} base64-encoded
The second part turns the value passed into this parameter into a Kubernetes secret:
- apiVersion: v1
  kind: Secret
  data:
    .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
  metadata:
    name: pipeline-pull-secret
  type: kubernetes.io/dockerconfigjson
And this secret is then pushed into the imagePullSecrets field of the PodTemplate.
The last step is to populate the parameter with a valid dockerconfigjson and this can be accomplished within the Delivery Pipeline UI (IBM Cloud UI).
To create a valid dockerconfigjson for my registry de.icr.io I had to use the following kubectl command:
kubectl create secret docker-registry mysecret \
  --dry-run=client \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY> \
  --docker-email=<MY_EMAIL> \
  -o yaml
and then within the output there is a valid base64 encoded .dockerconfigjson field.
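If you prefer, you can extract just that field instead of copying it out of the YAML by hand, e.g. with a jsonpath expression (a sketch of the same command as above; note the dot in the key name has to be escaped):

kubectl create secret docker-registry mysecret \
  --dry-run=client \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY> \
  --docker-email=<MY_EMAIL> \
  -o jsonpath='{.data.\.dockerconfigjson}'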
Please also note that there is a public catalog of sample tekton tasks:
https://github.com/open-toolchain/tekton-catalog/tree/master/container-registry
More on IBM Cloud Continuous Delivery Tekton:
https://www.ibm.com/cloud/blog/ibm-cloud-continuous-delivery-tekton-pipelines-tools-and-resources
Tektonized Toolchain Templates: https://www.ibm.com/cloud/blog/toolchain-templates-with-tekton-pipelines
The secret you created (type basic-auth) would not allow the kubelet to pull your Pods' images.
The docs mention those secrets are meant to provision some configuration inside your tasks' container runtime, which may then be used during your build jobs when pulling or pushing images to registries.
The kubelet, however, needs a different kind of configuration (e.g. a dockercfg-type secret) to authenticate when pulling images / starting containers.
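A minimal sketch of that kubelet-compatible setup, reusing the credentials from the question (the secret name registry-pull-secret is illustrative):

kubectl create secret docker-registry registry-pull-secret \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY>

# attach it to the ServiceAccount so the kubelet uses it when pulling images
kubectl patch serviceaccount default-runner \
  -p '{"imagePullSecrets": [{"name": "registry-pull-secret"}]}'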
We're working on the integration of GitLab and Tekton / OpenShift Pipelines via Webhooks and Tekton Triggers. We followed this example project and crafted our EventListener that ships with the needed Interceptor, TriggerBinding and TriggerTemplate as gitlab-push-listener.yml:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: gitlab-listener
spec:
  serviceAccountName: tekton-triggers-example-sa
  triggers:
    - name: gitlab-push-events-trigger
      interceptors:
        - name: "verify-gitlab-payload"
          ref:
            name: "gitlab"
            kind: ClusterInterceptor
          params:
            - name: secretRef
              value:
                secretName: "gitlab-secret"
                secretKey: "secretToken"
            - name: eventTypes
              value:
                - "Push Hook"
      bindings:
        - name: gitrevision
          value: $(body.checkout_sha)
        - name: gitrepositoryurl
          value: $(body.repository.git_http_url)
      template:
        spec:
          params:
            - name: gitrevision
            - name: gitrepositoryurl
            - name: message
              description: The message to print
              default: This is the default message
            - name: contenttype
              description: The Content-Type of the event
          resourcetemplates:
            - apiVersion: tekton.dev/v1beta1
              kind: PipelineRun
              metadata:
                generateName: buildpacks-test-pipeline-run-
                #name: buildpacks-test-pipeline-run
              spec:
                serviceAccountName: buildpacks-service-account-gitlab # Only needed if you set up authorization
                pipelineRef:
                  name: buildpacks-test-pipeline
                workspaces:
                  - name: source-workspace
                    subPath: source
                    persistentVolumeClaim:
                      claimName: buildpacks-source-pvc
                  - name: cache-workspace
                    subPath: cache
                    persistentVolumeClaim:
                      claimName: buildpacks-source-pvc
                params:
                  - name: IMAGE
                    value: registry.gitlab.com/jonashackt/microservice-api-spring-boot # This defines the name of output image
                  - name: SOURCE_URL
                    value: https://gitlab.com/jonashackt/microservice-api-spring-boot
                  - name: SOURCE_REVISION
                    value: main
As stated in the example (and in the Tekton docs), we also created and applied (via kubectl apply) a ServiceAccount named tekton-triggers-example-sa, plus a RoleBinding and a ClusterRoleBinding:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-triggers-example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: triggers-example-eventlistener-binding
subjects:
  - kind: ServiceAccount
    name: tekton-triggers-example-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers-eventlistener-roles
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: triggers-example-eventlistener-clusterbinding
subjects:
  - kind: ServiceAccount
    name: tekton-triggers-example-sa
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers-eventlistener-clusterroles
After installing our EventListener via kubectl apply -f gitlab-push-listener.yml, neither a push event from GitLab nor even a manual curl triggers a PipelineRun as intended. Looking into the logs of the el-gitlab-listener Deployment and Pod, we see the following error:
kubectl logs el-gitlab-listener-69f4c5c8f8-t4zdj
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:116","msg":"Successfully created the logger."}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:117","msg":"Logging level set to: info"}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:79","msg":"Fetch GitHub commit ID from kodata failed","error":"\"KO_DATA_PATH\" does not exist or is empty"}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","logger":"eventlistener","caller":"logging/logging.go:46","msg":"Starting the Configuration eventlistener","knative.dev/controller":"eventlistener"}
{"level":"info","ts":"2021-11-30T09:38:32.445Z","logger":"eventlistener","caller":"profiling/server.go:64","msg":"Profiling enabled: false","knative.dev/controller":"eventlistener"}
{"level":"fatal","ts":"2021-11-30T09:38:32.451Z","logger":"eventlistener","caller":"eventlistenersink/main.go:104","msg":"Error reading ConfigMap config-observability-triggers","knative.dev/controller":"eventlistener","error":"configmaps \"config-observability-triggers\" is forbidden: User \"system:serviceaccount:default:tekton-triggers-example-sa\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"tekton-triggers-eventlistener-clusterroles\" not found, clusterrole.rbac.authorization.k8s.io \"tekton-triggers-eventlistener-roles\" not found]","stacktrace":"main.main\n\t/opt/app-root/src/go/src/github.com/tektoncd/triggers/cmd/eventlistenersink/main.go:104\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:203"}
The OpenShift Pipelines documentation does not directly document it, but if you skim the docs, especially the Triggers section, you might notice that no ServiceAccount is created anywhere, yet one is used by every Trigger component. It's called pipeline. Simply run kubectl get serviceaccount to see it:
$ kubectl get serviceaccount
NAME       SECRETS   AGE
default    2         49d
deployer   2         49d
pipeline   2         48d
This pipeline ServiceAccount is ready to use inside your Tekton Triggers & EventListeners. So your gitlab-push-listener.yml can directly reference it:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: gitlab-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: gitlab-push-events-trigger
      interceptors:
      ...
You can simply delete your manually created ServiceAccount tekton-triggers-example-sa. It's not needed in OpenShift Pipelines! Now your Tekton Triggers EventListener should work and trigger your Tekton Pipelines as defined.
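A minimal sketch of the cleanup and re-deployment, using the resource and file names from above:

$ kubectl delete serviceaccount tekton-triggers-example-sa
$ kubectl delete rolebinding triggers-example-eventlistener-binding
$ kubectl delete clusterrolebinding triggers-example-eventlistener-clusterbinding
# re-apply the EventListener, now referencing the pipeline ServiceAccount
$ kubectl apply -f gitlab-push-listener.yml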
I'm using GKE and Helm v3 and I'm trying to create/reserve a static IP address using ComputeAddress and then to create DNS A record with the previously reserved IP address.
Reserve IP address
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: ip-address
  annotations:
    cnrm.cloud.google.com/project-id: project-id
spec:
  location: global
Get reserved IP address
kubectl get computeaddress ip-address -o jsonpath='{.spec.address}'
Create DNS A record
apiVersion: dns.cnrm.cloud.google.com/v1beta1
kind: DNSRecordSet
metadata:
  name: dns-record-a
  annotations:
    cnrm.cloud.google.com/project-id: project-id
spec:
  name: "{{ .Release.Name }}.example.com"
  type: "A"
  ttl: 300
  managedZoneRef:
    external: example-com
  rrdatas:
    - <IP-ADDRESS-VALUE>   <----
Is there a way to reference the IP address value, created by ComputeAddress, in the DNSRecordSet resource?
Basically, I need something similar to the output values in Terraform.
Thanks!
Currently it is not possible to assign anything other than a literal string (an IP address) to the rrdatas field, so you are not able to "call" another resource like the IP address created before. You need to put the value in the x.x.x.x format.
It's interesting that something similar exists for GKE Ingress, where we can reference a reserved IP address and a managed SSL certificate using annotations:
annotations:
  kubernetes.io/ingress.global-static-ip-name: my-static-address
I have no idea why there is not something like this for DNSRecordSet resource. Hopefully, GKE will introduce it in the future.
Instead of running two commands, I've found a workaround by using Helm's hooks.
First, we need to define a Job as a post-install and post-upgrade hook, which will pick up the reserved IP address once it becomes ready and then create the appropriate DNSRecordSet resource with it. The script which retrieves the IP address and the manifest for the DNSRecordSet are passed in through a ConfigMap and mounted into the Pod.
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-dns-record-set-hook"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-dns-record-set-hook"
    spec:
      restartPolicy: OnFailure
      containers:
        - name: post-install-job
          image: alpine:latest
          command: ['sh', '-c', '/opt/run-kubectl-command-to-set-dns.sh']
          volumeMounts:
            - name: volume-dns-record-scripts
              mountPath: /opt
            - name: volume-writable
              mountPath: /mnt
      volumes:
        - name: volume-dns-record-scripts
          configMap:
            name: dns-record-scripts
            defaultMode: 0777
        - name: volume-writable
          emptyDir: {}
ConfigMap definition with the script and manifest file:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: dns-record-scripts
data:
  run-kubectl-command-to-set-dns.sh: |-
    # install kubectl command
    apk add curl && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/linux/amd64/kubectl && \
    chmod u+x kubectl && \
    mv kubectl /bin/kubectl
    # wait for reserved IP address to be ready
    kubectl wait --for=condition=Ready computeaddress/ip-address
    # get reserved IP address
    IP_ADDRESS=$(kubectl get computeaddress ip-address -o jsonpath='{.spec.address}')
    echo "Reserved address: $IP_ADDRESS"
    # update IP_ADDRESS in manifest
    sed "s/##IP_ADDRESS##/$IP_ADDRESS/g" /opt/dns-record.yml > /mnt/dns-record.yml
    # create DNS record
    kubectl apply -f /mnt/dns-record.yml
  dns-record.yml: |-
    apiVersion: dns.cnrm.cloud.google.com/v1beta1
    kind: DNSRecordSet
    metadata:
      name: dns-record-a
      annotations:
        cnrm.cloud.google.com/project-id: project-id
    spec:
      name: "{{ .Release.Name }}.example.com"
      type: A
      ttl: 300
      managedZoneRef:
        external: example-com
      rrdatas:
        - "##IP_ADDRESS##"
And, finally, for the (default) ServiceAccount to be able to retrieve the IP address and create/update the DNSRecordSet, we need to assign some roles to it:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dnsrecord-setter
rules:
  - apiGroups: ["compute.cnrm.cloud.google.com"]
    resources: ["computeaddresses"]
    verbs: ["get", "list"]
  - apiGroups: ["dns.cnrm.cloud.google.com"]
    resources: ["dnsrecordsets"]
    verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dnsrecord-setter
subjects:
  - kind: ServiceAccount
    name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dnsrecord-setter
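To check that the hook did its job after an install or upgrade, a quick sketch (my-release and ./chart are placeholders for your release name and chart path):

$ helm upgrade --install my-release ./chart
$ kubectl get dnsrecordset dns-record-a -o jsonpath='{.spec.rrdatas[0]}'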
I have a need to define a standalone patch as YAML.
More specifically, I want to do the following:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "registry-my-registry"}]}'
The catch is I can't use kubectl patch. I'm using a GitOps workflow with flux, and that resource I want to patch is a default resource created outside of flux.
In other terms, I need to do the same thing as the command above but with kubectl apply only:
kubectl apply -f patch.yaml
I wasn't able to figure out if you can define such a patch.
The key bit is that I can't predict the name of the default secret token on a new cluster (as the name is random, i.e. default-token-uudge)
Fields set and deleted from Resource Config are merged into Resources by kubectl apply:
- If a Resource already exists, Apply updates the Resource by merging the local Resource Config into the remote Resource
- Fields removed from the Resource Config will be deleted from the remote Resource
You can learn more about Kubernetes Field Merge Semantics.
If your limitation is not knowing the secret's default-token-xxxxx name, no problem: just keep that field out of your yaml.
As long as the yaml has enough fields to identify the target resource (name, kind, namespace) it will add/edit the fields you set.
I created a cluster (minikube in this example, but it could be any) and retrieved the current default serviceAccount:
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "330"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
  - name: default-token-j6zx2
Then, we create a yaml file with the contents we want to add:
$ cat add-image-pull-secrets.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
  - name: registry-my-registry
Now we apply and verify:
$ kubectl apply -f add-image-pull-secrets.yaml
serviceaccount/default configured

$ kubectl get serviceaccount default -o yaml
apiVersion: v1
imagePullSecrets:
  - name: registry-my-registry
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","imagePullSecrets":[{"name":"registry-my-registry"}],"kind":"ServiceAccount","metadata":{"annotations":{},"name":"default","namespace":"default"}}
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "2382"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
  - name: default-token-j6zx2
As you can see, the imagePullSecrets field was added to the resource.
I hope it fits your needs. If you have any further questions let me know in the comments.
Let's say your service account YAML looks like the one below:
$ kubectl get sa demo -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
  - name: default-token-uudge
Now, you want to add or change the imagePullSecrets for that service account. To do so, edit the YAML file and add imagePullSecrets.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
  - name: default-token-uudge
imagePullSecrets:
  - name: myregistrykey
And finally, apply the changes:
$ kubectl apply -f service-account.yaml
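To confirm the merge worked, a quick check using the names from the example above:

$ kubectl get sa demo -o jsonpath='{.imagePullSecrets[0].name}'
myregistrykey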
Given the following kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:${PASSWORD}@domain.de
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
The only solution I got to work so far was replacing the ${PASSWORD} with sed, but I would prefer a kustomize solution.
As @Jonas already suggested, you should consider using a Secret. It's nicely described in this article.
I want to use kubectl apply -k and somehow pass a value for
${PASSWORD} which I can set from my build script.
I guess your script can store the generated password as a variable or save it to some file. You can easily create a Secret as follows:
$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
or from a file:
$ kustomize edit add secret sl-demo-app --from-file=file/path
As you can read in the mentioned article:
These commands will modify your kustomization.yaml and add a
SecretGenerator inside it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - custom-env.yaml
  - replica-and-rollout-strategy.yaml
secretGenerator:
  - literals:
      - db-password=12345
    name: sl-demo-app
    type: Opaque
Running kustomize build in your project directory will create, among others, the following Secret:
apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
You can find more details in the article.
If we want to use this secret from our deployment, we just have, like before, to add a new layer definition which uses the secret.
For example, this file will mount the db-password value as environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: "DB_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: sl-demo-app
                  key: db-password
In your Deployment definition file it may look similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          env:
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: git-secret
                  key: git.password
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:$(PASSWORD)@domain.de
Note that Kubernetes only expands environment variables referenced in args using the $(VAR_NAME) syntax; a shell-style ${PASSWORD} would be passed to the container literally.
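To tie this back to your build script, a minimal sketch (the secret name git-secret and key git.password mirror the Deployment above; $PASSWORD is the variable set by your script):

$ kustomize edit add secret git-secret --from-literal=git.password=$PASSWORD
$ kubectl apply -k .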