I want to patch (overwrite) a list in a Kubernetes manifest with Kustomize.
I am using the patchesStrategicMerge method.
When I patch parameters that are not in a list, the patching works as expected: only the parameters addressed in patch.yaml are replaced, the rest is untouched.
When I patch a list, the whole list is replaced.
How can I replace only specific items in the list while the rest of the items stay untouched?
I found these two resources:
https://github.com/kubernetes-sigs/kustomize/issues/581
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md
but wasn't able to derive the desired solution from them.
Example code:
orig-file.yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
test: test
other: other-stuff
receivers:
- name: default
slackConfigs:
- name: slack
username: test-user
channel: "#alerts"
sendResolved: true
apiURL:
name: slack-webhook-url
key: address
patch.yaml:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
test: brase-yourself
receivers:
- name: default
slackConfigs:
- name: slack
username: Karl
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patchesStrategicMerge:
- patch.yaml
What I get:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
other: other-stuff
receivers:
- name: default
slackConfigs:
- name: slack
username: Karl
test: brase-yourself
What I want:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
other: other-stuff
receivers:
- name: default
slackConfigs:
- name: slack
username: Karl
channel: "#alerts"
sendResolved: true
apiURL:
name: slack-webhook-url
key: address
test: brase-yourself
What you can do is use a JSON patch instead of patchesStrategicMerge, so in your case:
cat <<EOF >./kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patches:
- path: patch.yaml
target:
group: monitoring.coreos.com
version: v1alpha1
kind: AlertmanagerConfig
name: alertmanager-slack-config
EOF
patch:
cat <<EOF >./patch.yaml
- op: replace
path: /spec/receivers/0/slackConfigs/0/username
value: Karl
EOF
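With that in place, building the overlay should leave the rest of the slackConfigs entry untouched and replace only username; add further ops (e.g. a replace on /spec/test) for the other fields you want to change. To check the result, build the overlay:
kustomize build .
(or kubectl kustomize . if you use the version bundled with kubectl)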
I have this kind of YAML file to define a trigger:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
name: app-template-pr-deploy
spec:
params:
- name: target-branch
- name: commit
- name: actor
- name: pull-request-number
- name: namespace
resourcetemplates:
- apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
generateName: app-pr-$(tt.params.actor)-
labels:
actor: $(tt.params.actor)
spec:
serviceAccountName: myaccount
pipelineRef:
name: app-pr-deploy
podTemplate:
nodeSelector:
location: somelocation
params:
- name: branch
value: $(tt.params.target-branch)
- name: namespace
value: $(tt.params.target-branch)
- name: commit
value: $(tt.params.commit)
- name: pull-request-number
value: $(tt.params.pull-request-number)
resources:
- name: app-cluster
resourceRef:
name: app-location-cluster
The issue is that sometimes target-branch is something like "integration/feature", and then the namespace is not valid.
I would like to check whether there is an invalid character in the value and replace it if there is.
Is there any way to do it?
I didn't find any viable way to do it besides creating a task that executes this via a shell script later in the pipeline.
This is something you could do from your EventListener, using something such as:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
name: xx
spec:
triggers:
- name: demo
interceptors:
- name: addvar
ref:
name: cel
params:
- name: overlays
value:
- key: branch_name
expression: "body.ref.split('/')[2]"
bindings:
- ref: your-triggerbinding
template:
ref: your-triggertemplate
Then, from your TriggerTemplate, you would add the "branch_name" param, parsed from your EventListener.
Note: the payload of the git notification may vary; the sample above is valid for GitHub. It translates remote/origin/master into master, or abc/def/ghi/jkl into ghi.
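For illustration, a minimal sketch of the binding side, assuming a recent Tekton Triggers release where CEL overlay values are exposed under extensions (the your-triggerbinding name matches the sample above):
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: your-triggerbinding
spec:
  params:
    - name: branch_name
      value: $(extensions.branch_name)
Your TriggerTemplate would then declare a branch_name param and use $(tt.params.branch_name) wherever the sanitized value is needed, e.g. for the namespace.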
I've created a separate task that does all the magic I needed and outputs a valid namespace name into a different variable.
Then, instead of using the namespace variable, I use valid-namespace all the way through the pipeline.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: validate-namespace-task-v1
spec:
description: >-
This task will validate namespaces
params:
- name: namespace
type: string
default: undefined
results:
- name: valid-namespace
description: this should be a valid namespace
steps:
- name: triage-validate-namespace
image: some-image:0.0.1
script: |
#!/bin/bash
echo -n "$(params.namespace)" | sed "s/[^[:alnum:]-]/-/g" | tr '[:upper:]' '[:lower:]'| tee $(results.valid-namespace.path)
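For reference, a rough sketch of how this can be wired into a Pipeline, passing the result on to a later task (example-pipeline and your-deploy-task are just placeholder names):
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  params:
    - name: namespace
      type: string
  tasks:
    - name: validate-namespace
      taskRef:
        name: validate-namespace-task-v1
      params:
        - name: namespace
          value: $(params.namespace)
    - name: deploy
      runAfter:
        - validate-namespace
      taskRef:
        name: your-deploy-task
      params:
        - name: namespace
          value: $(tasks.validate-namespace.results.valid-namespace)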
Thanks
Based on the documentation, we can trigger the creation of a workflow. Is there a way to trigger an existing workflow (deployed in the argo namespace) from a sensor in the argo-events namespace?
Something like:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: webhook
spec:
template:
serviceAccountName: operate-workflow-sa
dependencies:
- name: test-dep
eventSourceName: webhook
eventName: example
triggers:
- template:
name: webhook-workflow-trigger
argoWorkflow:
source:
resource: existing-workflow-in-another-namespace
Existing Workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: sb1-
labels:
workflows.argoproj.io/archive-strategy: "false"
spec:
entrypoint: full
serviceAccountName: argo
volumes:
- name: kaniko-secret
secret:
secretName: regcred
items:
- key: .dockerconfigjson
path: config.json
- name: github-access
secret:
secretName: github-access
items:
- key: token
path: token
templates:
- name: full
dag:
tasks:
- name: build
templateRef:
name: container-image
template: build-kaniko-git
clusterScope: true
arguments:
parameters:
- name: repo_url
value: git://github.com/letthefireflieslive/test-app-sb1
- name: repo_ref
value: refs/heads/main
- name: container_image
value: legnoban/test-app-sb1
- name: container_tag
value: 1.0.2
- name: promote-dev
templateRef:
name: promote
template: promote
clusterScope: true
arguments:
parameters:
- name: repo_owner
value: letthefireflieslive
- name: repo_name
value: vcs
- name: repo_branch
value: master
- name: deployment_path
value: overlays/eg/dev/sb1/deployment.yml
- name: image_owner
value: legnoban
- name: image_name
value: test-app-sb1
- name: tag
value: 1.0.2
dependencies:
- build
In Argo, a Workflow is the representation of a job that is running or has completed running, so this is probably not what you want to do.
What you can do is create a template that will create a workflow (run a job) and then refer to this template in your trigger. In this way you can create a workflow based on the template.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: sb1-workflowtemplate
spec:
entrypoint: full
templates:
- name: full
dag:
tasks:
- name: build
templateRef:
name: container-image
template: build-kaniko-git
clusterScope: true
arguments:
parameters:
- name: repo_url
value: git://github.com/letthefireflieslive/test-app-sb1
- name: repo_ref
value: refs/heads/main
- name: container_image
value: legnoban/test-app-sb1
- name: container_tag
value: 1.0.2
- name: promote-dev
templateRef:
name: promote
template: promote
clusterScope: true
arguments:
parameters:
- name: repo_owner
value: letthefireflieslive
- name: repo_name
value: vcs
- name: repo_branch
value: master
- name: deployment_path
value: overlays/eg/dev/sb1/deployment.yml
- name: image_owner
value: legnoban
- name: image_name
value: test-app-sb1
- name: tag
value: 1.0.2
dependencies:
- build
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: webhook
spec:
template:
serviceAccountName: operate-workflow-sa
dependencies:
- name: test-dep
eventSourceName: webhook
eventName: example
triggers:
- template:
name: webhook-workflow-trigger
argoWorkflow:
source:
resource:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: sb1-
spec:
workflowTemplateRef:
name: sb1-workflowtemplate
You should be able to do this, but you need a service account for the sensor that can manage workflows. This means a ClusterRole and ClusterRoleBinding assigned to this account:
apiVersion: v1
kind: ServiceAccount
metadata:
name: argo-events-core
namespace: argo-events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: argo-events-core
namespace: argo-events
rules:
- apiGroups:
- argoproj.io
resources:
- workflows
- workflowtemplates
- cronworkflows
- clusterworkflowtemplates
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: argo-events-core
namespace: argo-events
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: argo-events-core
subjects:
- kind: ServiceAccount
name: argo-events-core
namespace: argo-events
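The Sensor then has to reference that service account. A sketch, reusing the spec.template.serviceAccountName field shown in the Sensor examples above:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
  namespace: argo-events
spec:
  template:
    serviceAccountName: argo-events-core
  # dependencies and triggers as in the Sensor above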
I have tried many versions of the template below:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: tibco-events-sensor
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: 'false'
serviceAccountName: operate-workflow-sa
dependencies:
- name: tibco-dep
eventSourceName: tibco-events-source
eventName: whatever
triggers:
- template:
name: has-wf-event-trigger
argoWorkflow:
group: argoproj.io
version: v1alpha1
resource: Workflow
operation: resubmit
metadata:
generateName: has-wf-argo-events-
source:
resource:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
name: has-wf-full-refresh
I keep getting errors that the workflow is not found:
"rpc error: code = NotFound desc = workflows.argoproj.io \"has-wf-full-refresh\" not found"
I have hundreds of workflows launched as CronWorkflows, and I would like to switch them to be event-driven instead of cron-based. I'd prefer not to change the already existing flows; I just want to submit or resubmit them.
I figured out that the argoWorkflow trigger template doesn't support CronWorkflows. I ended up using the http trigger template.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: tibco-events-sensor
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: 'false'
serviceAccountName: operate-workflow-sa
dependencies:
- name: tibco-dep
eventSourceName: tibco-events-source
eventName: whatever
triggers:
- template:
name: http-trigger
http:
url: http://argo-workflows.argo-workflows:2746/api/v1/workflows/lab-uat/submit
secureHeaders:
- name: Authorization
valueFrom:
secretKeyRef:
name: argo-workflows-sa-token
key: bearer-token
payload:
- src:
dependencyName: tibco-dep
value: CronWorkflow
dest: resourceKind
- src:
dependencyName: tibco-dep
value: coinflip
dest: resourceName
- src:
dependencyName: tibco-dep
value: coinflip-event-
dest: submitOptions.generateName
method: POST
retryStrategy:
steps: 3
duration: 3s
policy:
status:
allow:
- 200
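For the secureHeaders reference above to work, the argo-workflows-sa-token secret has to exist where the sensor can read it (typically the sensor's namespace). A minimal sketch, assuming the key stores the complete header value including the Bearer prefix; adjust to however you issue tokens for the Argo Workflows service account:
kubectl create secret generic argo-workflows-sa-token \
  --from-literal=bearer-token="Bearer <your-service-account-token>"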
How can I use a ConfigMap to write cluster node information to a JSON file?
The command below gives me the node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command in a file, then use the file (or the data inside it) to create a ConfigMap.
After creating the ConfigMap you can mount it as a file in your deployment/pod.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: appname
name: appname
namespace: development
spec:
selector:
matchLabels:
app: appname
tier: sometier
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
image: someimage
imagePullPolicy: Always
name: appname
volumeMounts:
- name: your-volume-name
mountPath: "your/path/to/store/the/file"
readOnly: true
volumes:
- name: your-volume-name
configMap:
name: your-configmap-name
items:
- key: your-filename-inside-pod
path: your-filename-inside-pod
I added the following configuration to the deployment:
volumeMounts:
- name: your-volume-name
mountPath: "your/path/to/store/the/file"
readOnly: true
volumes:
- name: your-volume-name
configMap:
name: your-configmap-name
items:
- key: your-filename-inside-pod
path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
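For instance, with the command from the question (node-info and nodes.txt are just example names):
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > nodes.txt
kubectl create configmap node-info --from-file=nodes.txt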
Or just create the ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
name: your-configmap-name
namespace: your-namespace
data:
your-filename-inside-pod: |
output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
name: example-config
data:
node-info.json: |
{
"array": [
1,
2
],
"boolean": true,
"number": 123,
"object": {
"a": "egg",
"b": "egg1"
},
"string": "Welcome"
}
Then remember to add the following lines below the specification section in the pod configuration file:
env:
- name: NODE_CONFIG_JSON
valueFrom:
configMapKeyRef:
name: example-config
key: node-info.json
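Inside the container the ConfigMap content is then available through that environment variable, for example:
echo "$NODE_CONFIG_JSON"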
You can also use PodPreset.
A PodPreset is an object that enables injecting information, e.g. environment variables, into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
name: example
spec:
selector:
matchLabels:
app: your-pod
env:
- name: DB_PORT
value: "6379"
envFrom:
- configMapRef:
name: etcd-env-config
but remember that you also have to add:
env:
- name: NODE_CONFIG_JSON
valueFrom:
configMapKeyRef:
name: example-config
key: node-info.json
section to your pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: PodPreset, Pod Preset configuration.
Hi, I have created the following CustomResourceDefinition - crd.yaml:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: test.demo.k8s.com
namespace: testns
spec:
group: demo.k8s.com
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: testpod
singular: testpod
kind: testpod
The corresponding custom resource is below - cr.yaml:
kind: testpod
metadata:
name: testpodcr
namespace: testns
spec:
containers:
- name: testname
image: test/img:v5.16
env:
- name: TESTING_ON
valueFrom:
configMapKeyRef:
name: kubernetes-config
key: type
volumeMounts:
- name: testvol
mountPath: "/test/vol"
readOnly: true
When I use a client-go program to fetch the spec value of the CR object 'testpodcr', the value comes back as null.
func (c *TestConfigclient) AddNewPodForCR(obj *TestPodConfig) *v1.Pod {
log.Println("logging obj \n", obj.Name) // Prints the name as testpodcr
log.Println("Spec value: \n", obj.Spec) //Prints null
dep := &v1.Pod{
ObjectMeta: meta_v1.ObjectMeta{
//Labels: labels,
GenerateName: "test-pod-",
},
Spec: obj.Spec,
}
return dep
}
Can anyone please help figure out why the spec value ends up null?
There is an error in your crd.yaml file. I am getting the following error:
$ kubectl apply -f crd.yaml
The CustomResourceDefinition "test.demo.k8s.com" is invalid: metadata.name: Invalid value: "test.demo.k8s.com": must be spec.names.plural+"."+spec.group
In your configuration the name test.demo.k8s.com does not match the plural testpod found in spec.names.
I modified your crd.yaml and now it works:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: testpods.demo.k8s.com
namespace: testns
spec:
group: demo.k8s.com
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: testpods
singular: testpod
kind: Testpod
$ kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/testpods.demo.k8s.com created
After that, your cr.yaml also had to be fixed:
apiVersion: "demo.k8s.com/v1"
kind: Testpod
metadata:
name: testpodcr
namespace: testns
spec:
containers:
- name: testname
image: test/img:v5.16
env:
- name: TESTING_ON
valueFrom:
configMapKeyRef:
name: kubernetes-config
key: type
volumeMounts:
- name: testvol
mountPath: "/test/vol"
readOnly: true
After that I created the namespace testns and finally created the Testpod object successfully:
$ kubectl create namespace testns
namespace/testns created
$ kubectl apply -f cr.yaml
testpod.demo.k8s.com/testpodcr created