How to patch multi document yaml file on condition using yq? - kubernetes

Having a YAML document something like:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-scraping
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: allow-webhooks
I am trying to get something like:
---
apiVersion: networking.k8s.io/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-scraping
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: allow-webhooks
So basically: for each document, if it has kind: NetworkPolicy, then patch its apiVersion to networking.k8s.io/v1beta1.
Ideally a one-liner, ideally with yq v4, but other solutions would be helpful too.

With mikefarah/yq v4 and above, you can do a select and update (|=) operation on the required document:
yq e 'select(.kind == "NetworkPolicy").apiVersion |= "networking.k8s.io/v1beta1"' file.yaml
The above works fine on yq version 4.6.0. Use the -i flag to edit the file in place.
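To illustrate what the select-and-update is doing, here is a rough stdlib-only Python equivalent. This is a line-based sketch of my own (not a real YAML parser), and the function name is made up:

```python
import re

def patch_api_version(stream: str, kind: str, new_version: str) -> str:
    """For each '---'-separated document whose `kind` matches,
    rewrite its apiVersion line; leave other documents untouched."""
    docs = stream.split("\n---\n")
    patched = []
    for doc in docs:
        if re.search(rf"^kind:\s*{re.escape(kind)}\s*$", doc, re.MULTILINE):
            doc = re.sub(r"^apiVersion:.*$",
                         f"apiVersion: {new_version}",
                         doc, count=1, flags=re.MULTILINE)
        patched.append(doc)
    return "\n---\n".join(patched)
```

The key point is the same as in the yq expression: the condition selects whole documents, and the update only touches the selected ones.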

Given that other solutions will be helpful - an alternative solution would be using kustomize:
Create the kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - network-policy.yaml
patches:
  - target:
      kind: NetworkPolicy
      group: networking.k8s.io
      version: v1
    patch: |
      - op: replace
        path: /apiVersion
        value: networking.k8s.io/v1beta1
Run
kustomize build | kubectl apply -f -
or
kubectl apply -k .

Related

How to remove ingress annotation using Kustomize

In the base Ingress file I have added the annotation nginx.ingress.kubernetes.io/auth-snippet, and it needs to be removed in one of the environments.
Base Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-snippet: test
I created an ingress-patch.yml in the overlays and added the below:
- op: remove
  path: /metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet
But it gives the below error when executing kustomize build:
Error: remove operation does not apply: doc is missing path: "/metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet": missing value
The path /metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet doesn't work because / is the character that JSON Pointer uses to separate elements in a path; there's no way for the parser to know that the / in nginx.ingress.kubernetes.io/auth-snippet means something different from the / in /metadata/annotations.
The JSON Pointer RFC (RFC 6901, which defines the syntax used for the path component of a patch) tells us that we need to escape / characters as ~1. If we have the following in ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    example-annotation: foo
    nginx.ingress.kubernetes.io/auth-snippet: test
And write our kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ingress.yaml
patches:
  - target:
      kind: Ingress
      name: ingress
    patch: |
      - op: remove
        path: /metadata/annotations/nginx.ingress.kubernetes.io~1auth-snippet
Then the output of kustomize build is:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    example-annotation: foo
  name: ingress
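The escaping rule from RFC 6901 is mechanical: ~ must be escaped first (as ~0), then / (as ~1). A stdlib-only Python sketch of the rule (helper names are mine):

```python
def escape_pointer_token(key: str) -> str:
    """Escape one JSON Pointer reference token per RFC 6901:
    '~' becomes '~0' first, then '/' becomes '~1'."""
    return key.replace("~", "~0").replace("/", "~1")

def make_pointer(*keys: str) -> str:
    """Build a JSON Pointer path from raw map keys."""
    return "/" + "/".join(escape_pointer_token(k) for k in keys)
```

For example, make_pointer("metadata", "annotations", "nginx.ingress.kubernetes.io/auth-snippet") produces exactly the path used in the patch above. The order matters: escaping / first would corrupt keys that contain a literal ~1.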

kustomize uses wrong api version

When I kustomize the cockroachdb Helm chart with kubectl kustomize, the wrong Kubernetes API version is used for some resources.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
generators:
  - cockroachdbChart.yaml
Helm chart inflator (cockroachdbChart.yaml):
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: crdb
name: cockroachdb
repo: https://charts.cockroachdb.com/
version: 10.0.3
releaseName: crdb
namespace: apps
IncludeCRDs: true
When I now run kubectl kustomize --enable-helm in the directory with those files, some resources are rendered with the v1beta1 API version, even though the cluster only supports v1:
» kubectl kustomize --enable-helm crdb-test | grep -A 5 -B 1 v1beta
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
These are the kubectl and helm versions I have installed:
» kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.25.4
» helm version --short
v3.10.3+gd506314
Is this a kustomize error?
Can I set the API version that kustomize uses in the kustomization file?
Kustomize doesn't know anything about what API versions are supported by your target environment, nor does it change the API versions in your source manifests.
If you're getting output with inappropriate API versions, the problem is not with Kustomize but with the source manifests.
We see the same behavior if we remove Kustomize from the equation:
$ helm template cockroachdb/cockroachdb | grep -B1 CronJob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
--
apiVersion: batch/v1beta1
kind: CronJob
metadata:
The problem here is the logic in the Helm chart, which looks like this:
{{- if and .Values.tls.enabled (and .Values.tls.certs.selfSigner.enabled (not .Values.tls.certs.selfSigner.caProvided)) }}
{{- if .Values.tls.certs.selfSigner.rotateCerts }}
{{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
apiVersion: batch/v1
{{- else }}
apiVersion: batch/v1beta1
{{- end }}
That relies on the value of .Capabilities.APIVersions.Has "batch/v1/CronJob", which requires Helm to query the remote Kubernetes environment to check whether the server supports that API version. That query doesn't happen when using helm template (or Kustomize, which is really just wrapping helm template when inflating helm charts; helm template does accept an --api-versions flag for declaring capabilities manually when rendering offline).
The correct fix would be for the CockroachDB folks to update the helm charts to introduce a variable that controls this logic explicitly.
You can patch this in your kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - cockroachdbChart.yaml
patches:
  - target:
      kind: CronJob
    patch: |
      - op: replace
        path: /apiVersion
        value: batch/v1
Which results in:
$ kustomize build --enable-helm | grep -B1 CronJob
apiVersion: batch/v1
kind: CronJob
--
apiVersion: batch/v1
kind: CronJob
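For intuition, the replace op in the patch above does nothing more than walk the path and overwrite the leaf. A minimal stdlib-only Python sketch (handling only plain map keys, no ~0/~1 escapes or array indices; the function name is mine):

```python
def json_patch_replace(doc: dict, path: str, value):
    """Minimal JSON Patch 'replace': walk `path` and overwrite the leaf.
    Only handles plain map keys (no ~0/~1 escapes, no array indices)."""
    keys = path.lstrip("/").split("/")
    node = doc
    for key in keys[:-1]:
        node = node[key]
    # JSON Patch 'replace' requires the target to already exist.
    if keys[-1] not in node:
        raise KeyError(f"replace target missing: {path}")
    node[keys[-1]] = value
    return doc
```

This also shows why the earlier "remove operation does not apply" error occurs: the ops fail when the target path is absent.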

How to update a multi document yaml file on condition using yq?

I am trying to use yq to add a namespace to each document of a multi-document Kubernetes YAML file, but only where the namespace field doesn't already exist; if it is already set, it should be left alone. I can't seem to get it to work. Is this possible?
e.g.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount2
  namespace: namespace2
should end up something like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount1
  namespace: namespace1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount2
  namespace: namespace2
UPDATE:
After successfully updating the yaml file thanks to:
https://stackoverflow.com/a/74736018/16126608
I am getting the namespace populated; however, because my YAML file was produced as the output of helm template, I am seeing an issue where {} is added to empty documents as a result of the yq command, e.g.
---
# Source: path/to/chart/templates/configmap.yaml
---
# Source: path/to/chart/templates/network-policy.yaml
---
is coming out like this afterwards:
---
# Source: path/to/chart/templates/configmap.yaml
---
{}
# Source: path/to/chart/templates/network-policy.yaml
---
{}
Is it possible to have yq to not add the {} and ignore the empty documents?
I am using mikefarah/yq
You can use has to check for the existence of a field, and not to negate the check; then select those documents and add the new field.
Which implementation of yq are you using?
Using kislyuk/yq:
yq -y 'select(.metadata | has("namespace") | not).metadata.namespace = "namespace1"'
Using mikefarah/yq:
yq 'select(.metadata | has("namespace") | not) |= .metadata.namespace = "namespace1"'
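Conceptually, the has/not selection amounts to "set the field only if it is absent". On an already-parsed document, that is just a setdefault; a stdlib-only Python sketch (function name is mine):

```python
def ensure_namespace(doc: dict, namespace: str) -> dict:
    """Add metadata.namespace only when it is not already set;
    an existing namespace is left untouched."""
    metadata = doc.setdefault("metadata", {})
    metadata.setdefault("namespace", namespace)
    return doc
```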

How to make Deployment, Ingress and Service Yaml file on one file

I want to make a YAML file with Deployment, Ingress, and Service (maybe with ClusterIssuer, Issuer and cert) in one file. How can I do that? I tried:
kubectl apply -f (name_file.yaml)
You can do it by separating the documents with three dashes (---) in your YAML file, like this:
apiVersion: v1
kind: Service
metadata:
  name: mock
spec:
  ...
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mock
spec:
  ...
Source: https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a
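Mechanically, combining the files is just concatenation with --- separator lines; a stdlib-only Python sketch (function name is mine):

```python
def join_manifests(*docs: str) -> str:
    """Concatenate manifests into one multi-document YAML stream,
    separated by '---' lines."""
    return "\n---\n".join(d.strip("\n") for d in docs) + "\n"
```

kubectl apply -f then applies each document in the stream in order.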

Can we use "data" as a yaml file instead of Json file in Config map

Let's take this example of a ConfigMap:
apiVersion: v1
kind: ConfigMap
data:
  abc.yml: |-
    <yml here>
I am getting an error like "failed to parse yaml to JSON".
Yes, you can do that, but you need to be careful about the syntax.
If you use kubectl create configmap myconfig --from-file=abc.yml, then it is fine.
But if you write the whole manifest for your ConfigMap in myconfig.yaml and then run kubectl create -f myconfig.yaml, you need to get the syntax right.
Say your abc.yml file is as follows:
a:
  b: b1
  c: c1
  d: d1
Then write your myconfig.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
data:
  abc.yml: |
    a:
      b: b1
      c: c1
      d: d1
Now just run kubectl create -f myconfig.yaml.
That's it.
Happy Kubernetes!
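The critical detail above is that the embedded file must be re-indented under its key, because the block scalar's contents are delimited by indentation. A stdlib-only Python sketch of that rendering step (function name is mine):

```python
import textwrap

def embed_in_configmap(name: str, key: str, content: str) -> str:
    """Render a ConfigMap manifest embedding `content` as a block
    scalar under data.<key>. The payload must be indented deeper than
    the key, or the manifest will not parse."""
    body = textwrap.indent(content.rstrip("\n"), "    ")
    return (
        "apiVersion: v1\n"
        "kind: ConfigMap\n"
        "metadata:\n"
        f"  name: {name}\n"
        "data:\n"
        f"  {key}: |\n"
        f"{body}\n"
    )
```

If the payload is pasted in without the extra indentation, the ConfigMap fails to parse, which is the most common cause of the "failed to parse yaml to JSON" error.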
Create the ConfigMap from a file:
kubectl create configmap myconfig --from-file=yourfile.yml
You can check more examples in the Kubernetes docs.
These could be the problems:
1. Most likely the issue is with the indentation.
2. Remove the - from abc.yml: |- and check.
I followed the steps below and was able to load a YAML file into a ConfigMap; it worked fine.
master $ cat c.yaml
apiVersion: v1
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
kind: ConfigMap
metadata:
  name: example-redis-config
master $ kubectl create configmap testcfg --from-file=./c.yaml
master $ kubectl get cm testcfg -oyaml
apiVersion: v1
data:
  c.yaml: |
    apiVersion: v1
    data:
      redis-config: |
        maxmemory 2mb
        maxmemory-policy allkeys-lru
    kind: ConfigMap
    metadata:
      name: example-redis-config
kind: ConfigMap
metadata:
  creationTimestamp: 2019-03-07T08:35:18Z
  name: testcfg
  namespace: default
  resourceVersion: "7520"
  selfLink: /api/v1/namespaces/default/configmaps/testcfg
  uid: f033536d-40b3-11e9-a67d-0242ac11005b