I have a file like this:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{...}
---
apiVersion: v1
kind: Service
{...}
---
apiVersion: v1
kind: ConfigMap
{...}
This has 3 objects separated by ---. I want to reference the ConfigMap object inside the Deployment to use with the checksum annotation. Is it possible to do so?
You will have to use a template system such as Helm (the {{ ... }} syntax in your annotation is already Helm template syntax), which will process your YAML and generate the desired manifests for your resources.
I suspect you will have to declare your ConfigMap as a variable that can be substituted in your deployment.yaml by the template system.
Alternatively, you can look at the kustomize system, which also provides templatized generation of manifests and has its own mechanisms for dealing with annotations.
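For instance, the usual Helm pattern is to keep each object in its own template file and hash the rendered ConfigMap template into the Deployment's pod annotations, so any change to the ConfigMap rolls the pods. A minimal sketch, assuming the ConfigMap lives in templates/configmap.yaml:
# templates/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Render templates/configmap.yaml and hash the result; when the
        # ConfigMap changes, the annotation changes and the pods restart.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}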
When I kustomize the cockroachdb helm chart with kubectl kustomize, the wrong Kubernetes API version is used for some resources.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
generators:
- cockroachdbChart.yaml
Helm chart inflator (cockroachdbChart.yaml):
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: crdb
name: cockroachdb
repo: https://charts.cockroachdb.com/
version: 10.0.3
releaseName: crdb
namespace: apps
IncludeCRDs: true
When I now run kubectl kustomize --enable-helm in the directory with those files, some resources are rendered with a v1beta1 API version, even though the Kubernetes server only supports v1:
» kubectl kustomize --enable-helm crdb-test | grep -A 5 -B 1 v1beta
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
These are the kubectl and helm versions I have installed:
» kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.25.4
» helm version --short
v3.10.3+gd506314
Is this a kustomize error?
Can I set the API version that kustomize uses in the kustomization file?
Kustomize doesn't know anything about what API versions are supported by your target environment, nor does it change the API versions in your source manifests.
If you're getting output with inappropriate API versions, the problem is not with Kustomize but with the source manifests.
We see the same behavior if we remove Kustomize from the equation:
$ helm template cockroachdb/cockroachdb | grep -B1 CronJob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
--
apiVersion: batch/v1beta1
kind: CronJob
metadata:
The problem here is the logic in the Helm chart, which looks like this:
{{- if and .Values.tls.enabled (and .Values.tls.certs.selfSigner.enabled (not .Values.tls.certs.selfSigner.caProvided)) }}
{{- if .Values.tls.certs.selfSigner.rotateCerts }}
{{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
apiVersion: batch/v1
{{- else }}
apiVersion: batch/v1beta1
{{- end }}
That relies on the value of .Capabilities.APIVersions.Has "batch/v1/CronJob", which requires Helm to query the remote Kubernetes environment to check if the server supports that API version. That doesn't happen when using helm template (or Kustomize, which is really just wrapping helm template when exploding helm charts).
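When rendering offline you can declare those capabilities yourself: helm template accepts an --api-versions flag whose values feed .Capabilities.APIVersions. A sketch, run directly against the chart rather than through Kustomize (the generator shown above offers no equivalent knob in the versions used here, hence the patch below):
# Declare the capability that helm template cannot discover without a cluster,
# so the chart's .Capabilities.APIVersions.Has "batch/v1/CronJob" check passes:
$ helm template cockroachdb/cockroachdb --api-versions batch/v1/CronJob | grep -B1 CronJob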
The correct fix would be for the CockroachDB folks to update the helm charts to introduce a variable that controls this logic explicitly.
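Such a chart-side variable might look something like this (purely hypothetical; cronJobApiVersion is an invented value name, not something the chart exposes today):
{{- if .Values.cronJobApiVersion }}
{{- /* hypothetical explicit override, e.g. --set cronJobApiVersion=batch/v1 */}}
apiVersion: {{ .Values.cronJobApiVersion }}
{{- else if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
apiVersion: batch/v1
{{- else }}
apiVersion: batch/v1beta1
{{- end }}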
Until then, you can patch this in your kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- cockroachdbChart.yaml
patches:
  - target:
      kind: CronJob
    patch: |
      - op: replace
        path: /apiVersion
        value: batch/v1
Which results in:
$ kustomize build --enable-helm | grep -B1 CronJob
apiVersion: batch/v1
kind: CronJob
--
apiVersion: batch/v1
kind: CronJob
I am trying to use yq to add a namespace to each document in a multi-document Kubernetes YAML file where it doesn't exist, but leave the namespace field alone where it already exists. I can't seem to get it to work. Is this possible?
e.g.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount2
  namespace: namespace2
should end up something like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount1
  namespace: namespace1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount2
  namespace: namespace2
UPDATE:
After successfully updating the YAML file thanks to:
https://stackoverflow.com/a/74736018/16126608
I am getting the namespace populated. However, because my YAML file was produced as the output of helm template, the yq command adds {} to otherwise empty documents. E.g. this input:
---
# Source: path/to/chart/templates/configmap.yaml
---
# Source: path/to/chart/templates/network-policy.yaml
---
comes out like this afterwards:
---
# Source: path/to/chart/templates/configmap.yaml
---
{}
# Source: path/to/chart/templates/network-policy.yaml
---
{}
Is it possible to have yq not add the {} and to ignore the empty documents?
I am using mikefarah/yq
You can use has to check for the existence of a field, and not to negate the check, then select the matching documents and add the new field.
Which implementation of yq are you using?
Using kislyuk/yq:
yq -y 'select(.metadata | has("namespace") | not).metadata.namespace = "namespace1"'
Using mikefarah/yq:
yq 'select(.metadata | has("namespace") | not) |= .metadata.namespace = "namespace1"'
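For example, to update a file in place with mikefarah/yq (manifests.yaml is a placeholder name):
# Add the namespace only to documents that don't already have one;
# -i edits the file in place.
yq -i 'select(.metadata | has("namespace") | not) |= .metadata.namespace = "namespace1"' manifests.yaml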
Is there any option to reference a Service's property from another entity, like a ConfigMap or Deployment? To be more specific, I want to put the Service's name into a ConfigMap, not by hand, but rather by linking it programmatically.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
  namespace: ConfigMap-Namespace
data:
  ServiceName: <referenced-service-name>
---
apiVersion: v1
kind: Service
metadata:
  name: service-name   # that name I want to put in the ConfigMap
  namespace: ConfigMap-Namespace
spec:
  ....
Thanks...
Using plain kubectl, there is no way to dynamically fill in content like this. There are very limited exceptions around injecting values into environment variables in Pods (and PodSpecs inside other objects) but a ConfigMap must contain fixed content.
In this example, the Service object name is fixed, and I'd just embed the same fixed string into the ConfigMap.
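For reference, that exception is the Downward API, which can inject a Pod's own fields (not another object's name) into environment variables. A minimal sketch of a container's env section:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        # Only fields of the Pod itself are available here.
        fieldPath: metadata.name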
If you were using a templating engine like Helm then you could call the same template code to render the Service name in both places. For example:
{{- define "service.name" -}}
{{ .Release.Name }}-service
{{- end -}}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "service.name" . }}
...
---
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  serviceName: {{ include "service.name" . }}
Having a YAML document something like:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-scraping
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: allow-webhooks
I am trying to get something like
---
apiVersion: **networking.k8s.io/v1beta1**
kind: NetworkPolicy
metadata:
  name: allow-scraping
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: allow-webhooks
So basically: get each document, and if it has kind: NetworkPolicy, patch its apiVersion to networking.k8s.io/v1beta1.
Ideally one liner, ideally with yq v4, but other solutions will be helpful too.
With mikefarah/yq version 4 and above, you can do a select and update (|=) operation on the required document:
yq e 'select(.kind == "NetworkPolicy").apiVersion |= "networking.k8s.io/v1beta1"' yaml
The above works fine on yq version 4.6.0. Use the -i flag to replace the file in-place.
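For instance, to patch the file in place (manifests.yaml is a placeholder name):
yq e -i 'select(.kind == "NetworkPolicy").apiVersion |= "networking.k8s.io/v1beta1"' manifests.yaml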
Given that other solutions will be helpful too, an alternative is to use kustomize:
Create the kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- network-policy.yaml
patches:
  - target:
      kind: NetworkPolicy
      group: networking.k8s.io
      version: v1
    patch: |
      - op: replace
        path: /apiVersion
        value: networking.k8s.io/v1beta1
Run
kustomize build | kubectl apply -f -
or
kubectl apply -k .