Kustomize uses wrong API version - kubernetes-helm

When I render the cockroachdb Helm chart with kubectl kustomize, the wrong Kubernetes API version is used for some resources.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
generators:
- cockroachdbChart.yaml
cockroachdbChart.yaml (the Helm chart inflation generator):
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: crdb
name: cockroachdb
repo: https://charts.cockroachdb.com/
version: 10.0.3
releaseName: crdb
namespace: apps
IncludeCRDs: true
When I now run kubectl kustomize --enable-helm in the directory with those files, some resources are rendered with the v1beta1 version, even though my Kubernetes version only supports v1:
» kubectl kustomize --enable-helm crdb-test | grep -A 5 -B 1 v1beta
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
These are the kubectl and helm versions I have installed:
» kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.25.4
» helm version --short
v3.10.3+gd506314
Is this a Kustomize error?
Can I set the API version that Kustomize uses in the kustomization file?

Kustomize doesn't know anything about what API versions are supported by your target environment, nor does it change the API versions in your source manifests.
If you're getting output with inappropriate API versions, the problem is not with Kustomize but with the source manifests.
We see the same behavior if we remove Kustomize from the equation:
$ helm template cockroachdb/cockroachdb | grep -B1 CronJob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
--
apiVersion: batch/v1beta1
kind: CronJob
metadata:
The problem here is the logic in the Helm chart, which looks like this:
{{- if and .Values.tls.enabled (and .Values.tls.certs.selfSigner.enabled (not .Values.tls.certs.selfSigner.caProvided)) }}
{{- if .Values.tls.certs.selfSigner.rotateCerts }}
{{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
apiVersion: batch/v1
{{- else }}
apiVersion: batch/v1beta1
{{- end }}
That relies on the value of .Capabilities.APIVersions.Has "batch/v1/CronJob", which requires Helm to query the remote Kubernetes environment to check if the server supports that API version. That doesn't happen when using helm template (or Kustomize, which is really just wrapping helm template when exploding helm charts).
The correct fix would be for the CockroachDB folks to update the helm charts to introduce a variable that controls this logic explicitly.
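Until then, if you render with helm template directly, you can work around it by telling Helm which API versions to assume via its --api-versions flag. A hedged sketch (the flag value mirrors the exact string the chart checks; whether your Kustomize version can pass something equivalent through its HelmChartInflationGenerator is a separate question):
helm template cockroachdb/cockroachdb \
  --api-versions batch/v1/CronJob \
  | grep -B1 CronJob
With that flag set, the .Capabilities.APIVersions.Has "batch/v1/CronJob" check succeeds and both CronJobs should render with apiVersion: batch/v1.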
On the Kustomize side, you can patch this in your kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- cockroachdbChart.yaml
patches:
- target:
    kind: CronJob
  patch: |
    - op: replace
      path: /apiVersion
      value: batch/v1
Which results in:
$ kustomize build --enable-helm | grep -B1 CronJob
apiVersion: batch/v1
kind: CronJob
--
apiVersion: batch/v1
kind: CronJob

Related

Overwrite values.yaml in Flux Helm deployment from a source other than the Helm chart

I want to deploy a Helm chart via Flux. The Helm chart is in a repository such as artifacthub.io, where I cannot change it.
The release.yaml looks something like this:
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cp-helmrelease
  namespace: wordpress
spec:
  chart:
    spec:
      chart: wordpress
      sourceRef:
        kind: HelmRepository
        name: artifacthub
        namespace: bitnami
      version: 15.0.18
  serviceAccountName: m2m-sa
  interval: 10m
  install:
    remediation:
      retries: 3
Now I want to override the values.yaml. With Helm I can simply run helm install xyz and pass the path to a values file. From what I can see, Flux gives me no way to point to a file that is not inside the Helm chart.
Is there a way to use a Helm chart from artifacthub, store the values.yaml in my own Git repository, and deploy the two together with Flux?
The official guide has an example of this; the values are inline in the HelmRelease rather than in a separate file (though the HelmRelease itself would obviously live in your Git repo):
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: <name|path>
      version: '4.0.x'
      sourceRef:
        kind: <HelmRepository|GitRepository|Bucket>
        name: podinfo
        namespace: flux-system
      interval: 1m
  values:
    replicaCount: 2
See: https://fluxcd.io/flux/guides/helmreleases/
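If you want the values to live in a separate file in your Git repository, the HelmRelease spec also supports valuesFrom, which reads values from a ConfigMap or Secret. A hedged sketch, assuming you generate a ConfigMap named wordpress-values from your values.yaml (for example with kustomize's configMapGenerator in the same Flux Kustomization); the ConfigMap name and key are illustrative:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cp-helmrelease
  namespace: wordpress
spec:
  interval: 10m
  chart:
    spec:
      chart: wordpress
      sourceRef:
        kind: HelmRepository
        name: artifacthub
        namespace: bitnami
      version: 15.0.18
  valuesFrom:
    - kind: ConfigMap
      name: wordpress-values   # assumed: generated from the values.yaml in your Git repo
      valuesKey: values.yaml   # key inside the ConfigMap that holds the YAML values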

helm rollout restart with changes in configmap.yaml [duplicate]

I have a file like this
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{...}
---
apiVersion: v1
kind: Service
{...}
---
apiVersion: v1
kind: ConfigMap
{...}
This has 3 objects separated by ---. I want to reference the ConfigMap object inside the Deployment to use it with the checksum annotation. Is it possible to do so?
You will have to use a template system like this one, which will process your YAMLs and generate the desired manifests for your resources.
I suspect you will have to declare your ConfigMap as a variable that can be substituted in your deployment.yaml by the template system.
Alternatively, you can look at kustomize, which also provides templated generation of manifests. An example of how to deal with annotations with kustomize can be found here.
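To make the kustomize route concrete: its configMapGenerator appends a hash of the content to the generated ConfigMap's name and rewrites references to it, so the Deployment rolls when the config changes without needing a checksum annotation. A minimal sketch, assuming a config file app.properties and a Deployment in deployment.yaml that references a ConfigMap named app-config (both names are illustrative):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: app-config        # referenced by this name in deployment.yaml
  files:
  - app.properties        # a change here produces a new hashed ConfigMap name
kustomize build then emits the ConfigMap as app-config-<hash> and updates the Deployment's reference accordingly, which triggers a rollout on every config change.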

How to patch multi document yaml file on condition using yq?

Having a YAML document something like:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-scraping
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: allow-webhooks
I am trying to get something like
---
apiVersion: networking.k8s.io/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-scraping
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: allow-webhooks
So basically: take each document and, if it has kind: NetworkPolicy, patch its apiVersion to networking.k8s.io/v1beta1.
Ideally a one-liner, ideally with yq v4, but other solutions will be helpful too.
With mikefarah/yq version 4 and above, you could do a select and update (|=) operation on the required document:
yq e 'select(.kind == "NetworkPolicy").apiVersion |= "networking.k8s.io/v1beta1"' yaml
The above works fine on yq version 4.6.0. Use the -i flag to replace the file in-place.
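For example, assuming the documents live in a file called policies.yaml (a hypothetical filename), the in-place form would be:
yq e -i 'select(.kind == "NetworkPolicy").apiVersion |= "networking.k8s.io/v1beta1"' policies.yaml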
Given that other solutions will be helpful, an alternative would be to use kustomize:
Create the kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- network-policy.yaml
patches:
- target:
    kind: NetworkPolicy
    group: networking.k8s.io
    version: v1
  patch: |
    - op: replace
      path: /apiVersion
      value: networking.k8s.io/v1beta1
Run
kustomize build | kubectl apply -f -
or
kubectl apply -k .

How to correctly update apiVersion of manifests prior to cluster upgrade?

So I updated the manifest and replaced apiVersion: extensions/v1beta1 with apiVersion: apps/v1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secretmanager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: secretmanager
  template:
    metadata:
      labels:
        app: secretmanager
    spec:
      ...
I then applied the change
k apply -f deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/secretmanager configured
I also tried
k replace --force -f deployment.yaml
That recreated the Pod (causing downtime), but when I output the deployment's YAML config I still see the old value:
k get deployments -n kube-system secretmanager -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"secretmanager","namespace":"kube-system"}....}
  creationTimestamp: "2020-08-21T21:43:21Z"
  generation: 2
  name: secretmanager
  namespace: kube-system
  resourceVersion: "99352965"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/secretmanager
  uid: 3d49aeb5-08a0-47c8-aac8-78da98d4c342
spec:
So I still see apiVersion: extensions/v1beta1.
What am I doing wrong?
I am preparing an EKS Kubernetes v1.15 cluster to be migrated to v1.16.
The Deployment resource exists in multiple apiGroups, so the request is ambiguous. Specify the group and version explicitly, e.g. apps/v1, with:
kubectl get deployments.v1.apps
and you should see your Deployment, but with the apps/v1 apiVersion.
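For example (the output below is what you should expect, assuming the Deployment from the question exists in kube-system); what kubectl prints as apiVersion reflects the group/version you requested, not a separate stored copy of the object:
$ kubectl get deployments.v1.apps -n kube-system secretmanager -o yaml | head -n 2
apiVersion: apps/v1
kind: Deployment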