While deploying a Kubernetes application, I want to check whether a resource is already present; if so, it should not be rendered. To achieve this behaviour I use Helm's lookup function, but it seems to always come back empty while deploying (no dry-run). Any ideas what I am doing wrong?
---
{{- if not (lookup "v1" "ServiceAccount" "my-namespace" "my-sa") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}@{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
Running the corresponding kubectl command lists the expected service account:
kubectl get ServiceAccount my-sa -n my-namespace
helm version: 3.5.4
I think you cannot use this if-statement to validate what you want.
The lookup function returns a list of the objects found by your lookup, so if you want to validate that no ServiceAccount with the properties you specified exists, you should check whether the returned list is empty.
Test something like:
---
{{ if eq (len (lookup "v1" "ServiceAccount" "my-namespace" "my-sa")) 0 }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}@{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
see: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
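The linked docs also note that lookup returns an empty result during helm template and --dry-run; it only queries the API server on a real install or upgrade. If you want to inspect what it actually returns against your cluster, one option is to dump the result into a throwaway manifest during a real upgrade (a sketch; the ConfigMap name is arbitrary):
apiVersion: v1
kind: ConfigMap
metadata:
  name: lookup-debug   # temporary object, remove after debugging
data:
  found: |-
{{ lookup "v1" "ServiceAccount" "my-namespace" "my-sa" | toYaml | indent 4 }}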
When running:
helm upgrade --install backend ./k8s "$@"
I get the following error (it did not happen before):
Error: UPGRADE FAILED: cannot patch "api" with kind ExternalSecret: Internal error occurred: failed calling webhook "validate.externalsecret.external-secrets.io": Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-externalsecret?timeout=5s": no endpoints available for service "external-secrets-webhook"
Any idea what it is or how to debug it? --atomic also doesn't roll back, for the same reason.
The Helm config is:
{{- if .Values.awsSecret.enabled }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "application.labels" . | nindent 4 }}
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: {{ .Values.applicationName }}
    creationPolicy: Owner
  dataFrom:
    - extract:
        key: {{ .Values.awsSecret.name }}
{{- end }}
and the GitHub Actions config:
- helm/upgrade-helm-chart:
    atomic: false
    chart: ./k8s
    helm-version: v3.8.2
    release-name: backend
    namespace: default
    values: ./k8s/values-${ENV}.yaml
    values-to-override:
      "image.tag=${CIRCLE_TAG},\
      image.repository=trak-${ENV}-backend,\
      image.registry=${AWS_ECR_ACCOUNT},\
      env=${ENV},\
      applicationName=api,\
      applicationVersion=${CIRCLE_TAG}"
Thank you
I have tried setting --atomic to true, but it doesn't roll back. This morning we made a few changes to roles and permissions, but that should not affect this at all.
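Since the error reports no endpoints for the external-secrets-webhook service, a reasonable first step is to check whether the webhook pods behind that service are running and ready. A sketch, assuming (from the service DNS name in the error) that the controller lives in the external-secrets namespace:
# List the endpoints behind the webhook Service; an empty list matches the error.
kubectl get endpoints external-secrets-webhook -n external-secrets
# Check the external-secrets pods themselves; the webhook pod must be Ready.
kubectl get pods -n external-secrets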
I am trying to use ne and eq in deployment.yaml, but while templating the chart I get the error below:
Error: YAML parse error on cdp/templates/cdp-deployment.yaml: error converting YAML to JSON: yaml: line 50: did not find expected key
{{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B")) }}
ports:
  - containerPort: {{ .Values.service.port }}
envFrom:
  - configMapRef:
      name: {{ .Values.metadata.name }}
  - secretRef:
      name: {{ .Values.metadata.name }}
{{- end }}
Thank you in advance
There is no problem with this if statement. I wrote a demo to test it, and that block renders fine.
values.yaml
metadata:
  name: application-B
templates/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  cfg: |-
    {{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B")) }}
    ok
    {{- else }}
    notok
    {{- end }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-v32
  labels:
    helm.sh/chart: test-0.1.0
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
data:
  cfg: |-
    ok
The line numbers in the error refer to the rendered manifest, not to your template source, so you should run the helm template --debug test . command and look at the rendered output to see what the problem actually is.
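For reference, a debug run could look like the following; the release name test and the chart path are placeholders for your own release and chart directory:
# --debug prints the rendered manifests even when they fail to parse,
# so you can inspect the area around the reported line number.
helm template --debug test ./cdp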
Consider that I have built a complex Kubernetes deployment/workload consisting of deployments, stateful sets, services, operators, CRDs with specific configuration, etc. The workload/deployment was created by individual commands (kubectl create, helm install, ...).
1) Is there a way to dynamically (not manually) generate a script or a special file that describes the deployment and that could be used to redeploy/reinstall it without going through each command one by one again?
2) Is there a domain-specific language (DSL) or something similar through which one can describe a Kubernetes deployment independently of the final target Kubernetes cluster, whether GKE, AWS, Azure, or on premises; kind of write once, deploy anywhere?
Thanks.
I think Kustomize and Helm are your best bet. You can write the Helm chart as a template and decide which parts to render with Go templating and conditions.
For example, look at the configuration file below, which has a few conditions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      annotations:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
      - name: {{ .Values.appName }}
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.hasSecretVolume }}
        volumeMounts:
        - name: {{ .Values.appName }}-volume-sec
          mountPath: {{ .Values.secretVolumeMountPath }}
        {{- end }}
        {{- if or .Values.env.configMap .Values.env.secrets }}
        envFrom:
        {{- if .Values.env.configMap }}
        - configMapRef:
            name: {{ .Values.appName }}-env-configmap
        {{- end }}
        {{- if .Values.env.secrets }}
        - secretRef:
            name: {{ .Values.appName }}-env-secret
        {{- end }}
        {{- end }}
        ports:
        - containerPort: {{ .Values.containerPort }}
          protocol: TCP
        {{- if .Values.railsContainerHealthChecks }}
{{ toYaml .Values.railsContainerHealthChecks | indent 8 }}
        {{- end }}
      {{- if .Values.hasSecretVolume }}
      volumes:
      - name: {{ .Values.appName }}-volume-sec
        secret:
          secretName: {{ .Values.appName }}-volume-sec
      {{- end }}
      {{- if .Values.imageCredentials }}
      imagePullSecrets:
      - name: {{ .Values.imageCredentials.secretName }}
      {{- end }}
For instance, this condition checks for a secret volume and mounts it:
{{- if .Values.hasSecretVolume }}
In case you are interested in helm and generic templating, you can refer to this medium blog: https://medium.com/srendevops/helm-generic-spring-boot-templates-c9d9800ddfee
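For completeness, a values.yaml sketch that would drive the conditions in the template above could look like this; all concrete values are illustrative:
appName: my-app
namespace: default
replicaCount: 2
image:
  repository: registry.example.com/my-app
  tag: "1.0.0"
  pullPolicy: IfNotPresent
containerPort: 8080
hasSecretVolume: false
secretVolumeMountPath: /etc/secrets
env:
  configMap: true
  secrets: false
# imageCredentials and railsContainerHealthChecks are optional;
# leaving them unset skips the corresponding blocks in the template.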
I'm running a migration Job as a pre-install hook, so I also created a Secret with DB values as a pre-install hook with a lower weight (it should run before the migration), and everything works fine: both the secret and the migration run. The problem is that the secret is deleted afterwards, which causes the regular pods to fail because they can't find the secret, and I can't figure out why.
apiVersion: v1
kind: Secret
metadata:
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.secrets.name }}
    chart: {{ .Values.secrets.name }}
  name: {{ .Values.secrets.name }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"
type: Opaque
data:
  {{- range $key, $val := .Values.secrets.values }}
  {{ $key }}: {{ $val }}
  {{- end }}
This is what the migration job looks like:
kind: Job
metadata:
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.migration.name }}
    chart: {{ .Values.migration.name }}
  name: {{ .Values.migration.name }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ .Values.migration.name }}
        release: {{ .Values.migration.name }}
    spec:
      containers:
        # other container config values
        env:
          - name: APP_ROLE
            value: {{ .Values.migration.role | quote }}
        envFrom:
          - secretRef:
              name: {{ .Values.secrets.name }}
      restartPolicy: Never
You've been caught using chart hooks in a way that's not really intended.
Have a look at the official helm docs for chart hooks here: Helm Docs
Scroll to the very bottom, to "Hook Deletion Policies", and you'll read:
If no hook deletion policy annotation is specified, the before-hook-creation behavior applies by default.
What happens is that Helm runs the hook that creates the secret; it creates it, succeeds, goes on to run the next hook (your migration), and deletes the secret again before executing that.
Hooks are not intended to create resources that stay around. You could try to hack your way around it by setting a hook-delete-policy of hook-failed on the secret, but I'm not really sure what the outcome will be.
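That workaround would look roughly like this on the Secret; only the added hook-delete-policy line differs from the manifest above, and whether it behaves as hoped is untested:
annotations:
  "helm.sh/hook": pre-install,pre-upgrade
  "helm.sh/hook-weight": "-5"
  # delete the hook resource only if the hook failed,
  # so a successful run leaves the secret in place
  "helm.sh/hook-delete-policy": hook-failed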
Ideally, you don't run your app's migration job as a hook at all, but in an init container of your app. That way, you create the secret normally, without a hook, and both the init container and the app can reuse the same secret.
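A minimal sketch of that approach; the image name and migration command are placeholders, and the Deployment's pod spec is assumed to reference the same secret:
spec:
  template:
    spec:
      initContainers:
        - name: migrate
          image: my-app:latest             # placeholder: usually the same image as the app
          command: ["/app/run-migrations"] # placeholder: your migration entrypoint
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}
      containers:
        - name: app
          image: my-app:latest             # placeholder
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}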
I am trying to pass a toleration when deploying a chart located in stable. The toleration should be applied to a specific YAML file in the templates directory, NOT to the values.yaml file, as it does by default.
I've applied it using patch, and I can see that the change I need would work if it were applied to the right service, which is a DaemonSet.
Currently I'm trying "helm install -f tolerations.yaml --name release_here"
This simply creates a one-off entry when running get chart release_here, and it does not end up in the correct service YAML.
Quoting your requirement: "The toleration should be applied to a specific YAML file in the templates directory."
First, in order to make that happen, your particular Helm chart needs to allow such end-user customization.
Here is an example based on the stable/kiam chart.
Definition of kiam/templates/server-daemonset.yaml
{{- if .Values.server.enabled -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: {{ template "kiam.name" . }}
    chart: {{ template "kiam.chart" . }}
    component: "{{ .Values.server.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kiam.fullname" . }}-server
spec:
  selector:
    matchLabels:
      app: {{ template "kiam.name" . }}
      component: "{{ .Values.server.name }}"
      release: {{ .Release.Name }}
  template:
    metadata:
      {{- if .Values.server.podAnnotations }}
      annotations:
{{ toYaml .Values.server.podAnnotations | indent 8 }}
      {{- end }}
      labels:
        app: {{ template "kiam.name" . }}
        component: "{{ .Values.server.name }}"
        release: {{ .Release.Name }}
{{- if .Values.server.podLabels }}
{{ toYaml .Values.server.podLabels | indent 8 }}
{{- end }}
    spec:
      serviceAccountName: {{ template "kiam.serviceAccountName.server" . }}
      hostNetwork: {{ .Values.server.useHostNetwork }}
      {{- if .Values.server.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.server.nodeSelector | indent 8 }}
      {{- end }}
      tolerations: <---- TOLERATIONS !
{{ toYaml .Values.server.tolerations | indent 8 }}
      {{- if .Values.server.affinity }}
      affinity:
{{ toYaml .Values.server.affinity | indent 10 }}
      {{- end }}
      volumes:
        - name: tls
Override the default values.yaml with your custom values to set the toleration in the Pod spec of the DaemonSet.
server:
  enabled: true
  tolerations: ## Agent container resources
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: foo.bar.com/role
                operator: In
                values:
                  - master
Render the resulting manifest file to see how it would look when overriding the default values in a helm install/upgrade command with the --values/--set argument:
helm template --name my-release . -x templates/server-daemonset.yaml --values custom-values.yaml
Rendered file (output truncated):
---
# Source: kiam/templates/server-daemonset.yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: kiam
    chart: kiam-2.5.1
    component: "server"
    heritage: Tiller
    release: my-release
  name: my-release-kiam-server
spec:
  selector:
    matchLabels:
      app: kiam
      component: "server"
      release: my-release
  template:
    metadata:
      labels:
        app: kiam
        component: "server"
        release: my-release
    spec:
      serviceAccountName: my-release-kiam-server
      hostNetwork: false
      tolerations:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: foo.bar.com/role
                    operator: In
                    values:
                      - master
      volumes:
      ...
I hope this will help you to solve your problem.
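As a further illustration, if you wanted actual tolerations rather than the empty key shown above, the custom values could be filled in along these lines; the taint key, value, and effect are placeholders for whatever taint your nodes carry:
server:
  enabled: true
  tolerations:
    - key: "dedicated"      # placeholder taint key
      operator: "Equal"
      value: "master"       # placeholder taint value
      effect: "NoSchedule"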