Helm chart restart pods when configmap changes - kubernetes

I am trying to restart the pods when there is a ConfigMap or Secret change. I have tried the same piece of code as described in: https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change
However, after updating the ConfigMap, my pod does not get restarted. Do you have any idea what could have gone wrong here?
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
{{- include "global_labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "app.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yml") . | sha256sum }}

https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments Helm 3 has this feature now: Deployments are rolled when there is a change in the ConfigMap template file.

Neither Helm nor Kubernetes provides a specific rolling update for a ConfigMap change. The workaround for a while has been to simply patch the Deployment, which triggers the rolling update:
kubectl patch deployment your-deployment -n your-namespace -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date)\"}}}}}"
And you can see the status:
kubectl rollout status deployment your-deployment
Note this works on a *nix machine. This is until this feature is added.
Update 05/05/2021
Helm and kubectl provide this now:
Helm: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
kubectl: kubectl rollout restart deploy WORKLOAD_NAME
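For example, to force a rollout and then watch it complete (using the same placeholder names as above):
kubectl rollout restart deployment your-deployment -n your-namespace
kubectl rollout status deployment your-deployment -n your-namespace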

It worked for me. Below is the code snippet from my deployment.yaml file; make sure your ConfigMap and Secret YAML files are the same as those referred to in the annotations:
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/my-configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/my-secret.yaml") . | sha256sum }}

I deployed a pod with a ConfigMap using this feature: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments.
When I edited the ConfigMap at runtime, it didn't trigger the rolling deployment.
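For context, the checksum annotation only changes when the rendered chart templates change during a helm upgrade; editing the live ConfigMap with kubectl edit does not touch the annotation, so nothing rolls. A minimal sketch of a values-driven ConfigMap template (chart and value names here are hypothetical):
# templates/configmap.yml (hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "app.fullname" . }}
data:
  config.properties: |
    logLevel={{ .Values.logLevel }}
# Changing the value and upgrading re-renders the template, which changes
# the sha256sum in the checksum/config annotation and rolls the pods:
# helm upgrade my-release ./chart --set logLevel=debug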

Related

helm: no endpoints available for service "external-secrets-webhook"

When running:
helm upgrade --install backend ./k8s "$@"
This gives me the following error (it did not happen before):
Error: UPGRADE FAILED: cannot patch "api" with kind ExternalSecret: Internal error occurred: failed calling webhook "validate.externalsecret.external-secrets.io": Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-externalsecret?timeout=5s": no endpoints available for service "external-secrets-webhook"
Any idea what this is or how to debug it? --atomic also doesn't roll back, for the same reason.
The helm config is:
{{- if .Values.awsSecret.enabled }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "application.labels" . | nindent 4 }}
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: {{ .Values.applicationName }}
    creationPolicy: Owner
  dataFrom:
    - extract:
        key: {{ .Values.awsSecret.name }}
{{- end }}
and the GitHub Actions config:
- helm/upgrade-helm-chart:
    atomic: false
    chart: ./k8s
    helm-version: v3.8.2
    release-name: backend
    namespace: default
    values: ./k8s/values-${ENV}.yaml
    values-to-override:
      "image.tag=${CIRCLE_TAG},\
      image.repository=trak-${ENV}-backend,\
      image.registry=${AWS_ECR_ACCOUNT},\
      env=${ENV},\
      applicationName=api,\
      applicationVersion=${CIRCLE_TAG}"
Thank you
I have tried setting --atomic to true, but it doesn't roll back. This morning we made a few changes to roles and permissions, but that should not affect this at all.
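One way to start debugging (a sketch; it assumes the external-secrets controller runs in the external-secrets namespace, as the service DNS name in the error suggests, and the deployment name may differ in your install):
# Is the webhook pod running and ready?
kubectl get pods -n external-secrets
# Does the webhook service have any ready endpoints behind it?
kubectl get endpoints external-secrets-webhook -n external-secrets
# Any startup or certificate errors in the webhook logs? (deployment name is an assumption)
kubectl logs -n external-secrets deploy/external-secrets-webhook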

kubernetes get endpoints in the containers

On the Kubernetes VM I run, for example: kubectl get endpoints
How can I get the same output inside a pod? What should I run within the pod?
I understand there is a kube API, but I'm new to Kubernetes. Can someone explain how I can use it?
This is my clusterrolebinding:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: {{ template "elasticsearch.fullname" . }}
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
subjects:
  - kind: ServiceAccount
    name: {{ template "elasticsearch.serviceAccountName.client" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ template "elasticsearch.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: {{ template "elasticsearch.fullname" . }}
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
rules:
  #
  # Give here only the privileges you need
  #
  - apiGroups: [""]
    resources:
      - pods
      - endpoints
    verbs:
      - get
      - watch
      - list
serviceaccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.client.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.client.fullname" . }}
You don't have to have kubectl installed in the pod to access the Kubernetes API. You will be able to do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named your-service from within a container in the cluster, you can do:
$ curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/your-service
Replace {namespace} with the namespace of the your-service Endpoints resource.
To extract the IP addresses from the returned JSON, pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
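Putting the request and the jq filter together (assuming the Endpoints object lives in the default namespace):
curl -sk -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc:443/api/v1/namespaces/default/endpoints/your-service \
  | jq -r '.subsets[].addresses[].ip'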
IMPORTANT:
The Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request will be denied.
You can do this by creating a ClusterRole, ClusterRoleBinding, and Service Account - set this up once:
$ kubectl create sa endpoint-reader-sa
$ kubectl create clusterrole endpoint-reader-cr --verb=get,list --resource=endpoints
$ kubectl create clusterrolebinding endpoint-reader-crb --serviceaccount=default:endpoint-reader-sa --clusterrole=endpoint-reader-cr
Next, use the created ServiceAccount, endpoint-reader-sa, for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
Granting permissions for other API operations works in the same way.
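A minimal Pod sketch using that ServiceAccount (the image and names here are only placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: endpoint-reader                     # hypothetical name
spec:
  serviceAccountName: endpoint-reader-sa    # the ServiceAccount created above
  containers:
    - name: curl
      image: curlimages/curl:latest         # any image that ships curl will do
      command: ["sleep", "infinity"]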
Source: get-pod-ip.
And, as @ITChap also mentioned in a similar answer: kubectl-from-inside-the-pod.

How to append Secret/ConfigMap hash prefix properly in Helm?

I want to append the hash of my Secret or ConfigMap contents to the name of the resource in order to trigger a rolling update and keep the old version of that resource around in case there is a mistake in the new configuration.
This can almost be achieved using "helm.sh/resource-policy": keep on the Secret/ConfigMap but these will never be cleaned up. Is there a way of saying 'keep all but the last two' in Helm or an alternative way of achieving this behaviour?
$ helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
Automatically Roll Deployments
In order to update the resource when a Secret or ConfigMap changes, you can add a checksum annotation to your Deployment:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
You can revert to your previous configuration with the helm rollback command.
Update:
Assuming that your ConfigMap is generated from the values.yaml file, you can add a _helper.tpl function:
{{- define "mychart.configmapChecksumed" -}}
{{ printf "configmap-%s" (.Values.bar | sha256sum) }}
{{- end }}
And use {{ include "mychart.configmapChecksumed" . }} both as the ConfigMap name and as the reference in the Deployment.
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.configmapChecksumed" . }}
  annotations:
    "helm.sh/resource-policy": keep
data:
  config.properties: |
    foo={{ .Values.bar }}
deployment.yaml
...
        volumeMounts:
          - name: config-volume
            mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            # Provide the name of the ConfigMap containing the files you want
            # to add to the container
            name: {{ include "mychart.configmapChecksumed" . }}
Please note that you have to keep the "helm.sh/resource-policy": keep annotation on the ConfigMap, telling Helm not to delete the previous versions.
You cannot use {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} as a ConfigMap name directly, because Helm rendering will fail with:
error calling include: rendering template has a nested reference name

How to see VS 2019 YAML Template Output generation

I have a YAML template in VS 2019 with variables like below.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "kubernetes1.fullname" . }}
  labels:
    app: {{ template "kubernetes1.name" . }}
    chart: {{ template "kubernetes1.chart" . }}
    draft: {{ .Values.draft | default "draft-app" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
etc.....
Now I want to see the fully generated YAML output with those variable values filled in. Is there a way?
If I understand you correctly, what you are looking for is the kubectl --dry-run flag.
Here is a link to the documentation reference for it: kubectl create.
If you use the dry-run flag, kubectl will take your YAML and process it without applying it to the cluster.
Also, if you want to see the output of that YAML, you should use -o yaml, which prints the output in YAML format.
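For example, something along these lines (a sketch; newer kubectl versions require --dry-run=client or --dry-run=server, and the file must already be rendered, plain YAML rather than an unprocessed Helm template):
kubectl create -f deployment.yaml --dry-run=client -o yaml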

Helm upgrade --install isn't picking up new changes

I'm using the command below in my CI build so that a Helm deployment happens on each build. However, I'm noticing that the changes aren't being deployed.
helm upgrade --install --force \
--namespace=default \
--values=kubernetes/values.yaml \
--set image.tag=latest \
--set service.name=my-service \
--set image.pullPolicy=Always \
myService kubernetes/myservice
Do I need to tag the image each time? Does helm not do the install if the same version exists?
You don't have to tag the image each time with a new tag. Just add
date: "{{ now | unixEpoch }}"
under spec.template.metadata.labels and set imagePullPolicy: Always. Helm will detect the change in the Deployment object and roll the pods, pulling the latest image each time:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-{{ .Values.app.frontendName }}-deployment"
  labels:
    app.kubernetes.io/name: {{ .Values.app.frontendName }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.app.frontendName }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.app.frontendName }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        date: "{{ now | unixEpoch }}"
    spec:
      containers:
        - name: {{ .Values.app.frontendName }}
          image: "rajesh12/myimage:latest"
          imagePullPolicy: Always
Run helm upgrade releaseName ./my-chart to upgrade your release
With helm 3, the --recreate-pods flag is deprecated.
Instead you can use
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
This will create a random string annotation, that always changes and causes the deployment to roll.
Helm - AUTOMATICALLY ROLL DEPLOYMENTS
Another label, perhaps more robust than the epoch seconds above, is simply the chart revision number:
...
metadata:
  ...
  labels:
    helm-revision: "{{ .Release.Revision }}"
...
Yes, you need to tag each build rather than use 'latest'. Helm does a diff between the template evaluated from your parameters and the currently deployed one. Since both say 'latest', it sees no change and doesn't apply any upgrade (unless something else changed). This is why the Helm best practices guide advises that the "container image should use a fixed tag or the SHA of the image". (See also https://docs.helm.sh/chart_best_practices/ and Helm upgrade doesn't pull new container.)
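A hedged sketch of what tagging each build could look like, using the commit SHA instead of latest (the registry and repository names here are placeholders; the Helm flags mirror the command above):
IMAGE_TAG=$(git rev-parse --short HEAD)
docker build -t my-registry/my-service:$IMAGE_TAG .
docker push my-registry/my-service:$IMAGE_TAG
helm upgrade --install \
  --namespace=default \
  --values=kubernetes/values.yaml \
  --set image.tag=$IMAGE_TAG \
  myService kubernetes/myservice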