In Helm, what is the difference between a release and a revision? - kubernetes-helm

Both terms appear in the official documentation in very similar contexts. What is the difference, if any?

A release in Helm is an instance of a chart running in a Kubernetes cluster. A revision is linked to a release and tracks the number of updates/changes that release has gone through. Say we have a chart named minio and we want to install two instances of it into our cluster; we run:
helm install myrelease minio
helm install myrelease2 minio
where myrelease and myrelease2 are the release names we give to each release. Now we have two running instances of minio in our cluster. Looking at the pods created via kubectl get pods, we see:
NAME READY STATUS RESTARTS AGE
pod/myrelease-minio-6b7bc5dfdf-5lfgq 1/1 Running 0 3m10s
pod/myrelease2-minio-b8987d769-xtlgl 1/1 Running 0 37s
where each pod has its release name as its prefix. In addition, we can run helm list to show the charts we've installed:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myrelease default 1 2022-09-14 15:50:42.625388 -0400 EDT deployed minio-0.1.0 2020.10.28
myrelease2 default 1 2022-09-14 15:53:15.478714 -0400 EDT deployed minio-0.1.0 2020.10.28
Notice that each release has its own REVISION value; this tracks any changes to a release, and REVISION=1 means the release has not been changed/updated yet. If I update myrelease, for example by upgrading the chart version or changing pod replicas, REVISION increments, in this case to 2.
helm upgrade myrelease minio --set replicas=2
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myrelease default 2 2022-09-14 15:58:12.220863 -0400 EDT deployed minio-0.1.0 2020.10.28
myrelease2 default 1 2022-09-14 15:53:15.478714 -0400 EDT deployed minio-0.1.0 2020.10.28
See how after updating the desired replicas for myrelease, the revision number is updated. Say there's an issue with this upgrade; I can simply roll back, and this also increases the REVISION count, since it tracks the number of updates to our chart:
helm rollback myrelease
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myrelease default 3 2022-09-14 16:00:39.942524 -0400 EDT deployed minio-0.1.0 2020.10.28
myrelease2 default 1 2022-09-14 15:53:15.478714 -0400 EDT deployed minio-0.1.0 2020.10.28
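As an aside, helm history shows the full revision trail of a release. For the sequence above it would look roughly like this (illustrative output; the real timestamps and descriptions will vary):
helm history myrelease
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Wed Sep 14 15:50:42 2022 superseded minio-0.1.0 2020.10.28 Install complete
2 Wed Sep 14 15:58:12 2022 superseded minio-0.1.0 2020.10.28 Upgrade complete
3 Wed Sep 14 16:00:39 2022 deployed minio-0.1.0 2020.10.28 Rollback to 1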
RELEASE is a running instance of our chart in a Kubernetes cluster.
REVISION tracks the number of changes to a release.
Hope that helps!

Related

Failed pods of previous helm release are not removed automatically

I have an application Helm chart with two deployments:
app (2 pod replicas)
app-dep (1 pod replica)
app-dep has an init container that waits for the app pods (using its labels) to be ready:
initContainers:
  - name: wait-for-app-pods
    image: groundnuty/k8s-wait-for:v1.5.1
    imagePullPolicy: Always
    args:
      - "pod"
      - "-l app.kubernetes.io/component=app"
I am using helm to deploy an application:
helm upgrade --install --wait --create-namespace --timeout 10m0s app ./app
Revision 1 of the release app is deployed:
helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
app default 1 2023-02-03 01:10:18.796554241 +1100 AEDT deployed app-0.1.0 1.0.0
Everything goes fine initially.
After some time, one of the app pods is evicted because the node runs low on available memory.
These are some lines from the pod's description details:
Status: Failed
Reason: Evicted
Message: The node was low on resource: memory. Container app was using 2513780Ki, which exceeds its request of 0.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Evicted 12m kubelet The node was low on resource: memory. Container app was using 2513780Ki, which exceeds its request of 0.
Normal Killing 12m kubelet Stopping container app
Warning ExceededGracePeriod 12m kubelet Container runtime did not kill the pod within specified grace period.
Later, a new pod is automatically added to match the deployment's replica count.
But the Failed pod still remains in the namespace.
Now comes the next helm upgrade. The app pods for release revision 2 become ready.
But the init container of app-dep in the latest revision keeps waiting for all the pods with the label app.kubernetes.io/component=app to become ready. After the 10-minute timeout, release revision 2 is declared failed.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-7595488c8f-4v42n 1/1 Running 0 7m37s
app-7595488c8f-xt4qt 1/1 Running 0 6m17s
app-86448b6cd-7fq2w 0/1 Error 0 36m
app-dep-546d897d6c-q9sw6 1/1 Running 0 38m
app-dep-cd9cfd975-w2fzn 0/1 Init:0/1 0 7m37s
ANALYSIS FOR SOLUTION:
In order to address this issue, we can try two approaches:
Approach 1:
Find and remove all the failed pods of the previous revision first, just before doing a helm upgrade.
kubectl get pods --field-selector status.phase=Failed -n default
You can do it as part of the CD pipeline or add that task as a pre-install hook job to the helm chart too.
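For instance, the cleanup itself can be a one-liner using the same field selector as above (a sketch; adjust the namespace to yours):
kubectl delete pods --field-selector status.phase=Failed -n default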
Approach 2:
Add one more label to the pods that changes on every helm upgrade (something like helm/release-revision=2).
Add that label also in the init-container so that it waits for the pods that have both labels.
It will then ignore the Failed pods of the previous release that have a different label.
initContainers:
  - name: wait-for-app-pods
    image: groundnuty/k8s-wait-for:v1.5.1
    imagePullPolicy: Always
    args:
      - "pod"
      - "-l app.kubernetes.io/component=app, helm/release-revision=2"
This approach causes frequent updates to the pod labels and therefore recreates the pods on every upgrade. Also, it is better to update the labels only in the Deployment's pod template, not its selector, because, per the official Kubernetes documentation for the Deployment resource:
It is generally discouraged to make label selector updates
Also, there is no need to add the revision label to the selector field in the service manifest.
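For illustration, the revision label could be templated from Helm's built-in .Release.Revision so that it updates automatically on each upgrade. A minimal sketch of the Deployment's pod template (names and image are placeholders; the selector stays untouched):
# templates/app-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/component: app              # selector stays stable across upgrades
  template:
    metadata:
      labels:
        app.kubernetes.io/component: app
        helm/release-revision: "{{ .Release.Revision }}"   # changes on every upgrade
    spec:
      containers:
        - name: app
          image: my-app:latest                      # placeholder image
The init container's wait argument can then use the same template instead of a hardcoded revision, e.g. -l app.kubernetes.io/component=app,helm/release-revision={{ .Release.Revision }}.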
QUESTION:
Which approach would be better practice?
What would be the caveats and benefits of each method?
Is there any other approach to fix this issue?

NGINX Controller Upgrade Using Helm

I installed NGINX Controller 2 years ago using Helm 2 in our AKS clusters, and it pulled the image from quay.io at the time:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
We are now looking to upgrade our NGINX ingress controllers, and in our new clusters I see the image repo is gcr.io:
k8s.gcr.io/ingress-nginx/controller:v1.20.0@sha256:8xxxxxxxxxxxxxxxxxxxxxxxxxxxx3
I ran the following command using Helm 3 to upgrade Kubernetes NGINX Controller to no avail in our old cluster with controller from quay.io:
helm upgrade awesome-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx -f nginx-reuse-values-file.yaml
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
The K8s version is 1.20.9.
The current quay.io NGINX ingress controller manifest shows the following apiVersion:
apiVersion: apps/v1
Well, figured it out:
https://github.com/helm/helm-mapkubeapis
The Helm mapkubeapis plugin for the win. I had to update the deprecated APIs called out in the error message in my original post. After mapping the release to the supported APIs for my K8s version, the helm upgrade ran successfully.
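For reference, the plugin flow looks roughly like this (release name and namespace taken from my original post; see the plugin README for flags such as --dry-run):
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis awesome-nginx --namespace ingress-nginx
helm upgrade awesome-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx -f nginx-reuse-values-file.yaml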

wiki.js exec user process caused: exec format error on postgres container

I'm trying to deploy a wiki.js into my K3S cluster of four RPi4.
For this, I ran these commands, following the install instructions (https://docs.requarks.io/install/kubernetes):
$ helm repo add requarks https://charts.js.wiki
$ helm repo update
$ helm install wikijs requarks/wiki
After those commands, I get the following:
NAME: wikijs
LAST DEPLOYED: Tue Jun 14 13:25:30 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
http://wiki.minikube.localmap[path:/ pathType:Prefix]
However, when I get the pods, I get the following:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
wikijs-7f6c8b9f54-lz55k 0/1 ContainerCreating 0 3s
wikijs-postgresql-0 0/1 Error 0 3s
Finally, viewing the postgres logs, I get:
$ kubectl logs wikijs-postgresql-0
standard_init_linux.go:228: exec user process caused: exec format error
I believe this is an error about an executable built for the wrong architecture, but both wikijs and postgresql support ARM64, so the right architecture should be selected when deploying the app, shouldn't it?
If I need to select the architecture manually, how can I do so? I've viewed the chart for wikijs and I can't find the place to select the postgres image.
Many thanks!
I was running into the same issue. The problem is the postgres image the chart runs on your RPi. I got this to work on my RPi 4 by using arm64v8/postgres:14 from docker.io for the postgresql StatefulSet.
I had to change this image in two places within the Helm chart:
# charts/postgresql/values.yaml
image:
  registry: docker.io
  repository: arm64v8/postgres
  tag: 14
volumePermissions:
  enabled: true
  image:
    registry: docker.io
    repository: arm64v8/postgres
    tag: 14
The latter is for the initContainer (see statefulset template within the postgresql chart).
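Alternatively, and untested: assuming the postgresql chart is wired in as a subchart keyed under postgresql in the parent chart's values, the same override could be passed at install time instead of editing the chart:
helm install wikijs requarks/wiki \
  --set postgresql.image.registry=docker.io \
  --set postgresql.image.repository=arm64v8/postgres \
  --set postgresql.image.tag=14 \
  --set postgresql.volumePermissions.enabled=true \
  --set postgresql.volumePermissions.image.registry=docker.io \
  --set postgresql.volumePermissions.image.repository=arm64v8/postgres \
  --set postgresql.volumePermissions.image.tag=14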

Cannot install kube-prometheus-stack in k8s 1.15

Running kubernetes 1.15 in azure.
I need a basic alert (e-mail/slack notification) when one or more of my applications/pods are down in kubernetes.
As an example I have https://cert-manager.io/docs/ running in multiple clusters (hosted in azure) and I would like to get an alert (e-mail/slack notification) if it stops running.
Based on this post:
How do I set up a hook to send an email on Kubernetes pod restart?
it seems that to get an e-mail alert I need to install Prometheus + Grafana, access the web UI, and configure alerts there. So, based on:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack
I have tried:
helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring
But that gives:
Error: failed to install CRD crds/crd-alertmanager.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
Here there is some guide on how to create the crds manually:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#helm-fails-to-create-crds
but that should only be necessary when running Helm 2.x, which I am not; I am running 3.1.2.
Also if I try to install them manually I get:
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
...
Also I found this kube-prometheus stack compatibility matrix:
https://github.com/prometheus-operator/kube-prometheus#compatibility
but the versions in that matrix do not match the ones I get:
$ helm search repo prometheus-community/kube-prometheus-stack --versions
NAME CHART VERSION APP VERSION DESCRIPTION
prometheus-community/kube-prometheus-stack 10.1.2 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.1.1 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.1.0 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.0.2 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.0.1 0.42.1 kube-prometheus-stack collects Kubernetes manif...
So it seems there might be a third way to install Prometheus.
Any input appreciated.
UPDATE:
Randomly selecting the previous major version (9.4.10) seems to work:
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring --version 9.4.10
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
NAME: kube-prometheus-stack
LAST DEPLOYED: Fri Oct 23 15:15:03 2020
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
Guess trial and error is the way to go when installing things on older k8s versions; compatibility matrices would be great, though.
Based on the kube-prometheus-stack repo, this Helm chart is restricted to K8s versions 1.16.0 or above:
kubeVersion: ">=1.16.0-0"
Even though the GitHub README lists the prerequisites as Kubernetes 1.10+ with Beta APIs, internally the Helm chart checks that the kube version is 1.16.0 or above. This also matches the errors above: the apiextensions.k8s.io/v1 API for CustomResourceDefinitions only exists as of Kubernetes 1.16.
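You can check this constraint for any chart version without installing it, for example:
helm show chart prometheus-community/kube-prometheus-stack --version 10.1.2 | grep kubeVersion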
So I believe you will need to try this on an upgraded K8s cluster.
If upgrading the cluster is not an option, maybe you could try the deprecated older version of this chart:
https://github.com/helm/charts/tree/master/stable/prometheus
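If you go that route, note that the old stable repo URL used above (kubernetes-charts.storage.googleapis.com) has since been decommissioned in favor of charts.helm.sh/stable; a minimal sketch:
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install prometheus stable/prometheus --namespace monitoring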

kubectl get hpa targets: unknown

I have installed kubeadm. Heapster shows me metrics, but the HPA does not:
kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
httpd Deployment/httpd <unknown> / 2% 2 5 2 19m
kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
You may need to enable a metrics-server; Heapster is now deprecated. Also make sure your Kubernetes version is greater than 1.7. You can check this by typing kubectl get nodes.
You can enable the metrics server by looking at the minikube addons.
minikube addons list gives you the list of addons.
minikube addons enable metrics-server enables metrics-server.
Wait a few minutes; then, if you type kubectl get hpa, a percentage should appear under TARGETS in place of <unknown>.
I found the solution:
kubectl describe hpa
failed to get cpu utilization: missing request for cpu on container httpd in pod default/httpd-796666570-2h1c6
Change the deployment's YAML and add:
resources:
  requests:
    cpu: 400m
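In context, the request sits under the container spec; a minimal excerpt for the httpd deployment from the question (the image tag is an assumption):
spec:
  template:
    spec:
      containers:
        - name: httpd
          image: httpd:2.4              # assumed image
          resources:
            requests:
              cpu: 400m                 # gives the HPA a baseline to compute CPU utilization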
Then kubectl describe hpa
failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster
Wait a few minutes and all works fine.
In Kubernetes the HPA can show unknown for several reasons, so there are a few places to check.
K8s 1.9 uses custom metrics, so in order for your cluster to work with Heapster you should check the kube-controller-manager and add these parameters:
--horizontal-pod-autoscaler-use-rest-clients=false
--horizontal-pod-autoscaler-sync-period=10s
based on https://github.com/kubernetes/kubernetes/issues/57673
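On a kubeadm cluster those flags would typically be added to the controller-manager's static pod manifest (a sketch; the path is kubeadm's default, and the kubelet restarts the static pod automatically once the file changes):
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-controller-manager
        - --horizontal-pod-autoscaler-use-rest-clients=false
        - --horizontal-pod-autoscaler-sync-period=10s
        # keep the existing flags as they are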
In some cases you also need to change your Heapster deployment; the --source=kubernetes:https://kubernetes.default?kubeletPort=10250&kubeletHttps=true&insecure=true parameter is enough.
I found this link very informative https://blog.inkubate.io/deploy-kubernetes-1-9-from-scratch-on-vmware-vsphere/
You have to enable the metrics server, which you can do using the Helm chart; that is an easy way to add the metrics-server:
helm install stable/metrics-server
Wait 3-4 minutes after the pods have started running, then run kubectl get hpa and the TARGETS column should show values.
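Note that with Helm 3 a release name is required, so the equivalent would be, for example:
helm install metrics-server stable/metrics-server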
Make sure your spec has this part properly configured:
metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
In my case I had name: Memory with uppercase M and that cost me a day to find out.