Get helm rollback image version - kubernetes

I'm using C# code to run the command helm rollback <ImageName> to roll back to the previous version (by creating a new Process() that invokes helm).
Is there a way to know the tag of the image the command rolled back to?

Environment
Small cluster with 2 helm charts deployed:
ingress-nginx
traefik
helm v3.7.0 is used.
yq was also installed to work with YAML output, similar to the way jq works with JSON.
Rollback logic
If no revision is specified, the rollback is performed to the previous revision. If rollback is run again without a revision, the then-previous revision is used again.
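For example, both forms are valid (using the traefik release whose history is shown below):
$ helm rollback traefik      # roll back to the previous revision
$ helm rollback traefik 2    # roll back explicitly to revision 2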
$ helm history traefik
REVISION  UPDATED                   STATUS      CHART           APP VERSION  DESCRIPTION
1         Tue Oct 12 11:28:22 2021  superseded  traefik-10.3.5  2.5.3        Install complete
2         Tue Oct 12 11:42:47 2021  superseded  traefik-10.3.6  2.5.3        Upgrade complete
3         Tue Oct 12 11:44:32 2021  superseded  traefik-10.3.2  2.5.1        Upgrade complete
4         Tue Oct 12 12:03:26 2021  superseded  traefik-10.3.6  2.5.3        Rollback to 2
5         Tue Oct 12 13:26:02 2021  deployed    traefik-10.3.2  2.5.1        Rollback to 3
6         Tue Oct 12 13:26:53 2021  deployed    traefik-10.3.6  2.5.3        Rollback to 4
So to figure out the details of the rolled-back revision, we can inspect the current revision. In the same way, details of any other revision can be inspected with the --revision XX flag, for example:
$ helm get manifest traefik --revision 3
Answer
After some research I found at least 3 places from which this information can be retrieved:
From the manifest that was applied (the most precise approach):
$ helm get manifest ingress-nginx -n ingress-nginx | yq eval '.spec.template.spec.containers[].image' - | grep -oE '[v]?[0-9]\.[0-9]\.[0-9]'
v1.0.0
$ helm get manifest traefik | yq eval '.spec.template.spec.containers[].image' - | grep -oE '[v]?[0-9]\.[0-9]\.[0-9]'
2.5.3
yq is used in these examples because helm get manifest provides output only in YAML:
$ helm get manifest --help
This command fetches the generated manifest for a given release.
A manifest is a YAML-encoded representation of the Kubernetes
resources that were generated from this release's chart(s). If a chart
is dependent on other charts, those resources will also be included in
the manifest.
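If the chart renders several resource kinds, the output of the commands above can get noisy. A small sketch (assuming yq v4, whose select works across multi-document YAML) that narrows it to the Deployment of a specific revision:
$ helm get manifest traefik --revision 3 \
    | yq eval 'select(.kind == "Deployment") | .spec.template.spec.containers[].image' -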
From values (does not always work; it depends on the chart and on whether the image details live in values.yaml or were set with the --set flag):
$ helm get values ingress-nginx --all -n ingress-nginx -o json | jq '.controller.image.tag'
"v1.0.0"
$ helm get values traefik --all -o json | jq '.controller.image.tag'
null
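Since the path to the tag differs per chart, here is a rough way to locate where (if anywhere) a chart keeps a tag key in its values tree; this assumes jq is available and that the key is literally named "tag", which may not hold for every chart:
$ helm get values traefik --all -o json | jq -c 'paths | select(.[-1] == "tag")'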
From Kubernetes secrets (most difficult):
All revisions are stored as secrets in the same namespace where the chart is deployed, for instance:
$ kubectl get secrets
NAME                           TYPE                DATA  AGE
sh.helm.release.v1.traefik.v1  helm.sh/release.v1  1     134m
sh.helm.release.v1.traefik.v2  helm.sh/release.v1  1     119m
sh.helm.release.v1.traefik.v3  helm.sh/release.v1  1     118m
sh.helm.release.v1.traefik.v4  helm.sh/release.v1  1     99m
sh.helm.release.v1.traefik.v5  helm.sh/release.v1  1     16m
sh.helm.release.v1.traefik.v6  helm.sh/release.v1  1     15m
The way the data is stored in these secrets is even more complicated; however, the image tag can be retrieved from them as well. The provided link contains details on how to extract data from these secrets.
Here's the command to decode one of the secrets:
kubectl get secrets sh.helm.release.v1.wordpress.v1 -o json | jq .data.release | tr -d '"' | base64 -d | base64 -d | gzip -d
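As a rough extension of that command (a sketch that assumes the decoded release JSON exposes a .manifest field with the rendered templates, and reuses jq/yq from above), the image of a particular traefik revision could be pulled straight out of its secret:
kubectl get secret sh.helm.release.v1.traefik.v3 -o json \
  | jq -r '.data.release' \
  | base64 -d | base64 -d | gzip -d \
  | jq -r '.manifest' \
  | yq eval '.spec.template.spec.containers[].image' -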
Useful links:
helm get manifest
helm get values

Related

Where does Helm store installation state?

When you run a helm install command, Helm outputs information like the revision of this installation.
Where does Helm store this information? (I assume it's in the cluster somewhere.)
Depends on configuration
I found the answer in the docs.
Helm 3 changed the default release information storage to Secrets in the namespace of the release.
https://helm.sh/docs/topics/advanced/#storage-backends
It goes on to say that you can configure it to instead store that state in a ConfigMap or in a PostgreSQL database.
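As a minimal sketch of that configuration (the backend is selected via the HELM_DRIVER environment variable described on the same page; my-namespace is just the example namespace used below):
$ HELM_DRIVER=configmap helm list --namespace my-namespace   # read release state from ConfigMaps instead of Secrets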
So by default, kubectl get secret --namespace my-namespace will include an entry like
sh.helm.release.v1.st.v1 helm.sh/release.v1 1 13m
And kubectl describe secret sh.helm.release.v1.st.v1 will output something like
Name: sh.helm.release.v1.st.v1
Namespace: my-namespace
Labels: modifiedAt=1613580504
name=st
owner=helm
status=deployed
version=1
Annotations: <none>
Type: helm.sh/release.v1
The storage is changed in Helm 3 as follows:
Releases are stored as Secrets by default (it could use PostgreSQL).
Storage is in the namespace of the release.
Naming is changed to sh.helm.release.v1.<release_name>.v<revision_version>.
The Secret type is set as helm.sh/release.v1.
List installed Helm charts:
$ helm ls --all-namespaces
NAME         NAMESPACE  REVISION  UPDATED                                  STATUS    CHART              APP VERSION
chrt-foobar  default    2         2019-10-14 15:18:31.529138228 +0100 IST  deployed  chrt-foobar-0.1.0  1.16.0
chrt-test    test       1         2019-10-14 15:20:28.196338611 +0100 IST  deployed  chrt-test-0.1.0    1.16.0
List Helm release history:
$ kubectl get secret -l "owner=helm" --all-namespaces
NAMESPACE  NAME                               TYPE                DATA  AGE
default    sh.helm.release.v1.chrt-foobar.v1  helm.sh/release.v1  1     3m2s
default    sh.helm.release.v1.chrt-foobar.v2  helm.sh/release.v1  1     2m40s
test       sh.helm.release.v1.chrt-test.v1    helm.sh/release.v1  1     43s
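To narrow that list down to a single release, the labels Helm puts on these Secrets (name, owner, status, version) can be used as a filter; a small sketch for the chrt-foobar release from above:
$ kubectl get secret -n default -l "owner=helm,name=chrt-foobar"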
There were two parts to Helm in Helm 2: the Helm client (helm) and the Helm server (Tiller), which was removed in Helm 3.
When we run helm init, it installs the Tiller component on the Kubernetes cluster. You can confirm the installation with:
kubectl get pods --namespace kube-system
# you should see Tiller running
Where does Helm store this information? (I assume it's in the cluster somewhere.)
As the storage-backends documentation puts it:
By default, Tiller stores release information in ConfigMaps in the namespace where it is running; the new version also supports an SQL storage backend for release information.
To get the release information:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
Then check the release info from the ConfigMap:
kubectl get configmap -n kube-system -o yaml myapp.v2
how-helm-uses-configmaps-to-store-data

Can I purge history revision for a deployment in kubernetes?

I would like to control the number of revisions returned by this command in my k8s cluster:
kubectl rollout history deployment.v1.apps/<<my_deployment>>
Here is what I have:
REVISION CHANGE-CAUSE
10 set app version to 1.1.10
11 set app version to 1.1.11
12 set app version to 1.1.12
13 set app version to 1.1.13
14 set app version to 1.1.14
15 set app version to 1.1.15
16 set app version to 1.1.16
17 set app version to 1.1.17
18 set app version to 1.1.18
19 set app version to 1.1.19
20 set app version to 1.1.20
21 set app version to 1.1.21
I would like to have only:
21 set app version to 1.1.21
Is there a magical command like:
kubectl rollout history clean deployment.v1.apps/<<my_deployment>>
Yes, as per the documentation, it can be done by setting .spec.revisionHistoryLimit in your Deployment to 0:
Clean up Policy
You can set .spec.revisionHistoryLimit field in a Deployment to
specify how many old ReplicaSets for this Deployment you want to
retain. The rest will be garbage-collected in the background. By
default, it is 10.
Note: Explicitly setting this field to 0, will result in cleaning up all the history of your Deployment thus that Deployment
will not be able to roll back.
The easiest way to do it is by patching your Deployment. It can be done by executing the following command:
kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 0}]'
Then you can set it back to the previous value:
kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 10}]'
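A quick way to check what the field is currently set to (a small sketch, reusing the same nginx-deployment name):
$ kubectl get deployment nginx-deployment -o jsonpath='{.spec.revisionHistoryLimit}'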
UPDATE:
Thx, I have already try this. The history revision table is still
present. The only way I have found is to delete the deployment
configuration. – Sunitrams 20 mins ago
Are you sure you did it the same way? 🤔 Take a quick look at how it works in my case:
$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
4 <none>
After running:
$ kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 0}]'
deployment.apps/nginx-deployment patched
the revision history is reduced to the latest one:
$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
4 <none>
When I set .spec.revisionHistoryLimit back to 10:
$ kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 10}]'
deployment.apps/nginx-deployment patched
there is still only latest revision:
$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
4 <none>
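This works because the rollout history is derived from the Deployment's old ReplicaSets; once .spec.revisionHistoryLimit is 0 they are garbage-collected, and their revisions disappear with them. A rough way to see that (assuming the Deployment's pods carry an app=nginx label, which depends on your manifest):
$ kubectl get replicaset -l app=nginx    # after the patch, only the current ReplicaSet remains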

FAILED cassandra-operator-1.0.0

I am getting the below error while setting up cassandra-operator.
$ helm list
NAME         REVISION  UPDATED                   STATUS  CHART                     APP VERSION  NAMESPACE
r8e789c134d  1         Fri Jun 26 07:00:39 2020  FAILED  cassandra-operator-1.0.0  1.0.1        apiconnect
$ helm status r8e789c134d
LAST DEPLOYED: Fri Jun 26 07:00:39 2020
NAMESPACE: apiconnect
STATUS: FAILED
NOTES:
1. Get the cassandra-cluster admin URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace apiconnect -l "app=cassandra-operator,release=r8e789c134d" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:1770 to use your application"
kubectl port-forward $POD_NAME 1770:1770
$ export POD_NAME=$(kubectl get pods --namespace apiconnect -l "app=cassandra-operator,release=r8e789c134d" -o jsonpath="{.items[0].metadata.name}")
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
template was:
{.items[0].metadata.name}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
Please let me know if there is any issue in the setup.
Thanks in advance.

How to get which user has created a revision of a resource?

I can get all revisions of a resource my_resource
$ helm history my_resource
It gives me this output:
REVISION  UPDATED                   STATUS      CHART              DESCRIPTION
1         Thu Jun 2 11:25:22 2018   SUPERSEDED  my_resource-1.0.0  Install complete
2         Mon Jun 6 15:11:50 2018   SUPERSEDED  my_resource-1.0.1  Upgrade complete
3         Tue Jun 11 18:40:55 2018  SUPERSEDED  my_resource-1.0.2  Upgrade complete
4         Thu Oct 9 16:12:45 2018   DEPLOYED    my_resource-1.0.3  Upgrade complete
Is there any way to get a username/account which created a specific revision?
By default, Helm tracks deployed releases using the Tiller component, which is installed in the kube-system namespace.
It has the following jobs:
- Answer requests from Helm clients
- Expand and render charts into a set of Kubernetes resources
- Manage releases
When we run helm list, Tiller shows us all of the releases. And we can use helm history to see all of the revisions for a given release.
Tiller stores all of this information in Kubernetes ConfigMap objects. And those objects are located in the same namespace as Tiller.
Release list:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
NAME DATA AGE
elastic1.v1 1 57m
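To see every revision of one particular release, the same Tiller labels can be used as a filter; a small sketch for the elastic1 release from above:
kubectl get configmap -n kube-system -l "OWNER=TILLER,NAME=elastic1"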
kubectl get configmap -n kube-system -l "OWNER=TILLER" -o yaml
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-05T08:54:50Z
  labels:
    MODIFIED_AT: "1538731409"
    NAME: elastic1
    OWNER: TILLER
    STATUS: DEPLOYED
    VERSION: "1"
  name: elastic1.v1
  namespace: kube-system
  resourceVersion: "103223"
  selfLink: /api/v1/namespaces/kube-system/configmaps/elastic1.v1
  uid: 5170941d-c87c-11e8-aa86-42010a840002
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Good article: click here
Also, there is an open proposal on GitHub to add an additional label like release owner into helm ls command: github
Hope it will help you in further investigations.

How helm rollback works in kubernetes?

While going through the Helm documentation, I came across the rollback feature.
It's a cool feature, but I have some doubts about its implementation.
How have they implemented it? If they use some datastore to preserve old release configs, which datastore is it?
Is there any upper limit on consecutive rollbacks? If so, up to how many rollbacks does it support? Can we change this limit?
As the documentation says, it rolls back the entire release. Helm generally stores release metadata in its own ConfigMaps. Every time you release changes, it appends them to the existing data. Your changes can include a new deployment image, new ConfigMaps, storage, etc. On rollback, everything goes back to the previous version.
Helm 3 changed the default release information storage to Secrets in the namespace of the release. Following helm documentation should provide some of the details in this regard:
https://helm.sh/docs/topics/advanced/#storage-backends
For example (only for illustration purpose here) -
$ helm install test-release-1 .
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:27:53 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
We can now see the history and secret information for the above release as follows:
$ helm history test-release-1
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Sun Feb 20 13:27:53 2022 deployed fleetman-helm-chart-test-1-0.1.0 1.16.0 Install complete
$ kubectl get secrets
NAME TYPE DATA AGE
sh.helm.release.v1.test-release-1.v1 helm.sh/release.v1 1 41s
$ kubectl describe secrets sh.helm.release.v1.test-release-1.v1
Name: sh.helm.release.v1.test-release-1.v1
Namespace: default
Labels: modifiedAt=1645363673
name=test-release-1
owner=helm
status=deployed
version=1
Annotations: <none>
Type: helm.sh/release.v1
Data
====
release: 1924 bytes
Now, it is upgraded to a new version as follows:
$ helm upgrade test-release-1 .
Release "test-release-1" has been upgraded. Happy Helming!
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:30:26 2022
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
Following is the updated information in Kubernetes Secrets:
$ kubectl get secrets
NAME TYPE DATA AGE
sh.helm.release.v1.test-release-1.v1 helm.sh/release.v1 1 2m53s
sh.helm.release.v1.test-release-1.v2 helm.sh/release.v1 1 20s
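Regarding the upper limit on rollbacks: each rollback simply records a new revision, and Helm 3 keeps at most --history-max revisions per release (10 by default, 0 for no limit), so older revisions eventually stop being available as rollback targets. A small sketch with the release from above:
$ helm upgrade test-release-1 . --history-max 20    # keep up to 20 revisions instead of the default 10
$ helm rollback test-release-1 1                    # rolls back to revision 1, recorded as revision 3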