While going through the Helm documentation, I came across the rollback feature.
It's a cool feature, but I have some doubts about its implementation.
How is it implemented? If it uses some datastore to preserve old release configs, which datastore is it?
Is there an upper limit on consecutive rollbacks? If so, up to how many rollbacks does it support? Can we change this limit?
As the documentation says, it rolls back the entire release. Helm stores release metadata in its own ConfigMaps (in Helm 2). Every time you release changes, it appends the new revision to the existing data. Your changes can include a new deployment image, new ConfigMaps, storage, etc. On rollback, everything goes back to the previous version.
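To make that concrete, here is a minimal sketch of the relevant commands (the release name is just a placeholder). Regarding the limit question: the number of revisions Helm keeps per release, and therefore how far back you can roll, is bounded by the --history-max flag (default 10 in Helm 3; 0 means no limit):

# roll back to the immediately previous revision
$ helm rollback my-release

# roll back to a specific revision shown by helm history my-release
$ helm rollback my-release 2

# keep more (or fewer) revisions around for future rollbacks
$ helm upgrade my-release ./my-chart --history-max 20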
Helm 3 changed the default release information storage to Secrets in the namespace of the release. The following Helm documentation provides some details in this regard:
https://helm.sh/docs/topics/advanced/#storage-backends
For example (for illustration purposes only):
$ helm install test-release-1 .
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:27:53 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
We can now see the history and Secret information for the above release as follows:
$ helm history test-release-1
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Sun Feb 20 13:27:53 2022 deployed fleetman-helm-chart-test-1-0.1.0 1.16.0 Install complete
$ kubectl get secrets
NAME TYPE DATA AGE
sh.helm.release.v1.test-release-1.v1 helm.sh/release.v1 1 41s
$ kubectl describe secrets sh.helm.release.v1.test-release-1.v1
Name: sh.helm.release.v1.test-release-1.v1
Namespace: default
Labels: modifiedAt=1645363673
name=test-release-1
owner=helm
status=deployed
version=1
Annotations: <none>
Type: helm.sh/release.v1
Data
====
release: 1924 bytes
Now, it is upgraded to a new version as follows:
$ helm upgrade test-release-1 .
Release "test-release-1" has been upgraded. Happy Helming!
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:30:26 2022
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
Following is the updated information in Kubernetes Secrets:
$ kubectl get secrets
NAME TYPE DATA AGE
sh.helm.release.v1.test-release-1.v1 helm.sh/release.v1 1 2m53s
sh.helm.release.v1.test-release-1.v2 helm.sh/release.v1 1 20s
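If we now roll back, Helm records the rollback itself as a new revision, so a third Secret shows up instead of the v2 Secret being removed. A quick sketch (ages and output abbreviated for illustration):

$ helm rollback test-release-1 1
Rollback was a success! Happy Helming!

$ kubectl get secrets
NAME                                   TYPE                 DATA   AGE
sh.helm.release.v1.test-release-1.v1   helm.sh/release.v1   1      5m12s
sh.helm.release.v1.test-release-1.v2   helm.sh/release.v1   1      2m39s
sh.helm.release.v1.test-release-1.v3   helm.sh/release.v1   1      8s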
I am following the DevOps Guy tutorial for setting up cert-manager.
Steps:
Create new kind cluster
kind create cluster --name certmanager --image kindest/node:v1.19.1
get cert-manager yaml
curl -LO https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml
Install cert-manager
kubectl apply -f cert-manager-1.0.4.yaml
Test the certificate creation process
kubectl create ns cert-manager-test
kubectl apply -f ./selfsigned/issuer.yaml
I modified the certificate to look like this (adding duration and renewBefore):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  duration: 1h
  renewBefore: 20m
  dnsNames:
    - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
Apply cert
kubectl apply -f ./selfsigned/certificate.yaml
kubectl describe certificate selfsigned-cert
It shows the following:
Spec:
  Dns Names:
    example.com
  Duration:      1h0m0s
  Issuer Ref:
    Name:  test-selfsigned
  Renew Before:  20m0s
  Secret Name:   selfsigned-cert-tls
Status:
  Conditions:
    Last Transition Time:  2021-12-14T00:35:09Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2022-03-14T00:35:09Z
  Not Before:              2021-12-14T00:35:09Z
  Renewal Time:            2022-03-14T00:15:09Z
  Revision:                1
Why is the renewal time 90 days from today? It should be about 1 hour from the time I created it (2021-12-14T00:35:09Z), as I set the duration to 1 hour!
EDIT: I actually updated to the latest cert-manager (v.1.6.1) and did the exact same steps. It seems to work. Maybe it was bug in that version. Weird!
Posted community wiki answer for better visibility based on the OP edit in the main question. Feel free to expand it.
The solution for the issue is to upgrade to the current, supported version (from the OP edit in main question):
I actually updated to the latest cert-manager (v.1.6.1) and did the exact same steps. It seems to work. Maybe it was bug in that version. Weird!
Version 1.6.1 is currently supported (as of today - 14.12.2021) until Feb 9, 2022.
Version 1.0.4 is outdated and has not been supported since Feb 10, 2021.
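For completeness, a hedged sketch of how the upgrade from the OP edit can be applied (the asset URL follows the same pattern as the v1.0.4 download in the question):

curl -LO https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml
kubectl apply -f cert-manager.yaml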
I'm using C# code to run the command helm rollback <ImageName> to roll back to the previous version (by creating a new Process() with helm).
Is there a way to know the tag of the image the command rolled back to?
Environment
Small cluster with 2 helm charts deployed:
ingress-nginx
traefik
helm v3.7.0 is used.
Also, yq was installed to work with YAML output in a similar way to how jq works with JSON.
Rollback logic
If no revision is set, the rollback will be performed to the previous revision. If rollback is run again without a revision, the then-previous revision will be used again.
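In command form (a small sketch; the release name matches the history below):

# roll back to the previous revision
$ helm rollback traefik

# roll back to an explicitly chosen revision
$ helm rollback traefik 3

The history below shows the result of several such rollbacks: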
$ helm history traefik
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Tue Oct 12 11:28:22 2021 superseded traefik-10.3.5 2.5.3 Install complete
2 Tue Oct 12 11:42:47 2021 superseded traefik-10.3.6 2.5.3 Upgrade complete
3 Tue Oct 12 11:44:32 2021 superseded traefik-10.3.2 2.5.1 Upgrade complete
4 Tue Oct 12 12:03:26 2021 superseded traefik-10.3.6 2.5.3 Rollback to 2
5 Tue Oct 12 13:26:02 2021 deployed traefik-10.3.2 2.5.1 Rollback to 3
6 Tue Oct 12 13:26:53 2021 deployed traefik-10.3.6 2.5.3 Rollback to 4
So to figure out the details of the rolled-back revision, we can use the current revision.
In the same way it can be used to find details from other revisions; the --revision XX flag is used, for example:
$ helm get manifest traefik --revision 3
Answer
After some research I found at least 3 options where this information can be retrieved:
From manifest which was applied (most precise approach):
$ helm get manifest ingress-nginx -n ingress-nginx | yq eval '.spec.template.spec.containers[].image' - | grep -oE '[v]?[0-9]\.[0-9]\.[0-9]'
v1.0.0
$ helm get manifest traefik | yq eval '.spec.template.spec.containers[].image' - | grep -oE '[v]?[0-9]\.[0-9]\.[0-9]'
2.5.3
yq is used in this example because helm get manifest provides output only in YAML:
$ helm get manifest --help
This command fetches the generated manifest for a given release.
A manifest is a YAML-encoded representation of the Kubernetes
resources that were generated from this release's chart(s). If a chart
is dependent on other charts, those resources will also be included in
the manifest.
From values (does not always work; depends on the chart and/or whether image details are located in values.yaml or were set using the --set flag):
$ helm get values ingress-nginx --all -n ingress-nginx -o json | jq '.controller.image.tag'
"v1.0.0"
$ helm get values traefik --all -o json | jq '.controller.image.tag'
null
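When the expected path returns null (as for traefik above), a rough workaround is to dump all computed values and search for image-related keys:

$ helm get values traefik --all -o yaml | grep -iA3 image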
From Kubernetes Secrets (most difficult):
All revisions are stored as Secrets in the same namespace where the chart is deployed, for instance:
$ kubectl get secrets
NAME TYPE DATA AGE
sh.helm.release.v1.traefik.v1 helm.sh/release.v1 1 134m
sh.helm.release.v1.traefik.v2 helm.sh/release.v1 1 119m
sh.helm.release.v1.traefik.v3 helm.sh/release.v1 1 118m
sh.helm.release.v1.traefik.v4 helm.sh/release.v1 1 99m
sh.helm.release.v1.traefik.v5 helm.sh/release.v1 1 16m
sh.helm.release.v1.traefik.v6 helm.sh/release.v1 1 15m
The way the data is stored in these secrets is even more complicated; however, the image tag can be retrieved from them as well. Here's a command to decode one of the secrets:
kubectl get secrets sh.helm.release.v1.wordpress.v1 -o json | jq .data.release | tr -d '"' | base64 -d | base64 -d | gzip -d
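Building on that, here is a hedged sketch for pulling the image tags out of a specific stored revision (v3 of the traefik release from the history above); the decoded payload is JSON and its manifest field holds the rendered YAML:

$ kubectl get secret sh.helm.release.v1.traefik.v3 -o json \
    | jq -r '.data.release' | base64 -d | base64 -d | gzip -d \
    | jq -r '.manifest' | grep 'image:'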
Useful links:
helm get manifest
helm get values
When you run a helm install command, Helm outputs information like the revision of this installation.
Where does Helm store this information? (I assume it's in the cluster somewhere.)
It depends on the configuration.
I found the answer in the docs.
Helm 3 changed the default release information storage to Secrets in the namespace of the release.
https://helm.sh/docs/topics/advanced/#storage-backends
It goes on to say that you can configure it to instead store that state in a ConfigMap or in a PostgreSQL database.
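As a sketch of how that is selected (based on the storage-backends page above; the release and chart names are placeholders), the backend is chosen per invocation with the HELM_DRIVER environment variable:

# default driver is secret; configmap, memory and sql are the alternatives
HELM_DRIVER=configmap helm install st ./my-chart --namespace my-namespace

# the sql driver additionally needs a connection string
HELM_DRIVER=sql HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://helm:changeme@localhost:5432/helm?sslmode=disable" helm list --namespace my-namespace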
So by default, kubectl get secret --namespace my-namespace will include an entry like
sh.helm.release.v1.st.v1 helm.sh/release.v1 1 13m
And kubectl describe secret sh.helm.release.v1.st.v1 will output something like
Name: sh.helm.release.v1.st.v1
Namespace: my-namespace
Labels: modifiedAt=1613580504
name=st
owner=helm
status=deployed
version=1
Annotations: <none>
Type: helm.sh/release.v1
The storage is changed in Helm 3 as follows:
Releases are stored as Secrets by default (it could use PostgreSQL).
Storage is in the namespace of the release.
Naming is changed to sh.helm.release.v1.<release_name>.v<revision_version>.
The Secret type is set as helm.sh/release.v1.
List installed helm Charts:
$ helm ls --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
chrt-foobar default 2 2019-10-14 15:18:31.529138228 +0100 IST deployed chrt-foobar-0.1.0 1.16.0
chrt-test test 1 2019-10-14 15:20:28.196338611 +0100 IST deployed chrt-test-0.1.0 1.16.0
List helm releases history
$ kubectl get secret -l "owner=helm" --all-namespaces
NAMESPACE NAME TYPE DATA AGE
default sh.helm.release.v1.chrt-foobar.v1 helm.sh/release.v1 1 3m2s
default sh.helm.release.v1.chrt-foobar.v2 helm.sh/release.v1 1 2m40s
test sh.helm.release.v1.chrt-test.v1 helm.sh/release.v1 1 43s
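To narrow that down to a single release, the owner and name labels that Helm sets on these Secrets can be combined; a small sketch using the release above (output abbreviated):

$ kubectl get secret -n default -l "owner=helm,name=chrt-foobar"
NAME                                TYPE                 DATA   AGE
sh.helm.release.v1.chrt-foobar.v1   helm.sh/release.v1   1      3m2s
sh.helm.release.v1.chrt-foobar.v2   helm.sh/release.v1   1      2m40s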
There are two parts to Helm in Helm 2: the Helm client (helm) and the Helm server (Tiller), which was removed in Helm 3.
When we run helm init, it installs the Tiller part on the Kubernetes cluster. You can confirm the installation with:
kubectl get pods --namespace kube-system
# you should see Tiller running
As for:
Where does Helm store this information? (I assume it's in the cluster somewhere.)
By default, Tiller stores release information in ConfigMaps in the namespace where it is running. The new version also supports a SQL storage backend for release information.
See: storage-backends
To get release information:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
Then check the release info from the ConfigMap:
kubectl get configmap myapp.v2 -n kube-system -o yaml
how-helm-uses-configmaps-to-store-data
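For a rough sketch of peeking inside one of those Tiller ConfigMaps (myapp.v2 from above is a placeholder name; the release field is a base64-encoded, gzipped protobuf blob, so only parts of it are human-readable):

kubectl get configmap myapp.v2 -n kube-system -o jsonpath='{.data.release}' \
    | base64 -d | gzip -d | strings | less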
I am using Helm v3.3.0 with Kubernetes 1.16.
The cluster has the Kubernetes Service Catalog installed, so external services implementing the Open Service Broker API spec can be instantiated as K8S resources - as ServiceInstances and ServiceBindings.
ServiceBindings reflect as K8S Secrets and contain the binding information of the created external service. These secrets are usually mapped into the Docker containers as environment variables or volumes in a K8S Deployment.
Now I am using Helm to deploy my Kubernetes resources, and I read here that...
The [Helm] install order of Kubernetes types is given by the enumeration InstallOrder in kind_sorter.go
In that file, the order mentions neither ServiceInstance nor ServiceBinding as resources, which would mean that Helm installs these resource types after it has installed everything in its InstallOrder list, in particular Deployments. That seems to match the output of helm install --dry-run --debug run on my chart, where the order indicates that the K8S Service Catalog resources are applied last.
Question: What I cannot understand is, why my Deployment does not fail to install with Helm.
After all, my Deployment resource seems to be deployed before the ServiceBinding is, and it is the Secret generated from the ServiceBinding that my Deployment references. I would expect it to fail, since the Secret is not there yet when the Deployment is installed. But that is not the case.
Is that just a timing glitch / lucky coincidence, or is this something I can rely on, and why?
Thanks!
As said in the comment I posted:
In fact, your Deployment is failing at the start with Status: CreateContainerConfigError. Your Deployment is created before the Secret from the ServiceBinding. It's only working because the Pod was recreated when the Secret from the ServiceBinding became available.
I wanted to give more insight, with an example, into why the Deployment didn't fail.
What is happening (simplified, in order):
Deployment -> created and spawns a Pod
Pod -> fails with status CreateContainerConfigError due to the missing Secret
ServiceBinding -> creates the Secret in the background
Pod gets the required Secret and starts
The previously mentioned InstallOrder leaves ServiceInstance and ServiceBinding until last, as per the comment on line 147.
Example
Assuming that:
There is a working Kubernetes cluster
Helm3 installed and ready to use
Following guides:
Kubernetes.io: Install Service Catalog using Helm
Magalix.com: Blog: Kubernetes Service Catalog
There is a Helm chart with following files in templates/ directory:
ServiceInstance
ServiceBinding
Deployment
Files:
ServiceInstance.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-instance
spec:
  clusterServiceClassExternalName: redis
  clusterServicePlanExternalName: 5-0-4
ServiceBinding.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-binding
spec:
  instanceRef:
    name: example-instance
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command:
            - sleep
            - "infinity"
          # part below responsible for getting the secret as an env variable
          env:
            - name: DATA
              valueFrom:
                secretKeyRef:
                  name: example-binding
                  key: host
Applying the above resources to check what is happening can be done in 2 ways:
The first method is to use the timestamps from $ kubectl get RESOURCE -o yaml
The second method is to use $ kubectl get RESOURCE --watch-only=true
First method
As said previously, the Pod from the Deployment couldn't start because the Secret was not available when the Pod tried to spawn. After the Secret became available, the Pod started.
The statuses this Pod had were the following:
Pending
ContainerCreating
CreateContainerConfigError
Running
This is a table with timestamps of Pod and Secret:
| Pod | Secret |
|-------------------------------------------|-------------------------------------------|
| creationTimestamp: "2020-08-23T19:54:47Z" | - |
| - | creationTimestamp: "2020-08-23T19:54:55Z" |
| startedAt: "2020-08-23T19:55:08Z" | - |
You can get these timestamps by invoking the commands below:
$ kubectl get pod pod_name -n namespace -o yaml
$ kubectl get secret secret_name -n namespace -o yaml
You can also get additional information with:
$ kubectl get event -n namespace
$ kubectl describe pod pod_name -n namespace
Second method
This method requires preparation before running the Helm chart. Open additional terminal windows (for this particular case, 2) and run:
$ kubectl get pod -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
$ kubectl get secret -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
After that apply your Helm chart.
Disclaimer!
The above commands will watch for changes in resources and display them with a timestamp from the OS. Please remember that these commands are only for example purposes.
The output for Pod:
21:54:47:534823000 NAME READY STATUS RESTARTS AGE
21:54:47:542107000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:553799000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:655593000 ubuntu-65976bb789-l48wz 0/1 ContainerCreating 0 0s
-> 21:54:52:001347000 ubuntu-65976bb789-l48wz 0/1 CreateContainerConfigError 0 4s
21:55:09:205265000 ubuntu-65976bb789-l48wz 1/1 Running 0 22s
The output for Secret:
21:54:47:385714000 NAME TYPE DATA AGE
21:54:47:393145000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:47:719864000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:51:182609000 understood-squid-redis Opaque 1 0s
21:54:52:001031000 understood-squid-redis Opaque 1 0s
-> 21:54:55:686461000 example-binding Opaque 6 0s
Additional resources:
Stackoverflow.com: Answer: Helm install in certain order
Alibabacloud.com: Helm charts and templates hooks and tests part 3
So to answer my own question (and thanks to #dawid-kruk and the folks on the Service Catalog SIG on Slack):
In fact, the initial start of my Pods (the ones referencing the Secret created out of the ServiceBinding) fails! It fails because the Secret is actually not there the moment K8S tries to start the pods.
Kubernetes has a self-healing mechanism, in the sense that it tries (and retries) to reach the target state of the cluster as described by the various deployed resources.
Because Kubernetes keeps retrying to get the pods running, eventually (when the Secret is finally there) all conditions are satisfied and the pods start up nicely. Therefore, eventually, everything is running as it should.
How could this be streamlined? One possibility would be for Helm to include the custom resources ServiceBinding and ServiceInstance into its ordered list of installable resources and install them early in the installation phase.
But even without that, Kubernetes actually deals with it just fine. The order of installation (in this case) really does not matter. And that is a good thing!
I can get all revisions of a release my_resource:
$ helm history my_resource
It gives me the following output:
REVISION UPDATED STATUS CHART DESCRIPTION
1 Thu Jun 2 11:25:22 2018 SUPERSEDED my_resource-1.0.0 Install complete
2 Mon Jun 6 15:11:50 2018 SUPERSEDED my_resource-1.0.1 Upgrade complete
3 Tue Jun 11 18:40:55 2018 SUPERSEDED my_resource-1.0.2 Upgrade complete
4 Thu Oct 9 16:12:45 2018 DEPLOYED my_resource-1.0.3 Upgrade complete
Is there any way to get a username/account which created a specific revision?
By default, Helm (v2) tracks deployed releases using the Tiller component, which is installed in the kube-system namespace.
It has the following jobs:
- Answer requests from Helm clients
- Expand and render charts into a set of Kubernetes resources
- Manage releases
When we run helm list, Tiller shows us all of the releases. And we can use helm history to see all of the revisions for a given release.
Tiller stores all of this information in Kubernetes ConfigMap objects. And those objects are located in the same namespace as Tiller.
Release list:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
NAME DATA AGE
elastic1.v1 1 57m
kubectl get configmap -n kube-system -l "OWNER=TILLER" -o yaml
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-05T08:54:50Z
  labels:
    MODIFIED_AT: "1538731409"
    NAME: elastic1
    OWNER: TILLER
    STATUS: DEPLOYED
    VERSION: "1"
  name: elastic1.v1
  namespace: kube-system
  resourceVersion: "103223"
  selfLink: /api/v1/namespaces/kube-system/configmaps/elastic1.v1
  uid: 5170941d-c87c-11e8-aa86-42010a840002
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Also, there is an open proposal on GitHub to add an additional label, like release owner, to the helm ls command.
Hope this helps you in further investigations.