How to find out which user created a revision of a resource? - kubernetes

I can get all revisions of a resource my_resource:
$ helm history my_resource
It gives me this output:
REVISION  UPDATED                   STATUS      CHART              DESCRIPTION
1         Thu Jun 2 11:25:22 2018   SUPERSEDED  my_resource-1.0.0  Install complete
2         Mon Jun 6 15:11:50 2018   SUPERSEDED  my_resource-1.0.1  Upgrade complete
3         Tue Jun 11 18:40:55 2018  SUPERSEDED  my_resource-1.0.2  Upgrade complete
4         Thu Oct 9 16:12:45 2018   DEPLOYED    my_resource-1.0.3  Upgrade complete
Is there any way to get a username/account which created a specific revision?

By default, Helm tracks deployed releases using a component called Tiller, which is installed in the kube-system namespace.
Tiller has the following jobs:
- Answer requests from Helm clients
- Expand and render charts into a set of Kubernetes resources
- Manage releases
When we run helm list, Tiller shows us all of the releases. And we can use helm history to see all of the revisions for a given release.
Tiller stores all of this information in Kubernetes ConfigMap objects. And those objects are located in the same namespace as Tiller.
Release list:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
NAME         DATA  AGE
elastic1.v1  1     57m
kubectl get configmap -n kube-system -l "OWNER=TILLER" -o yaml
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-05T08:54:50Z
  labels:
    MODIFIED_AT: "1538731409"
    NAME: elastic1
    OWNER: TILLER
    STATUS: DEPLOYED
    VERSION: "1"
  name: elastic1.v1
  namespace: kube-system
  resourceVersion: "103223"
  selfLink: /api/v1/namespaces/kube-system/configmaps/elastic1.v1
  uid: 5170941d-c87c-11e8-aa86-42010a840002
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Also, there is an open proposal on GitHub to add an additional label, such as a release owner, to the helm ls output.
Hope this helps with further investigation.

Related

Where does Helm store installation state?

When you run a helm install command, Helm outputs information like the revision of this installation.
Where does Helm store this information? (I assume it's in the cluster somewhere.)
It depends on the configuration.
I found the answer in the docs.
Helm 3 changed the default release information storage to Secrets in the namespace of the release.
https://helm.sh/docs/topics/advanced/#storage-backends
It goes on to say that you can configure it to instead store that state in a ConfigMap or in a PostgreSQL database.
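That switch is made with the HELM_DRIVER environment variable on the Helm client (values per the storage-backends page linked above); a minimal sketch, where ./my-chart and the SQL connection string are placeholders:
$ HELM_DRIVER=configmap helm install st ./my-chart --namespace my-namespace
$ HELM_DRIVER=sql HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://user:pass@host:5432/helm?sslmode=disable" helm list --namespace my-namespace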
So by default, kubectl get secret --namespace my-namespace will include an entry like
sh.helm.release.v1.st.v1 helm.sh/release.v1 1 13m
And kubectl describe secret sh.helm.release.v1.st.v1 will output something like
Name:         sh.helm.release.v1.st.v1
Namespace:    my-namespace
Labels:       modifiedAt=1613580504
              name=st
              owner=helm
              status=deployed
              version=1
Annotations:  <none>
Type:         helm.sh/release.v1
The storage changed in Helm 3 as follows:
- Releases are stored as Secrets by default (a PostgreSQL backend can also be used).
- Storage is in the namespace of the release.
- Naming follows sh.helm.release.v1.<release_name>.v<revision_version>.
- The Secret type is set to helm.sh/release.v1.
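If you want to see exactly what Helm stored there, the release payload inside the Secret can be decoded; a sketch using the example Secret above (the payload is base64-encoded, gzipped JSON on top of the usual base64 encoding of Secret data, and jq is only used for pretty-printing):
$ kubectl get secret sh.helm.release.v1.st.v1 --namespace my-namespace -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip -c | jq '.name, .version, .info.status'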
List installed helm Charts:
$ helm ls --all-namespaces
NAME         NAMESPACE  REVISION  UPDATED                                  STATUS    CHART              APP VERSION
chrt-foobar  default    2         2019-10-14 15:18:31.529138228 +0100 IST  deployed  chrt-foobar-0.1.0  1.16.0
chrt-test    test       1         2019-10-14 15:20:28.196338611 +0100 IST  deployed  chrt-test-0.1.0    1.16.0
List the Helm release history:
$ kubectl get secret -l "owner=helm" --all-namespaces
NAMESPACE  NAME                               TYPE                DATA  AGE
default    sh.helm.release.v1.chrt-foobar.v1  helm.sh/release.v1  1     3m2s
default    sh.helm.release.v1.chrt-foobar.v2  helm.sh/release.v1  1     2m40s
test       sh.helm.release.v1.chrt-test.v1    helm.sh/release.v1  1     43s
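Each of those Secrets corresponds to one revision, so the per-release view is simply helm history; a quick sketch for the first release above:
$ helm history chrt-foobar -n default
# should list revisions 1 and 2, matching the ...chrt-foobar.v1 and .v2 Secrets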
In Helm 2 there are two parts to Helm: the Helm client (helm) and the Helm server (Tiller), which was removed in Helm 3.
When we run helm init, it installs the Tiller part on the Kubernetes cluster. You can confirm the installation with:
kubectl get pods --namespace kube-system
# you should see the Tiller pod running
Where does Helm store this information? (I assume it's in the cluster somewhere.)
As for the question itself:
By default, Tiller stores release information in ConfigMaps in the namespace where it is running; newer versions also support an SQL storage backend for release information (see the storage-backends documentation).
To get the release information:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
Then check the release info from a specific ConfigMap:
kubectl get configmap myapp.v2 -n kube-system -o yaml
See also: how Helm uses ConfigMaps to store data.
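If you want to peek inside that ConfigMap, the release field holds a base64-encoded, gzipped protobuf payload, so only string fragments are human-readable; a rough sketch:
kubectl get configmap myapp.v2 -n kube-system -o jsonpath='{.data.release}' | base64 -d | gunzip -c | strings | head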

Cannot override `replicas` of a Deployment in k8s

I have run into the following problem.
First, I used Helm to create a release of nginx:
helm upgrade --install --namespace test nginx bitnami/nginx --debug
LAST DEPLOYED: Wed Jul 22 15:17:50 2020
NAMESPACE: test
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
nginx-server-block 1 2s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 2s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
nginx-6bcbfcd548-kdf4x 0/1 ContainerCreating 0 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.219.6.148 <pending> 80:30811/TCP,443:31260/TCP 2s
NOTES:
Get the NGINX URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace test -w nginx'
export SERVICE_IP=$(kubectl get svc --namespace test nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo "NGINX URL: http://$SERVICE_IP/"
K8s creates a Deployment with only 1 pod:
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-6.0.2
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Tiller
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: nginx
  replicas: 1
  ...
Second, I used kubectl to edit the Deployment and scale it up to 2 pods:
kubectl -n test edit deployment nginx
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-07-22T08:17:51Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-6.0.2
  name: nginx
  namespace: test
  resourceVersion: "128636260"
  selfLink: /apis/extensions/v1beta1/namespaces/test/deployments/nginx
  uid: d63b0f05-cbf3-11ea-99d5-42010a8a00f1
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  ...
After saving, I check the status and see the Deployment has scaled up to 2 pods:
kubectl -n test get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 7m50s
Finally, I used Helm to upgrade the release. I expected Helm to set the Deployment back to 1 pod, as in the first step, but the Deployment keeps replicas: 2 no matter what value I set in the chart's values.yaml.
I also tried the --recreate-pods option of the helm command:
helm upgrade --install --namespace test nginx bitnami/nginx --debug --recreate-pods
Release "nginx" has been upgraded. Happy Helming!
LAST DEPLOYED: Wed Jul 22 15:31:24 2020
NAMESPACE: test
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
nginx-server-block 1 13m
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/2 2 0 13m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
nginx-6bcbfcd548-b4bfs 0/1 ContainerCreating 0 1s
nginx-6bcbfcd548-bzhf2 0/1 ContainerCreating 0 1s
nginx-6bcbfcd548-kdf4x 0/1 Terminating 0 13m
nginx-6bcbfcd548-xfxbv 1/1 Terminating 0 6m16s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.219.6.148 34.82.120.134 80:30811/TCP,443:31260/TCP 13m
NOTES:
Get the NGINX URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace test -w nginx'
export SERVICE_IP=$(kubectl get svc --namespace test nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo "NGINX URL: http://$SERVICE_IP/"
Result: after I edit the replicas of the Deployment manually, I cannot use Helm to override the replicas value. I can still change the image and other fields; only replicas will not change.
I ran with --debug and Helm still renders the Deployment with replicas: 1:
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-6.0.2
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Tiller
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        helm.sh/chart: nginx-6.0.2
        app.kubernetes.io/instance: nginx
        app.kubernetes.io/managed-by: Tiller
    spec:
      containers:
        - name: nginx
          image: docker.io/bitnami/nginx:1.19.1-debian-10-r0
          imagePullPolicy: "IfNotPresent"
          ports:
            - name: http
              containerPort: 8080
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            tcpSocket:
              port: http
            timeoutSeconds: 5
          readinessProbe:
            initialDelaySeconds: 5
            periodSeconds: 5
            tcpSocket:
              port: http
            timeoutSeconds: 3
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: nginx-server-block-paths
              mountPath: /opt/bitnami/nginx/conf/server_blocks
      volumes:
        - name: nginx-server-block-paths
          configMap:
            name: nginx-server-block
            items:
              - key: server-blocks-paths.conf
                path: server-blocks-paths.conf
But the k8s Deployment keeps the manually edited value, replicas: 2.
As far as I know, the output of the helm command is the generated k8s YAML, so why can't I use Helm to override the replicas value in this case?
Thanks in advance!
P/S: I just want to understand the behavior here. Thanks.
Helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
From the official Helm documentation (Helm | Docs):
Helm 2 used a two-way strategic merge patch. During an upgrade, it compared the most recent chart's manifest against the proposed chart's manifest (the one supplied during helm upgrade). It compared the differences between these two charts to determine what changes needed to be applied to the resources in Kubernetes. If changes were applied to the cluster out-of-band (such as during a kubectl edit), those changes were not considered. This resulted in resources being unable to roll back to its previous state: because Helm only considered the last applied chart's manifest as its current state, if there were no changes in the chart's state, the live state was left unchanged.
This is improved in Helm v3: because Helm v3 removed Tiller, your values are applied exactly to the Kubernetes resources, and the values known to Helm and to Kubernetes stay consistent.
==> The result is that you will not hit this problem again if you use Helm version 3.
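You can see the Helm 2 behaviour described above directly: Helm's own record of the release still says one replica while the live Deployment says two. A sketch against the release from the question:
$ helm get manifest nginx | grep replicas
  replicas: 1
$ kubectl -n test get deployment nginx -o jsonpath='{.spec.replicas}'
2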
Please take a look at Supported Version Skew.
When a new version of Helm is released, it is compiled against a particular minor version of Kubernetes. For example, Helm 3.0.0 interacts with Kubernetes using the Kubernetes 1.16.2 client, so it is compatible with Kubernetes 1.16.
As of Helm 3, Helm is assumed to be compatible with n-3 versions of Kubernetes it was compiled against. Due to Kubernetes' changes between minor versions, Helm 2's support policy is slightly stricter, assuming to be compatible with n-1 versions of Kubernetes.
For example, if you are using a version of Helm 3 that was compiled against the Kubernetes 1.17 client APIs, then it should be safe to use with Kubernetes 1.17, 1.16, 1.15, and 1.14. If you are using a version of Helm 2 that was compiled against the Kubernetes 1.16 client APIs, then it should be safe to use with Kubernetes 1.16 and 1.15.
It is not recommended to use Helm with a version of Kubernetes that is newer than the version it was compiled against, as Helm does not make any forward compatibility guarantees.
If you choose to use Helm with a version of Kubernetes that it does not support, you are using it at your own risk.
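A quick way to check where you stand against that skew policy (a sketch; --short just trims the output):
$ helm version --short
$ kubectl version --short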
I have tested this behavior using k8s 1.17.9 with Helm v3.2, and all of the approaches for updating the deployment mentioned below work as expected.
helm upgrade --install nginx bitnami/nginx
helm fetch bitnami/nginx --untar (create a custom values.yaml, change the replicaCount parameter in it, and save it)
helm upgrade --install nginx ./nginx -f values.yaml
helm upgrade --install nginx ./nginx -f values.yaml --set replicaCount=2
Note: Values Files
values.yaml is the default, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
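For example, a sketch with the fetched chart from above, where values.yaml sets replicaCount: 2:
$ helm upgrade --install nginx ./nginx -f values.yaml --set replicaCount=3
# --set wins over the values file, so the Deployment is rendered with replicas: 3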
So my advice is to keep your tools up to date.
Note:
Helm 2 support plan.
For Helm 2, we will continue to accept bug fixes and fix any security issues that arise, but no new features will be accepted. All feature development will be moved over to Helm 3.
6 months after Helm 3's public release, Helm 2 will stop accepting bug fixes. Only security issues will be accepted.
12 months after Helm 3's public release, support for Helm 2 will formally end.
Please use the replicaCount field of the chart's values to manage replicas; I see it as an option here.
Also, let me know which Helm version you are using. There is a known bug where Helm does not upgrade the replicas; check this link:
https://github.com/helm/helm/issues/4654

helm init Error: error installing: deployments.extensions is forbidden when run inside gitlab runner

I have GitLab (11.8.1, self-hosted) connected to a self-hosted K8s cluster (1.13.4). There are 3 projects in GitLab, named shipment, authentication_service and shipment_mobile_service.
All projects use the same K8s configuration except for the project namespace.
The first project succeeds in installing Helm Tiller and the GitLab Runner from the GitLab UI.
The second and third projects only install Helm Tiller successfully; the GitLab Runner install fails with the following log in the install-runner pod:
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: cannot connect to Tiller
+ sleep 1s
+ echo 'Retrying (30)...'
+ helm repo add runner https://charts.gitlab.io
Retrying (30)...
"runner" has been added to your repositories
+ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "runner" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
+ helm upgrade runner runner/gitlab-runner --install --reset-values --tls --tls-ca-cert /data/helm/runner/config/ca.pem --tls-cert /data/helm/runner/config/cert.pem --tls-key /data/helm/runner/config/key.pem --version 0.2.0 --set 'rbac.create=true,rbac.enabled=true' --namespace gitlab-managed-apps -f /data/helm/runner/config/values.yaml
Error: UPGRADE FAILED: remote error: tls: bad certificate
I did not configure gitlab-ci with the K8s cluster on the first project, only on the second and third. The weird thing is that with the same helm-data (differing only by name), the second project deploys successfully but the third does not.
And because there is only one GitLab runner available (from the first project), I assigned both the 2nd and 3rd projects to this runner.
I use this gitlab-ci.yml for both projects, with only the name in the helm upgrade command differing:
stages:
  - test
  - build
  - deploy

variables:
  CONTAINER_IMAGE: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:${CI_PIPELINE_ID}
  CONTAINER_IMAGE_LATEST: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:latest
  CI_REGISTRY: dockerhub.linhnh.vn
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375 # required when use dind

# test phase and build phase using docker:dind success

deploy_beta:
  stage: deploy
  image: alpine/helm
  script:
    - echo "Deploy test start ..."
    - helm init --upgrade
    - helm upgrade --install --force shipment-mobile-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
    - echo "Deploy test completed!"
  environment:
    name: staging
  tags: ["kubernetes_beta"]
  only:
    - master
The helm-data chart is very simple, so I don't think I need to paste it here.
Here is the log when the second project deploys successfully:
Running with gitlab-runner 11.7.0 (8bb608ff)
on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image linkyard/docker-helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Running on runner-xrmajzy2-project-15-concurrent-0x2bms via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning into '/root/authentication_service'...
Cloning repository...
Checking out 5068bf1f as master...
Skipping Git submodules setup
$ echo "Deploy start ...."
Deploy start ....
$ helm init --upgrade --dry-run --debug
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.13.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
...
$ helm upgrade --install --force authentication-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
WARNING: Namespace "gitlab-managed-apps" doesn't match with previous. Release will be deployed to default
Release "authentication-service" has been upgraded. Happy Helming!
LAST DEPLOYED: Tue Mar 26 05:27:51 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
authentication-service 1/1 1 1 17d
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
authentication-service-966c997c4-mglrb 0/1 Pending 0 0s
authentication-service-966c997c4-wzrkj 1/1 Terminating 0 49m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
authentication-service NodePort 10.108.64.133 <none> 80:31340/TCP 17d
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services authentication-service)
echo http://$NODE_IP:$NODE_PORT
$ echo "Deploy completed"
Deploy completed
Job succeeded
And the third project fails:
Running with gitlab-runner 11.7.0 (8bb608ff)
on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image alpine/helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Running on runner-xrmajzy2-project-18-concurrent-0bv4bx via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning repository...
Cloning into '/canhnv5/shipmentmobile'...
Checking out 278cbd3d as master...
Skipping Git submodules setup
$ echo "Deploy test start ..."
Deploy test start ...
$ helm init --upgrade
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Error: error installing: deployments.extensions is forbidden: User "system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account" cannot create resource "deployments" in API group "extensions" in the namespace "kube-system"
ERROR: Job failed: command terminated with exit code 1
I can see they use the same runner XrmajZY2 that I installed in the first project, and the same k8s namespace gitlab-managed-apps.
I think they use privileged mode, but I don't know why the second project gets the right permissions while the third does not. Should I create the user system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account and bind it to cluster-admin?
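One way to confirm that this is purely an RBAC problem is to ask the API server what that service account may do; a sketch using the account from the error message above:
$ kubectl auth can-i create deployments.extensions -n kube-system --as=system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account
no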
Thanks to #cookiedough's instructions, I did these steps:
- Fork canhv5/shipment-mobile-service into my root account as root/shipment-mobile-service.
- Delete the gitlab-managed-apps namespace (with nothing inside), then run kubectl delete -f gitlab-admin-service-account.yaml.
- Apply this file, then get the token as #cookiedough describes.
- Back in root/shipment-mobile-service in GitLab, remove the previous cluster, add the cluster back with the new token, then install Helm Tiller and the GitLab Runner from the GitLab UI.
- Re-run the job and the magic happens. But I am still unclear why canhv5/shipment-mobile-service still gets the same error.
Before you do the following, delete the gitlab-managed-apps namespace:
kubectl delete namespace gitlab-managed-apps
Following the GitLab tutorial, you will need to create a ServiceAccount and ClusterRoleBinding for GitLab, and you will need the secret created as a result to connect your project to your cluster.
Create a file called gitlab-admin-service-account.yaml with contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system
Apply the service account and cluster role binding to your cluster:
kubectl apply -f gitlab-admin-service-account.yaml
Output:
serviceaccount "gitlab-admin" created
clusterrolebinding "gitlab-admin" created
Retrieve the token for the gitlab-admin service account:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
Copy the <authentication_token> value from the output:
Name:         gitlab-admin-token-b5zv4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=gitlab-admin
              kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      <authentication_token>
Follow this tutorial to connect your cluster to the project, otherwise you will have to stitch up the same thing along the way with a lot more pain!

How does helm rollback work in kubernetes?

While going through the Helm documentation, I came across the rollback feature.
It's a cool feature, but I have some doubts about its implementation.
How is it implemented? If it uses some datastore to preserve old release configs, which datastore is it?
Is there any upper limit on consecutive rollbacks? If so, up to how many rollbacks are supported? Can we change this limit?
As the documentation says, it rolls back the entire release. Helm generally stores release metadata in its own ConfigMaps. Every time you release changes, it appends the new revision to the existing data. Your changes can include a new deployment image, new ConfigMaps, storage, etc. On rollback, everything goes back to the previous version.
Helm 3 changed the default release information storage to Secrets in the namespace of the release. Following helm documentation should provide some of the details in this regard:
https://helm.sh/docs/topics/advanced/#storage-backends
For example (only for illustration purpose here) -
$ helm install test-release-1 .
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:27:53 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
We can now see history and secret information for above release as follows:
$ helm history test-release-1
REVISION  UPDATED                   STATUS    CHART                             APP VERSION  DESCRIPTION
1         Sun Feb 20 13:27:53 2022  deployed  fleetman-helm-chart-test-1-0.1.0  1.16.0       Install complete
$ kubectl get secrets
NAME                                  TYPE                DATA  AGE
sh.helm.release.v1.test-release-1.v1  helm.sh/release.v1  1     41s
$ kubectl describe secrets sh.helm.release.v1.test-release-1.v1
Name: sh.helm.release.v1.test-release-1.v1
Namespace: default
Labels: modifiedAt=1645363673
name=test-release-1
owner=helm
status=deployed
version=1
Annotations: <none>
Type: helm.sh/release.v1
Data
====
release: 1924 bytes
Now, it is upgraded to a new version as follows:
$ helm upgrade test-release-1 .
Release "test-release-1" has been upgraded. Happy Helming!
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:30:26 2022
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
Following is the updated information in Kubernetes Secrets:
$ kubectl get secrets
NAME                                  TYPE                DATA  AGE
sh.helm.release.v1.test-release-1.v1  helm.sh/release.v1  1     2m53s
sh.helm.release.v1.test-release-1.v2  helm.sh/release.v1  1     20s

kubernetes rolling update using helm

I am new to Helm. I have installed Minikube and Helm on my Windows system. I am able to create pods using Helm and see the Deployment, Pods and ReplicaSet in the dashboard.
I want to do a rolling update using Helm. Guide me on how to do a rolling update in K8s using Helm.
Creating a Tomcat pod using Helm:
helm create hello-world
Changed the image name and deployment name in deployment.yaml:
kind: Deployment
metadata:
  name: mytomcat
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: tomcat
Install
helm install hello-world
NAME: whopping-dolphin
LAST DEPLOYED: Wed Aug 30 21:38:42 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
whopping-dolphin-hello-world 10.0.0.178 <none> 80/TCP 0s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mytomcat 1 1 1 0 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=hello-world,release=whopping-dolphin" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
I see the mytomcat deployment and the pod mytomcat-2768693561-hd2hd in the dashboard.
Now I would like a command that deletes my current deployment and pod in k8s and creates a new deployment and pod.
It would be helpful to get sample commands and YAML.
The command below works fine for a rolling update.
The first time it installs; the next time it upgrades:
helm upgrade --install tom-release --set appName=mytomcatcon hello-world
tom-release is my release name, and I am passing runtime values to the Helm chart using the --set option.
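A subsequent rolling update can then be driven the same way by changing a value that feeds the pod template, such as the image tag; a sketch assuming the chart exposes image.tag in its values (as the stock helm create scaffold does) and <new-tag> is a placeholder:
helm upgrade tom-release hello-world --set image.tag=<new-tag>
kubectl rollout status deployment/mytomcat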
